Nov 25 10:12:24 np0005535469 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 10:12:24 np0005535469 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 10:12:24 np0005535469 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 10:12:24 np0005535469 kernel: BIOS-provided physical RAM map:
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 10:12:24 np0005535469 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 10:12:24 np0005535469 kernel: NX (Execute Disable) protection: active
Nov 25 10:12:24 np0005535469 kernel: APIC: Static calls initialized
Nov 25 10:12:24 np0005535469 kernel: SMBIOS 2.8 present.
Nov 25 10:12:24 np0005535469 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 10:12:24 np0005535469 kernel: Hypervisor detected: KVM
Nov 25 10:12:24 np0005535469 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 10:12:24 np0005535469 kernel: kvm-clock: using sched offset of 5109129684 cycles
Nov 25 10:12:24 np0005535469 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 10:12:24 np0005535469 kernel: tsc: Detected 2799.998 MHz processor
Nov 25 10:12:24 np0005535469 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 10:12:24 np0005535469 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 10:12:24 np0005535469 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 10:12:24 np0005535469 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 10:12:24 np0005535469 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 10:12:24 np0005535469 kernel: Using GB pages for direct mapping
Nov 25 10:12:24 np0005535469 kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 10:12:24 np0005535469 kernel: ACPI: Early table checksum verification disabled
Nov 25 10:12:24 np0005535469 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 10:12:24 np0005535469 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 10:12:24 np0005535469 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 10:12:24 np0005535469 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 10:12:24 np0005535469 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 10:12:24 np0005535469 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 10:12:24 np0005535469 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 10:12:24 np0005535469 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 10:12:24 np0005535469 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 10:12:24 np0005535469 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 10:12:24 np0005535469 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 10:12:24 np0005535469 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 10:12:24 np0005535469 kernel: No NUMA configuration found
Nov 25 10:12:24 np0005535469 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 10:12:24 np0005535469 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 25 10:12:24 np0005535469 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 25 10:12:24 np0005535469 kernel: Zone ranges:
Nov 25 10:12:24 np0005535469 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 10:12:24 np0005535469 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 10:12:24 np0005535469 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 10:12:24 np0005535469 kernel:  Device   empty
Nov 25 10:12:24 np0005535469 kernel: Movable zone start for each node
Nov 25 10:12:24 np0005535469 kernel: Early memory node ranges
Nov 25 10:12:24 np0005535469 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 10:12:24 np0005535469 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 10:12:24 np0005535469 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 10:12:24 np0005535469 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 10:12:24 np0005535469 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 10:12:24 np0005535469 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 10:12:24 np0005535469 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 10:12:24 np0005535469 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 10:12:24 np0005535469 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 10:12:24 np0005535469 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 10:12:24 np0005535469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 10:12:24 np0005535469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 10:12:24 np0005535469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 10:12:24 np0005535469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 10:12:24 np0005535469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 10:12:24 np0005535469 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 10:12:24 np0005535469 kernel: TSC deadline timer available
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Max. logical packages:   8
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Max. logical dies:       8
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Max. dies per package:   1
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Max. threads per core:   1
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Num. cores per package:     1
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Num. threads per package:   1
Nov 25 10:12:24 np0005535469 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 10:12:24 np0005535469 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 10:12:24 np0005535469 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 10:12:24 np0005535469 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 10:12:24 np0005535469 kernel: Booting paravirtualized kernel on KVM
Nov 25 10:12:24 np0005535469 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 10:12:24 np0005535469 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 10:12:24 np0005535469 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 10:12:24 np0005535469 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 10:12:24 np0005535469 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 10:12:24 np0005535469 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 10:12:24 np0005535469 kernel: random: crng init done
Nov 25 10:12:24 np0005535469 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: Fallback order for Node 0: 0 
Nov 25 10:12:24 np0005535469 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 10:12:24 np0005535469 kernel: Policy zone: Normal
Nov 25 10:12:24 np0005535469 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 10:12:24 np0005535469 kernel: software IO TLB: area num 8.
Nov 25 10:12:24 np0005535469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 10:12:24 np0005535469 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 10:12:24 np0005535469 kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 10:12:24 np0005535469 kernel: Dynamic Preempt: voluntary
Nov 25 10:12:24 np0005535469 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 10:12:24 np0005535469 kernel: rcu: 	RCU event tracing is enabled.
Nov 25 10:12:24 np0005535469 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 10:12:24 np0005535469 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 25 10:12:24 np0005535469 kernel: 	Rude variant of Tasks RCU enabled.
Nov 25 10:12:24 np0005535469 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 25 10:12:24 np0005535469 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 10:12:24 np0005535469 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 10:12:24 np0005535469 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 10:12:24 np0005535469 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 10:12:24 np0005535469 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 10:12:24 np0005535469 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 10:12:24 np0005535469 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 10:12:24 np0005535469 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 10:12:24 np0005535469 kernel: Console: colour VGA+ 80x25
Nov 25 10:12:24 np0005535469 kernel: printk: console [ttyS0] enabled
Nov 25 10:12:24 np0005535469 kernel: ACPI: Core revision 20230331
Nov 25 10:12:24 np0005535469 kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 10:12:24 np0005535469 kernel: x2apic enabled
Nov 25 10:12:24 np0005535469 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 10:12:24 np0005535469 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 10:12:24 np0005535469 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 10:12:24 np0005535469 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 10:12:24 np0005535469 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 10:12:24 np0005535469 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 10:12:24 np0005535469 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 10:12:24 np0005535469 kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 10:12:24 np0005535469 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 10:12:24 np0005535469 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 10:12:24 np0005535469 kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 10:12:24 np0005535469 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 10:12:24 np0005535469 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 10:12:24 np0005535469 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 10:12:24 np0005535469 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 10:12:24 np0005535469 kernel: x86/bugs: return thunk changed
Nov 25 10:12:24 np0005535469 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 10:12:24 np0005535469 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 10:12:24 np0005535469 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 10:12:24 np0005535469 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 10:12:24 np0005535469 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 10:12:24 np0005535469 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 10:12:24 np0005535469 kernel: Freeing SMP alternatives memory: 40K
Nov 25 10:12:24 np0005535469 kernel: pid_max: default: 32768 minimum: 301
Nov 25 10:12:24 np0005535469 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 10:12:24 np0005535469 kernel: landlock: Up and running.
Nov 25 10:12:24 np0005535469 kernel: Yama: becoming mindful.
Nov 25 10:12:24 np0005535469 kernel: SELinux:  Initializing.
Nov 25 10:12:24 np0005535469 kernel: LSM support for eBPF active
Nov 25 10:12:24 np0005535469 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 10:12:24 np0005535469 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 10:12:24 np0005535469 kernel: ... version:                0
Nov 25 10:12:24 np0005535469 kernel: ... bit width:              48
Nov 25 10:12:24 np0005535469 kernel: ... generic registers:      6
Nov 25 10:12:24 np0005535469 kernel: ... value mask:             0000ffffffffffff
Nov 25 10:12:24 np0005535469 kernel: ... max period:             00007fffffffffff
Nov 25 10:12:24 np0005535469 kernel: ... fixed-purpose events:   0
Nov 25 10:12:24 np0005535469 kernel: ... event mask:             000000000000003f
Nov 25 10:12:24 np0005535469 kernel: signal: max sigframe size: 1776
Nov 25 10:12:24 np0005535469 kernel: rcu: Hierarchical SRCU implementation.
Nov 25 10:12:24 np0005535469 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 25 10:12:24 np0005535469 kernel: smp: Bringing up secondary CPUs ...
Nov 25 10:12:24 np0005535469 kernel: smpboot: x86: Booting SMP configuration:
Nov 25 10:12:24 np0005535469 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 10:12:24 np0005535469 kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 10:12:24 np0005535469 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 10:12:24 np0005535469 kernel: node 0 deferred pages initialised in 8ms
Nov 25 10:12:24 np0005535469 kernel: Memory: 7776412K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605564K reserved, 0K cma-reserved)
Nov 25 10:12:24 np0005535469 kernel: devtmpfs: initialized
Nov 25 10:12:24 np0005535469 kernel: x86/mm: Memory block size: 128MB
Nov 25 10:12:24 np0005535469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 10:12:24 np0005535469 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 10:12:24 np0005535469 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 10:12:24 np0005535469 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 10:12:24 np0005535469 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 10:12:24 np0005535469 kernel: audit: initializing netlink subsys (disabled)
Nov 25 10:12:24 np0005535469 kernel: audit: type=2000 audit(1764083543.228:1): state=initialized audit_enabled=0 res=1
Nov 25 10:12:24 np0005535469 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 10:12:24 np0005535469 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 10:12:24 np0005535469 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 10:12:24 np0005535469 kernel: cpuidle: using governor menu
Nov 25 10:12:24 np0005535469 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 10:12:24 np0005535469 kernel: PCI: Using configuration type 1 for base access
Nov 25 10:12:24 np0005535469 kernel: PCI: Using configuration type 1 for extended access
Nov 25 10:12:24 np0005535469 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 10:12:24 np0005535469 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 10:12:24 np0005535469 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 10:12:24 np0005535469 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 10:12:24 np0005535469 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 10:12:24 np0005535469 kernel: Demotion targets for Node 0: null
Nov 25 10:12:24 np0005535469 kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 10:12:24 np0005535469 kernel: ACPI: Added _OSI(Module Device)
Nov 25 10:12:24 np0005535469 kernel: ACPI: Added _OSI(Processor Device)
Nov 25 10:12:24 np0005535469 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 10:12:24 np0005535469 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 10:12:24 np0005535469 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 10:12:24 np0005535469 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 10:12:24 np0005535469 kernel: ACPI: Interpreter enabled
Nov 25 10:12:24 np0005535469 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 10:12:24 np0005535469 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 10:12:24 np0005535469 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 10:12:24 np0005535469 kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 10:12:24 np0005535469 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 10:12:24 np0005535469 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 10:12:24 np0005535469 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [3] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [4] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [5] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [6] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [7] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [8] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [9] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [10] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [11] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [12] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [13] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [14] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [15] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [16] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [17] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [18] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [19] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [20] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [21] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [22] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [23] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [24] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [25] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [26] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [27] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [28] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [29] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [30] registered
Nov 25 10:12:24 np0005535469 kernel: acpiphp: Slot [31] registered
Nov 25 10:12:24 np0005535469 kernel: PCI host bridge to bus 0000:00
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 10:12:24 np0005535469 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 10:12:24 np0005535469 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 10:12:24 np0005535469 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 10:12:24 np0005535469 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 10:12:24 np0005535469 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 10:12:24 np0005535469 kernel: iommu: Default domain type: Translated
Nov 25 10:12:24 np0005535469 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 10:12:24 np0005535469 kernel: SCSI subsystem initialized
Nov 25 10:12:24 np0005535469 kernel: ACPI: bus type USB registered
Nov 25 10:12:24 np0005535469 kernel: usbcore: registered new interface driver usbfs
Nov 25 10:12:24 np0005535469 kernel: usbcore: registered new interface driver hub
Nov 25 10:12:24 np0005535469 kernel: usbcore: registered new device driver usb
Nov 25 10:12:24 np0005535469 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 10:12:24 np0005535469 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 10:12:24 np0005535469 kernel: PTP clock support registered
Nov 25 10:12:24 np0005535469 kernel: EDAC MC: Ver: 3.0.0
Nov 25 10:12:24 np0005535469 kernel: NetLabel: Initializing
Nov 25 10:12:24 np0005535469 kernel: NetLabel:  domain hash size = 128
Nov 25 10:12:24 np0005535469 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 10:12:24 np0005535469 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 10:12:24 np0005535469 kernel: PCI: Using ACPI for IRQ routing
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 10:12:24 np0005535469 kernel: vgaarb: loaded
Nov 25 10:12:24 np0005535469 kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 10:12:24 np0005535469 kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 10:12:24 np0005535469 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 10:12:24 np0005535469 kernel: pnp: PnP ACPI init
Nov 25 10:12:24 np0005535469 kernel: pnp: PnP ACPI: found 5 devices
Nov 25 10:12:24 np0005535469 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_INET protocol family
Nov 25 10:12:24 np0005535469 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 10:12:24 np0005535469 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_XDP protocol family
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 10:12:24 np0005535469 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 10:12:24 np0005535469 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 10:12:24 np0005535469 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 70934 usecs
Nov 25 10:12:24 np0005535469 kernel: PCI: CLS 0 bytes, default 64
Nov 25 10:12:24 np0005535469 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 10:12:24 np0005535469 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 10:12:24 np0005535469 kernel: ACPI: bus type thunderbolt registered
Nov 25 10:12:24 np0005535469 kernel: Trying to unpack rootfs image as initramfs...
Nov 25 10:12:24 np0005535469 kernel: Initialise system trusted keyrings
Nov 25 10:12:24 np0005535469 kernel: Key type blacklist registered
Nov 25 10:12:24 np0005535469 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 10:12:24 np0005535469 kernel: zbud: loaded
Nov 25 10:12:24 np0005535469 kernel: integrity: Platform Keyring initialized
Nov 25 10:12:24 np0005535469 kernel: integrity: Machine keyring initialized
Nov 25 10:12:24 np0005535469 kernel: Freeing initrd memory: 75160K
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_ALG protocol family
Nov 25 10:12:24 np0005535469 kernel: xor: automatically using best checksumming function   avx       
Nov 25 10:12:24 np0005535469 kernel: Key type asymmetric registered
Nov 25 10:12:24 np0005535469 kernel: Asymmetric key parser 'x509' registered
Nov 25 10:12:24 np0005535469 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 10:12:24 np0005535469 kernel: io scheduler mq-deadline registered
Nov 25 10:12:24 np0005535469 kernel: io scheduler kyber registered
Nov 25 10:12:24 np0005535469 kernel: io scheduler bfq registered
Nov 25 10:12:24 np0005535469 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 10:12:24 np0005535469 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 10:12:24 np0005535469 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 10:12:24 np0005535469 kernel: ACPI: button: Power Button [PWRF]
Nov 25 10:12:24 np0005535469 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 10:12:24 np0005535469 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 10:12:24 np0005535469 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 10:12:24 np0005535469 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 10:12:24 np0005535469 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 10:12:24 np0005535469 kernel: Non-volatile memory driver v1.3
Nov 25 10:12:24 np0005535469 kernel: rdac: device handler registered
Nov 25 10:12:24 np0005535469 kernel: hp_sw: device handler registered
Nov 25 10:12:24 np0005535469 kernel: emc: device handler registered
Nov 25 10:12:24 np0005535469 kernel: alua: device handler registered
Nov 25 10:12:24 np0005535469 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 10:12:24 np0005535469 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 10:12:24 np0005535469 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 10:12:24 np0005535469 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 10:12:24 np0005535469 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 10:12:24 np0005535469 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 10:12:24 np0005535469 kernel: usb usb1: Product: UHCI Host Controller
Nov 25 10:12:24 np0005535469 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 10:12:24 np0005535469 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 10:12:24 np0005535469 kernel: hub 1-0:1.0: USB hub found
Nov 25 10:12:24 np0005535469 kernel: hub 1-0:1.0: 2 ports detected
Nov 25 10:12:24 np0005535469 kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 10:12:24 np0005535469 kernel: usbserial: USB Serial support registered for generic
Nov 25 10:12:24 np0005535469 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 10:12:24 np0005535469 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 10:12:24 np0005535469 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 10:12:24 np0005535469 kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 10:12:24 np0005535469 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 10:12:24 np0005535469 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 10:12:24 np0005535469 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 10:12:24 np0005535469 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 10:12:24 np0005535469 kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 10:12:24 np0005535469 kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T15:12:23 UTC (1764083543)
Nov 25 10:12:24 np0005535469 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 10:12:24 np0005535469 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 10:12:24 np0005535469 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 10:12:24 np0005535469 kernel: usbcore: registered new interface driver usbhid
Nov 25 10:12:24 np0005535469 kernel: usbhid: USB HID core driver
Nov 25 10:12:24 np0005535469 kernel: drop_monitor: Initializing network drop monitor service
Nov 25 10:12:24 np0005535469 kernel: Initializing XFRM netlink socket
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_INET6 protocol family
Nov 25 10:12:24 np0005535469 kernel: Segment Routing with IPv6
Nov 25 10:12:24 np0005535469 kernel: NET: Registered PF_PACKET protocol family
Nov 25 10:12:24 np0005535469 kernel: mpls_gso: MPLS GSO support
Nov 25 10:12:24 np0005535469 kernel: IPI shorthand broadcast: enabled
Nov 25 10:12:24 np0005535469 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 10:12:24 np0005535469 kernel: AES CTR mode by8 optimization enabled
Nov 25 10:12:24 np0005535469 kernel: sched_clock: Marking stable (1210007323, 153660005)->(1438978824, -75311496)
Nov 25 10:12:24 np0005535469 kernel: registered taskstats version 1
Nov 25 10:12:24 np0005535469 kernel: Loading compiled-in X.509 certificates
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 10:12:24 np0005535469 kernel: Demotion targets for Node 0: null
Nov 25 10:12:24 np0005535469 kernel: page_owner is disabled
Nov 25 10:12:24 np0005535469 kernel: Key type .fscrypt registered
Nov 25 10:12:24 np0005535469 kernel: Key type fscrypt-provisioning registered
Nov 25 10:12:24 np0005535469 kernel: Key type big_key registered
Nov 25 10:12:24 np0005535469 kernel: Key type encrypted registered
Nov 25 10:12:24 np0005535469 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 10:12:24 np0005535469 kernel: Loading compiled-in module X.509 certificates
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 10:12:24 np0005535469 kernel: ima: Allocated hash algorithm: sha256
Nov 25 10:12:24 np0005535469 kernel: ima: No architecture policies found
Nov 25 10:12:24 np0005535469 kernel: evm: Initialising EVM extended attributes:
Nov 25 10:12:24 np0005535469 kernel: evm: security.selinux
Nov 25 10:12:24 np0005535469 kernel: evm: security.SMACK64 (disabled)
Nov 25 10:12:24 np0005535469 kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 10:12:24 np0005535469 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 10:12:24 np0005535469 kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 10:12:24 np0005535469 kernel: evm: security.apparmor (disabled)
Nov 25 10:12:24 np0005535469 kernel: evm: security.ima
Nov 25 10:12:24 np0005535469 kernel: evm: security.capability
Nov 25 10:12:24 np0005535469 kernel: evm: HMAC attrs: 0x1
Nov 25 10:12:24 np0005535469 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 10:12:24 np0005535469 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 10:12:24 np0005535469 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 10:12:24 np0005535469 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 10:12:24 np0005535469 kernel: usb 1-1: Manufacturer: QEMU
Nov 25 10:12:24 np0005535469 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 10:12:24 np0005535469 kernel: Running certificate verification RSA selftest
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 10:12:24 np0005535469 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 10:12:24 np0005535469 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 10:12:24 np0005535469 kernel: Running certificate verification ECDSA selftest
Nov 25 10:12:24 np0005535469 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 10:12:24 np0005535469 kernel: clk: Disabling unused clocks
Nov 25 10:12:24 np0005535469 kernel: Freeing unused decrypted memory: 2028K
Nov 25 10:12:24 np0005535469 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 10:12:24 np0005535469 kernel: Write protecting the kernel read-only data: 30720k
Nov 25 10:12:24 np0005535469 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 10:12:24 np0005535469 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 10:12:24 np0005535469 kernel: Run /init as init process
Nov 25 10:12:24 np0005535469 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 10:12:24 np0005535469 systemd: Detected virtualization kvm.
Nov 25 10:12:24 np0005535469 systemd: Detected architecture x86-64.
Nov 25 10:12:24 np0005535469 systemd: Running in initrd.
Nov 25 10:12:24 np0005535469 systemd: No hostname configured, using default hostname.
Nov 25 10:12:24 np0005535469 systemd: Hostname set to <localhost>.
Nov 25 10:12:24 np0005535469 systemd: Initializing machine ID from VM UUID.
Nov 25 10:12:24 np0005535469 systemd: Queued start job for default target Initrd Default Target.
Nov 25 10:12:24 np0005535469 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 10:12:24 np0005535469 systemd: Reached target Local Encrypted Volumes.
Nov 25 10:12:24 np0005535469 systemd: Reached target Initrd /usr File System.
Nov 25 10:12:24 np0005535469 systemd: Reached target Local File Systems.
Nov 25 10:12:24 np0005535469 systemd: Reached target Path Units.
Nov 25 10:12:24 np0005535469 systemd: Reached target Slice Units.
Nov 25 10:12:24 np0005535469 systemd: Reached target Swaps.
Nov 25 10:12:24 np0005535469 systemd: Reached target Timer Units.
Nov 25 10:12:24 np0005535469 systemd: Listening on D-Bus System Message Bus Socket.
Nov 25 10:12:24 np0005535469 systemd: Listening on Journal Socket (/dev/log).
Nov 25 10:12:24 np0005535469 systemd: Listening on Journal Socket.
Nov 25 10:12:24 np0005535469 systemd: Listening on udev Control Socket.
Nov 25 10:12:24 np0005535469 systemd: Listening on udev Kernel Socket.
Nov 25 10:12:24 np0005535469 systemd: Reached target Socket Units.
Nov 25 10:12:24 np0005535469 systemd: Starting Create List of Static Device Nodes...
Nov 25 10:12:24 np0005535469 systemd: Starting Journal Service...
Nov 25 10:12:24 np0005535469 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 10:12:24 np0005535469 systemd: Starting Apply Kernel Variables...
Nov 25 10:12:24 np0005535469 systemd: Starting Create System Users...
Nov 25 10:12:24 np0005535469 systemd: Starting Setup Virtual Console...
Nov 25 10:12:24 np0005535469 systemd: Finished Create List of Static Device Nodes.
Nov 25 10:12:24 np0005535469 systemd: Finished Apply Kernel Variables.
Nov 25 10:12:24 np0005535469 systemd: Finished Create System Users.
Nov 25 10:12:24 np0005535469 systemd-journald[305]: Journal started
Nov 25 10:12:24 np0005535469 systemd-journald[305]: Runtime Journal (/run/log/journal/3ad80417845649f6921921501d8909bb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 10:12:24 np0005535469 systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 25 10:12:24 np0005535469 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 25 10:12:24 np0005535469 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 10:12:24 np0005535469 systemd: Started Journal Service.
Nov 25 10:12:24 np0005535469 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 10:12:24 np0005535469 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 10:12:24 np0005535469 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 10:12:24 np0005535469 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 10:12:24 np0005535469 systemd[1]: Finished Setup Virtual Console.
Nov 25 10:12:24 np0005535469 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 10:12:24 np0005535469 systemd[1]: Starting dracut cmdline hook...
Nov 25 10:12:24 np0005535469 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 10:12:24 np0005535469 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 10:12:24 np0005535469 systemd[1]: Finished dracut cmdline hook.
Nov 25 10:12:24 np0005535469 systemd[1]: Starting dracut pre-udev hook...
Nov 25 10:12:24 np0005535469 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 10:12:24 np0005535469 kernel: device-mapper: uevent: version 1.0.3
Nov 25 10:12:24 np0005535469 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 10:12:24 np0005535469 kernel: RPC: Registered named UNIX socket transport module.
Nov 25 10:12:24 np0005535469 kernel: RPC: Registered udp transport module.
Nov 25 10:12:24 np0005535469 kernel: RPC: Registered tcp transport module.
Nov 25 10:12:24 np0005535469 kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 10:12:24 np0005535469 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 10:12:24 np0005535469 rpc.statd[443]: Version 2.5.4 starting
Nov 25 10:12:25 np0005535469 rpc.statd[443]: Initializing NSM state
Nov 25 10:12:25 np0005535469 rpc.idmapd[448]: Setting log level to 0
Nov 25 10:12:25 np0005535469 systemd[1]: Finished dracut pre-udev hook.
Nov 25 10:12:25 np0005535469 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 10:12:25 np0005535469 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 10:12:25 np0005535469 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 10:12:25 np0005535469 systemd[1]: Starting dracut pre-trigger hook...
Nov 25 10:12:25 np0005535469 systemd[1]: Finished dracut pre-trigger hook.
Nov 25 10:12:25 np0005535469 systemd[1]: Starting Coldplug All udev Devices...
Nov 25 10:12:25 np0005535469 systemd[1]: Created slice Slice /system/modprobe.
Nov 25 10:12:25 np0005535469 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 10:12:25 np0005535469 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 10:12:25 np0005535469 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 10:12:25 np0005535469 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 10:12:25 np0005535469 systemd[1]: Mounting Kernel Configuration File System...
Nov 25 10:12:25 np0005535469 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target Network.
Nov 25 10:12:25 np0005535469 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 10:12:25 np0005535469 systemd[1]: Starting dracut initqueue hook...
Nov 25 10:12:25 np0005535469 systemd[1]: Mounted Kernel Configuration File System.
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target System Initialization.
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target Basic System.
Nov 25 10:12:25 np0005535469 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 10:12:25 np0005535469 systemd-udevd[488]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:12:25 np0005535469 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 10:12:25 np0005535469 kernel: scsi host0: ata_piix
Nov 25 10:12:25 np0005535469 kernel: scsi host1: ata_piix
Nov 25 10:12:25 np0005535469 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 10:12:25 np0005535469 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 10:12:25 np0005535469 kernel: vda: vda1
Nov 25 10:12:25 np0005535469 kernel: ata1: found unknown device (class 0)
Nov 25 10:12:25 np0005535469 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 10:12:25 np0005535469 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 10:12:25 np0005535469 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 10:12:25 np0005535469 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 10:12:25 np0005535469 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 10:12:25 np0005535469 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target Initrd Root Device.
Nov 25 10:12:25 np0005535469 systemd[1]: Finished dracut initqueue hook.
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 10:12:25 np0005535469 systemd[1]: Reached target Remote File Systems.
Nov 25 10:12:25 np0005535469 systemd[1]: Starting dracut pre-mount hook...
Nov 25 10:12:25 np0005535469 systemd[1]: Finished dracut pre-mount hook.
Nov 25 10:12:25 np0005535469 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 10:12:25 np0005535469 systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 10:12:25 np0005535469 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 10:12:25 np0005535469 systemd[1]: Mounting /sysroot...
Nov 25 10:12:26 np0005535469 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 10:12:26 np0005535469 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 10:12:26 np0005535469 kernel: XFS (vda1): Ending clean mount
Nov 25 10:12:26 np0005535469 systemd[1]: Mounted /sysroot.
Nov 25 10:12:26 np0005535469 systemd[1]: Reached target Initrd Root File System.
Nov 25 10:12:26 np0005535469 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 10:12:26 np0005535469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 10:12:26 np0005535469 systemd[1]: Reached target Initrd File Systems.
Nov 25 10:12:26 np0005535469 systemd[1]: Reached target Initrd Default Target.
Nov 25 10:12:26 np0005535469 systemd[1]: Starting dracut mount hook...
Nov 25 10:12:26 np0005535469 systemd[1]: Finished dracut mount hook.
Nov 25 10:12:26 np0005535469 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 10:12:26 np0005535469 rpc.idmapd[448]: exiting on signal 15
Nov 25 10:12:26 np0005535469 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 10:12:26 np0005535469 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Network.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Timer Units.
Nov 25 10:12:26 np0005535469 systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Initrd Default Target.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Basic System.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Initrd Root Device.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Initrd /usr File System.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Path Units.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Remote File Systems.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Slice Units.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Socket Units.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target System Initialization.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Local File Systems.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Swaps.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut mount hook.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut pre-mount hook.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut initqueue hook.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Setup Virtual Console.
Nov 25 10:12:26 np0005535469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-udevd.service: Consumed 1.074s CPU time.
Nov 25 10:12:26 np0005535469 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Closed udev Control Socket.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Closed udev Kernel Socket.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut pre-udev hook.
Nov 25 10:12:26 np0005535469 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped dracut cmdline hook.
Nov 25 10:12:26 np0005535469 systemd[1]: Starting Cleanup udev Database...
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 10:12:26 np0005535469 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 10:12:26 np0005535469 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Stopped Create System Users.
Nov 25 10:12:26 np0005535469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 10:12:26 np0005535469 systemd[1]: Finished Cleanup udev Database.
Nov 25 10:12:26 np0005535469 systemd[1]: Reached target Switch Root.
Nov 25 10:12:26 np0005535469 systemd[1]: Starting Switch Root...
Nov 25 10:12:26 np0005535469 systemd[1]: Switching root.
Nov 25 10:12:26 np0005535469 systemd-journald[305]: Journal stopped
Nov 25 10:12:27 np0005535469 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 25 10:12:27 np0005535469 kernel: audit: type=1404 audit(1764083546.977:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:12:27 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:12:27 np0005535469 kernel: audit: type=1403 audit(1764083547.139:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 10:12:27 np0005535469 systemd: Successfully loaded SELinux policy in 165.205ms.
Nov 25 10:12:27 np0005535469 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.919ms.
Nov 25 10:12:27 np0005535469 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 10:12:27 np0005535469 systemd: Detected virtualization kvm.
Nov 25 10:12:27 np0005535469 systemd: Detected architecture x86-64.
Nov 25 10:12:27 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:27 np0005535469 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd: Stopped Switch Root.
Nov 25 10:12:27 np0005535469 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 10:12:27 np0005535469 systemd: Created slice Slice /system/getty.
Nov 25 10:12:27 np0005535469 systemd: Created slice Slice /system/serial-getty.
Nov 25 10:12:27 np0005535469 systemd: Created slice Slice /system/sshd-keygen.
Nov 25 10:12:27 np0005535469 systemd: Created slice User and Session Slice.
Nov 25 10:12:27 np0005535469 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 10:12:27 np0005535469 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 25 10:12:27 np0005535469 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 10:12:27 np0005535469 systemd: Reached target Local Encrypted Volumes.
Nov 25 10:12:27 np0005535469 systemd: Stopped target Switch Root.
Nov 25 10:12:27 np0005535469 systemd: Stopped target Initrd File Systems.
Nov 25 10:12:27 np0005535469 systemd: Stopped target Initrd Root File System.
Nov 25 10:12:27 np0005535469 systemd: Reached target Local Integrity Protected Volumes.
Nov 25 10:12:27 np0005535469 systemd: Reached target Path Units.
Nov 25 10:12:27 np0005535469 systemd: Reached target rpc_pipefs.target.
Nov 25 10:12:27 np0005535469 systemd: Reached target Slice Units.
Nov 25 10:12:27 np0005535469 systemd: Reached target Swaps.
Nov 25 10:12:27 np0005535469 systemd: Reached target Local Verity Protected Volumes.
Nov 25 10:12:27 np0005535469 systemd: Listening on RPCbind Server Activation Socket.
Nov 25 10:12:27 np0005535469 systemd: Reached target RPC Port Mapper.
Nov 25 10:12:27 np0005535469 systemd: Listening on Process Core Dump Socket.
Nov 25 10:12:27 np0005535469 systemd: Listening on initctl Compatibility Named Pipe.
Nov 25 10:12:27 np0005535469 systemd: Listening on udev Control Socket.
Nov 25 10:12:27 np0005535469 systemd: Listening on udev Kernel Socket.
Nov 25 10:12:27 np0005535469 systemd: Mounting Huge Pages File System...
Nov 25 10:12:27 np0005535469 systemd: Mounting POSIX Message Queue File System...
Nov 25 10:12:27 np0005535469 systemd: Mounting Kernel Debug File System...
Nov 25 10:12:27 np0005535469 systemd: Mounting Kernel Trace File System...
Nov 25 10:12:27 np0005535469 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 10:12:27 np0005535469 systemd: Starting Create List of Static Device Nodes...
Nov 25 10:12:27 np0005535469 systemd: Starting Load Kernel Module configfs...
Nov 25 10:12:27 np0005535469 systemd: Starting Load Kernel Module drm...
Nov 25 10:12:27 np0005535469 systemd: Starting Load Kernel Module efi_pstore...
Nov 25 10:12:27 np0005535469 systemd: Starting Load Kernel Module fuse...
Nov 25 10:12:27 np0005535469 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 10:12:27 np0005535469 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd: Stopped File System Check on Root Device.
Nov 25 10:12:27 np0005535469 systemd: Stopped Journal Service.
Nov 25 10:12:27 np0005535469 kernel: fuse: init (API version 7.37)
Nov 25 10:12:27 np0005535469 systemd: Starting Journal Service...
Nov 25 10:12:27 np0005535469 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 10:12:27 np0005535469 systemd: Starting Generate network units from Kernel command line...
Nov 25 10:12:27 np0005535469 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 10:12:27 np0005535469 systemd: Starting Remount Root and Kernel File Systems...
Nov 25 10:12:27 np0005535469 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 10:12:27 np0005535469 systemd: Starting Apply Kernel Variables...
Nov 25 10:12:27 np0005535469 systemd: Starting Coldplug All udev Devices...
Nov 25 10:12:27 np0005535469 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 10:12:27 np0005535469 systemd: Mounted Huge Pages File System.
Nov 25 10:12:27 np0005535469 systemd: Mounted POSIX Message Queue File System.
Nov 25 10:12:27 np0005535469 systemd: Mounted Kernel Debug File System.
Nov 25 10:12:27 np0005535469 systemd: Mounted Kernel Trace File System.
Nov 25 10:12:27 np0005535469 systemd: Finished Create List of Static Device Nodes.
Nov 25 10:12:27 np0005535469 kernel: ACPI: bus type drm_connector registered
Nov 25 10:12:27 np0005535469 systemd-journald[680]: Journal started
Nov 25 10:12:27 np0005535469 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 10:12:27 np0005535469 systemd[1]: Queued start job for default target Multi-User System.
Nov 25 10:12:27 np0005535469 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd: Started Journal Service.
Nov 25 10:12:27 np0005535469 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 10:12:27 np0005535469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Load Kernel Module drm.
Nov 25 10:12:27 np0005535469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 10:12:27 np0005535469 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Load Kernel Module fuse.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Apply Kernel Variables.
Nov 25 10:12:27 np0005535469 systemd[1]: Mounting FUSE Control File System...
Nov 25 10:12:27 np0005535469 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 10:12:27 np0005535469 systemd[1]: Starting Rebuild Hardware Database...
Nov 25 10:12:27 np0005535469 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 10:12:27 np0005535469 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 10:12:27 np0005535469 systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 10:12:27 np0005535469 systemd[1]: Starting Create System Users...
Nov 25 10:12:27 np0005535469 systemd[1]: Mounted FUSE Control File System.
Nov 25 10:12:27 np0005535469 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 10:12:27 np0005535469 systemd-journald[680]: Received client request to flush runtime journal.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 10:12:27 np0005535469 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Create System Users.
Nov 25 10:12:27 np0005535469 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 10:12:27 np0005535469 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 10:12:27 np0005535469 systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 10:12:27 np0005535469 systemd[1]: Reached target Local File Systems.
Nov 25 10:12:28 np0005535469 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 10:12:28 np0005535469 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 10:12:28 np0005535469 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 10:12:28 np0005535469 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 10:12:28 np0005535469 systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 10:12:28 np0005535469 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 10:12:28 np0005535469 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 10:12:28 np0005535469 bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 25 10:12:28 np0005535469 systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 10:12:28 np0005535469 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 10:12:28 np0005535469 systemd[1]: Starting Security Auditing Service...
Nov 25 10:12:28 np0005535469 systemd[1]: Starting RPC Bind...
Nov 25 10:12:28 np0005535469 systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 10:12:28 np0005535469 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 10:12:28 np0005535469 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 10:12:28 np0005535469 systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 10:12:28 np0005535469 systemd[1]: Started RPC Bind.
Nov 25 10:12:28 np0005535469 augenrules[707]: /sbin/augenrules: No change
Nov 25 10:12:28 np0005535469 augenrules[722]: No rules
Nov 25 10:12:28 np0005535469 augenrules[722]: enabled 1
Nov 25 10:12:28 np0005535469 augenrules[722]: failure 1
Nov 25 10:12:28 np0005535469 augenrules[722]: pid 702
Nov 25 10:12:28 np0005535469 augenrules[722]: rate_limit 0
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_limit 8192
Nov 25 10:12:28 np0005535469 augenrules[722]: lost 0
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog 3
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_wait_time 60000
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_wait_time_actual 0
Nov 25 10:12:28 np0005535469 augenrules[722]: enabled 1
Nov 25 10:12:28 np0005535469 augenrules[722]: failure 1
Nov 25 10:12:28 np0005535469 augenrules[722]: pid 702
Nov 25 10:12:28 np0005535469 augenrules[722]: rate_limit 0
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_limit 8192
Nov 25 10:12:28 np0005535469 augenrules[722]: lost 0
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog 4
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_wait_time 60000
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_wait_time_actual 0
Nov 25 10:12:28 np0005535469 augenrules[722]: enabled 1
Nov 25 10:12:28 np0005535469 augenrules[722]: failure 1
Nov 25 10:12:28 np0005535469 augenrules[722]: pid 702
Nov 25 10:12:28 np0005535469 augenrules[722]: rate_limit 0
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_limit 8192
Nov 25 10:12:28 np0005535469 augenrules[722]: lost 0
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog 2
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_wait_time 60000
Nov 25 10:12:28 np0005535469 augenrules[722]: backlog_wait_time_actual 0
Nov 25 10:12:28 np0005535469 systemd[1]: Started Security Auditing Service.
Nov 25 10:12:28 np0005535469 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 10:12:28 np0005535469 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 10:12:29 np0005535469 systemd[1]: Finished Rebuild Hardware Database.
Nov 25 10:12:29 np0005535469 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 10:12:29 np0005535469 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 10:12:29 np0005535469 systemd[1]: Starting Update is Completed...
Nov 25 10:12:29 np0005535469 systemd[1]: Finished Update is Completed.
Nov 25 10:12:29 np0005535469 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 10:12:29 np0005535469 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 10:12:29 np0005535469 systemd[1]: Reached target System Initialization.
Nov 25 10:12:29 np0005535469 systemd[1]: Started dnf makecache --timer.
Nov 25 10:12:29 np0005535469 systemd[1]: Started Daily rotation of log files.
Nov 25 10:12:29 np0005535469 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 10:12:29 np0005535469 systemd[1]: Reached target Timer Units.
Nov 25 10:12:29 np0005535469 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 10:12:29 np0005535469 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 10:12:29 np0005535469 systemd[1]: Reached target Socket Units.
Nov 25 10:12:29 np0005535469 systemd[1]: Starting D-Bus System Message Bus...
Nov 25 10:12:29 np0005535469 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 10:12:29 np0005535469 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 10:12:29 np0005535469 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 10:12:29 np0005535469 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 10:12:29 np0005535469 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 10:12:29 np0005535469 systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:12:29 np0005535469 systemd[1]: Started D-Bus System Message Bus.
Nov 25 10:12:29 np0005535469 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 10:12:29 np0005535469 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 10:12:29 np0005535469 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 10:12:29 np0005535469 systemd[1]: Reached target Basic System.
Nov 25 10:12:29 np0005535469 dbus-broker-lau[749]: Ready
Nov 25 10:12:29 np0005535469 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 10:12:29 np0005535469 systemd[1]: Starting NTP client/server...
Nov 25 10:12:29 np0005535469 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 10:12:29 np0005535469 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 10:12:29 np0005535469 systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 10:12:29 np0005535469 systemd[1]: Started irqbalance daemon.
Nov 25 10:12:29 np0005535469 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 10:12:29 np0005535469 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 10:12:29 np0005535469 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 10:12:29 np0005535469 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 10:12:29 np0005535469 systemd[1]: Reached target sshd-keygen.target.
Nov 25 10:12:29 np0005535469 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 10:12:29 np0005535469 systemd[1]: Reached target User and Group Name Lookups.
Nov 25 10:12:29 np0005535469 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 10:12:29 np0005535469 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 10:12:29 np0005535469 kernel: Console: switching to colour dummy device 80x25
Nov 25 10:12:29 np0005535469 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 10:12:29 np0005535469 kernel: [drm] features: -context_init
Nov 25 10:12:29 np0005535469 kernel: [drm] number of scanouts: 1
Nov 25 10:12:29 np0005535469 kernel: [drm] number of cap sets: 0
Nov 25 10:12:29 np0005535469 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 10:12:29 np0005535469 systemd[1]: Starting User Login Management...
Nov 25 10:12:29 np0005535469 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 10:12:29 np0005535469 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 10:12:29 np0005535469 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 10:12:29 np0005535469 kernel: Console: switching to colour frame buffer device 128x48
Nov 25 10:12:29 np0005535469 chronyd[795]: Loaded 0 symmetric keys
Nov 25 10:12:29 np0005535469 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 10:12:29 np0005535469 chronyd[795]: Using right/UTC timezone to obtain leap second data
Nov 25 10:12:29 np0005535469 chronyd[795]: Loaded seccomp filter (level 2)
Nov 25 10:12:29 np0005535469 systemd[1]: Started NTP client/server.
Nov 25 10:12:29 np0005535469 systemd-logind[791]: New seat seat0.
Nov 25 10:12:29 np0005535469 systemd-logind[791]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 10:12:29 np0005535469 systemd-logind[791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 10:12:29 np0005535469 systemd[1]: Started User Login Management.
Nov 25 10:12:29 np0005535469 kernel: kvm_amd: TSC scaling supported
Nov 25 10:12:29 np0005535469 kernel: kvm_amd: Nested Virtualization enabled
Nov 25 10:12:29 np0005535469 kernel: kvm_amd: Nested Paging enabled
Nov 25 10:12:29 np0005535469 kernel: kvm_amd: LBR virtualization supported
Nov 25 10:12:29 np0005535469 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 10:12:29 np0005535469 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 10:12:29 np0005535469 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Nov 25 10:12:29 np0005535469 systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 10:12:30 np0005535469 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 15:12:30 +0000. Up 7.82 seconds.
Nov 25 10:12:30 np0005535469 systemd[1]: run-cloud\x2dinit-tmp-tmp4aqvbqsy.mount: Deactivated successfully.
Nov 25 10:12:30 np0005535469 systemd[1]: Starting Hostname Service...
Nov 25 10:12:30 np0005535469 systemd[1]: Started Hostname Service.
Nov 25 10:12:30 np0005535469 systemd-hostnamed[854]: Hostname set to <np0005535469.novalocal> (static)
Nov 25 10:12:30 np0005535469 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 10:12:30 np0005535469 systemd[1]: Reached target Preparation for Network.
Nov 25 10:12:30 np0005535469 systemd[1]: Starting Network Manager...
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.7665] NetworkManager (version 1.54.1-1.el9) is starting... (boot:d9e49c3e-3190-4777-a484-ac14f78b032e)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.7671] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.7868] manager[0x563330eb3080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.7924] hostname: hostname: using hostnamed
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.7925] hostname: static hostname changed from (none) to "np0005535469.novalocal"
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.7931] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8044] manager[0x563330eb3080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8045] manager[0x563330eb3080]: rfkill: WWAN hardware radio set enabled
Nov 25 10:12:30 np0005535469 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8141] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8143] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8144] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8145] manager: Networking is enabled by state file
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8148] settings: Loaded settings plugin: keyfile (internal)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8180] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8215] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8249] dhcp: init: Using DHCP client 'internal'
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8254] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8275] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8294] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8308] device (lo): Activation: starting connection 'lo' (2a85aae0-7d1a-47a9-be63-ba664cab07b2)
Nov 25 10:12:30 np0005535469 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8326] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8332] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:12:30 np0005535469 systemd[1]: Started Network Manager.
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8369] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8375] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8379] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 10:12:30 np0005535469 systemd[1]: Reached target Network.
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8381] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8384] device (eth0): carrier: link connected
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8390] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8400] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8409] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 10:12:30 np0005535469 systemd[1]: Starting Network Manager Wait Online...
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8414] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8415] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8419] manager: NetworkManager state is now CONNECTING
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8421] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8482] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8493] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:12:30 np0005535469 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8542] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8544] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 10:12:30 np0005535469 NetworkManager[858]: <info>  [1764083550.8548] device (lo): Activation: successful, device activated.
Nov 25 10:12:30 np0005535469 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:12:30 np0005535469 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 10:12:30 np0005535469 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 10:12:30 np0005535469 systemd[1]: Reached target NFS client services.
Nov 25 10:12:30 np0005535469 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 10:12:30 np0005535469 systemd[1]: Reached target Remote File Systems.
Nov 25 10:12:30 np0005535469 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0915] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0927] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0947] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0983] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0985] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0988] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0990] device (eth0): Activation: successful, device activated.
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0995] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 10:12:31 np0005535469 NetworkManager[858]: <info>  [1764083551.0997] manager: startup complete
Nov 25 10:12:31 np0005535469 systemd[1]: Finished Network Manager Wait Online.
Nov 25 10:12:31 np0005535469 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 10:12:31 np0005535469 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 15:12:31 +0000. Up 9.04 seconds.
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.64         | 255.255.255.0 | global | fa:16:3e:47:4b:60 |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe47:4b60/64 |       .       |  link  | fa:16:3e:47:4b:60 |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 10:12:31 np0005535469 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 10:12:32 np0005535469 cloud-init[922]: Generating public/private rsa key pair.
Nov 25 10:12:32 np0005535469 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 10:12:32 np0005535469 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 10:12:32 np0005535469 cloud-init[922]: The key fingerprint is:
Nov 25 10:12:32 np0005535469 cloud-init[922]: SHA256:zTrtdX47pKhr6+JA1Uyqr4l/xK0vrSEpgM5ndRo3M9c root@np0005535469.novalocal
Nov 25 10:12:32 np0005535469 cloud-init[922]: The key's randomart image is:
Nov 25 10:12:32 np0005535469 cloud-init[922]: +---[RSA 3072]----+
Nov 25 10:12:32 np0005535469 cloud-init[922]: |         .       |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |        =        |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |       o o       |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |.     o  o.      |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |..   =.*S.oE     |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |o . o Bo=+     . |
Nov 25 10:12:32 np0005535469 cloud-init[922]: | o + =.o= . o +  |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |  o o ==.= o + ..|
Nov 25 10:12:32 np0005535469 cloud-init[922]: |   ..+ooB*=   .oo|
Nov 25 10:12:32 np0005535469 cloud-init[922]: +----[SHA256]-----+
Nov 25 10:12:32 np0005535469 cloud-init[922]: Generating public/private ecdsa key pair.
Nov 25 10:12:32 np0005535469 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 10:12:32 np0005535469 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 10:12:32 np0005535469 cloud-init[922]: The key fingerprint is:
Nov 25 10:12:32 np0005535469 cloud-init[922]: SHA256:39CVaqHgVOu7kUVj9NsiSpvvKgJamHysMfS+n5kE3Cw root@np0005535469.novalocal
Nov 25 10:12:32 np0005535469 cloud-init[922]: The key's randomart image is:
Nov 25 10:12:32 np0005535469 cloud-init[922]: +---[ECDSA 256]---+
Nov 25 10:12:32 np0005535469 cloud-init[922]: |          . .    |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |         . o . . |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |        o . = +  |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |  .. o o o = = o |
Nov 25 10:12:32 np0005535469 cloud-init[922]: | o =E o S = * o .|
Nov 25 10:12:32 np0005535469 cloud-init[922]: |  * *o   o @ . . |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |   O ..   O .    |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |  o ...+.  +     |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |    .o=. .ooo    |
Nov 25 10:12:32 np0005535469 cloud-init[922]: +----[SHA256]-----+
Nov 25 10:12:32 np0005535469 cloud-init[922]: Generating public/private ed25519 key pair.
Nov 25 10:12:32 np0005535469 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 10:12:32 np0005535469 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 10:12:32 np0005535469 cloud-init[922]: The key fingerprint is:
Nov 25 10:12:32 np0005535469 cloud-init[922]: SHA256:Rfev42T3z4GWYtrLMyp6gUnSDlQN2wG73SiidhbviRQ root@np0005535469.novalocal
Nov 25 10:12:32 np0005535469 cloud-init[922]: The key's randomart image is:
Nov 25 10:12:32 np0005535469 cloud-init[922]: +--[ED25519 256]--+
Nov 25 10:12:32 np0005535469 cloud-init[922]: |    .++.  . .    |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |   .  +... . .   |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |  . .o .  .   .  |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |   o oo o.     . |
Nov 25 10:12:32 np0005535469 cloud-init[922]: |   E=oooS.      .|
Nov 25 10:12:32 np0005535469 cloud-init[922]: |  . =+..      o. |
Nov 25 10:12:32 np0005535469 cloud-init[922]: | o + .  .  o +=..|
Nov 25 10:12:32 np0005535469 cloud-init[922]: |. + o .o  =oo+ +o|
Nov 25 10:12:32 np0005535469 cloud-init[922]: |   . +o .o.++ . =|
Nov 25 10:12:32 np0005535469 cloud-init[922]: +----[SHA256]-----+
Nov 25 10:12:32 np0005535469 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 10:12:32 np0005535469 systemd[1]: Reached target Cloud-config availability.
Nov 25 10:12:32 np0005535469 systemd[1]: Reached target Network is Online.
Nov 25 10:12:32 np0005535469 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 10:12:32 np0005535469 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 10:12:32 np0005535469 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 10:12:32 np0005535469 systemd[1]: Starting System Logging Service...
Nov 25 10:12:32 np0005535469 sm-notify[1005]: Version 2.5.4 starting
Nov 25 10:12:32 np0005535469 systemd[1]: Starting OpenSSH server daemon...
Nov 25 10:12:32 np0005535469 systemd[1]: Starting Permit User Sessions...
Nov 25 10:12:32 np0005535469 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 10:12:32 np0005535469 systemd[1]: Finished Permit User Sessions.
Nov 25 10:12:32 np0005535469 systemd[1]: Started Command Scheduler.
Nov 25 10:12:32 np0005535469 systemd[1]: Started Getty on tty1.
Nov 25 10:12:32 np0005535469 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 10:12:32 np0005535469 systemd[1]: Reached target Login Prompts.
Nov 25 10:12:32 np0005535469 systemd[1]: Started OpenSSH server daemon.
Nov 25 10:12:33 np0005535469 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Nov 25 10:12:33 np0005535469 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 10:12:33 np0005535469 systemd[1]: Started System Logging Service.
Nov 25 10:12:33 np0005535469 systemd[1]: Reached target Multi-User System.
Nov 25 10:12:33 np0005535469 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 10:12:33 np0005535469 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 10:12:33 np0005535469 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 10:12:33 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:12:33 np0005535469 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Nov 25 10:12:33 np0005535469 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 10:12:33 np0005535469 cloud-init[1108]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 15:12:33 +0000. Up 10.84 seconds.
Nov 25 10:12:33 np0005535469 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 10:12:33 np0005535469 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 10:12:33 np0005535469 dracut[1268]: dracut-057-102.git20250818.el9
Nov 25 10:12:33 np0005535469 cloud-init[1269]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 15:12:33 +0000. Up 11.24 seconds.
Nov 25 10:12:33 np0005535469 cloud-init[1286]: #############################################################
Nov 25 10:12:33 np0005535469 cloud-init[1287]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 10:12:33 np0005535469 cloud-init[1289]: 256 SHA256:39CVaqHgVOu7kUVj9NsiSpvvKgJamHysMfS+n5kE3Cw root@np0005535469.novalocal (ECDSA)
Nov 25 10:12:33 np0005535469 cloud-init[1291]: 256 SHA256:Rfev42T3z4GWYtrLMyp6gUnSDlQN2wG73SiidhbviRQ root@np0005535469.novalocal (ED25519)
Nov 25 10:12:33 np0005535469 cloud-init[1293]: 3072 SHA256:zTrtdX47pKhr6+JA1Uyqr4l/xK0vrSEpgM5ndRo3M9c root@np0005535469.novalocal (RSA)
Nov 25 10:12:33 np0005535469 cloud-init[1294]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 10:12:33 np0005535469 cloud-init[1295]: #############################################################
Nov 25 10:12:33 np0005535469 cloud-init[1269]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 15:12:33 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.46 seconds
Nov 25 10:12:33 np0005535469 dracut[1271]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 10:12:33 np0005535469 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 10:12:33 np0005535469 systemd[1]: Reached target Cloud-init target.
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 10:12:34 np0005535469 dracut[1271]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: memstrack is not available
Nov 25 10:12:35 np0005535469 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 10:12:35 np0005535469 dracut[1271]: memstrack is not available
Nov 25 10:12:35 np0005535469 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 10:12:35 np0005535469 dracut[1271]: *** Including module: systemd ***
Nov 25 10:12:35 np0005535469 dracut[1271]: *** Including module: fips ***
Nov 25 10:12:35 np0005535469 chronyd[795]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Nov 25 10:12:35 np0005535469 chronyd[795]: System clock TAI offset set to 37 seconds
Nov 25 10:12:36 np0005535469 dracut[1271]: *** Including module: systemd-initrd ***
Nov 25 10:12:36 np0005535469 dracut[1271]: *** Including module: i18n ***
Nov 25 10:12:36 np0005535469 dracut[1271]: *** Including module: drm ***
Nov 25 10:12:36 np0005535469 dracut[1271]: *** Including module: prefixdevname ***
Nov 25 10:12:36 np0005535469 dracut[1271]: *** Including module: kernel-modules ***
Nov 25 10:12:37 np0005535469 kernel: block vda: the capability attribute has been deprecated.
Nov 25 10:12:37 np0005535469 dracut[1271]: *** Including module: kernel-modules-extra ***
Nov 25 10:12:37 np0005535469 dracut[1271]: *** Including module: qemu ***
Nov 25 10:12:37 np0005535469 dracut[1271]: *** Including module: fstab-sys ***
Nov 25 10:12:37 np0005535469 dracut[1271]: *** Including module: rootfs-block ***
Nov 25 10:12:37 np0005535469 dracut[1271]: *** Including module: terminfo ***
Nov 25 10:12:37 np0005535469 dracut[1271]: *** Including module: udev-rules ***
Nov 25 10:12:38 np0005535469 dracut[1271]: Skipping udev rule: 91-permissions.rules
Nov 25 10:12:38 np0005535469 dracut[1271]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 10:12:38 np0005535469 dracut[1271]: *** Including module: virtiofs ***
Nov 25 10:12:38 np0005535469 dracut[1271]: *** Including module: dracut-systemd ***
Nov 25 10:12:39 np0005535469 dracut[1271]: *** Including module: usrmount ***
Nov 25 10:12:39 np0005535469 dracut[1271]: *** Including module: base ***
Nov 25 10:12:39 np0005535469 dracut[1271]: *** Including module: fs-lib ***
Nov 25 10:12:39 np0005535469 dracut[1271]: *** Including module: kdumpbase ***
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 35 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 35 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 33 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 33 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 31 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 28 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 34 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 34 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 32 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 30 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 10:12:39 np0005535469 irqbalance[786]: IRQ 29 affinity is now unmanaged
Nov 25 10:12:39 np0005535469 dracut[1271]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 10:12:39 np0005535469 dracut[1271]:  microcode_ctl module: mangling fw_dir
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel" is ignored
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 10:12:39 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 10:12:40 np0005535469 dracut[1271]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 10:12:40 np0005535469 dracut[1271]: *** Including module: openssl ***
Nov 25 10:12:40 np0005535469 dracut[1271]: *** Including module: shutdown ***
Nov 25 10:12:40 np0005535469 dracut[1271]: *** Including module: squash ***
Nov 25 10:12:40 np0005535469 dracut[1271]: *** Including modules done ***
Nov 25 10:12:40 np0005535469 dracut[1271]: *** Installing kernel module dependencies ***
Nov 25 10:12:41 np0005535469 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:12:41 np0005535469 dracut[1271]: *** Installing kernel module dependencies done ***
Nov 25 10:12:41 np0005535469 dracut[1271]: *** Resolving executable dependencies ***
Nov 25 10:12:43 np0005535469 dracut[1271]: *** Resolving executable dependencies done ***
Nov 25 10:12:43 np0005535469 dracut[1271]: *** Generating early-microcode cpio image ***
Nov 25 10:12:43 np0005535469 dracut[1271]: *** Store current command line parameters ***
Nov 25 10:12:43 np0005535469 dracut[1271]: Stored kernel commandline:
Nov 25 10:12:43 np0005535469 dracut[1271]: No dracut internal kernel commandline stored in the initramfs
Nov 25 10:12:43 np0005535469 dracut[1271]: *** Install squash loader ***
Nov 25 10:12:44 np0005535469 dracut[1271]: *** Squashing the files inside the initramfs ***
Nov 25 10:12:46 np0005535469 dracut[1271]: *** Squashing the files inside the initramfs done ***
Nov 25 10:12:46 np0005535469 dracut[1271]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 10:12:46 np0005535469 dracut[1271]: *** Hardlinking files ***
Nov 25 10:12:46 np0005535469 dracut[1271]: *** Hardlinking files done ***
Nov 25 10:12:46 np0005535469 dracut[1271]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 10:12:47 np0005535469 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Nov 25 10:12:47 np0005535469 kdumpctl[1015]: kdump: Starting kdump: [OK]
Nov 25 10:12:47 np0005535469 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 10:12:47 np0005535469 systemd[1]: Startup finished in 1.671s (kernel) + 2.940s (initrd) + 20.219s (userspace) = 24.831s.
Nov 25 10:12:52 np0005535469 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 10:12:52 np0005535469 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 10:12:52 np0005535469 systemd-logind[791]: New session 1 of user zuul.
Nov 25 10:12:52 np0005535469 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 10:12:52 np0005535469 systemd[1]: Starting User Manager for UID 1000...
Nov 25 10:12:52 np0005535469 systemd[4299]: Queued start job for default target Main User Target.
Nov 25 10:12:52 np0005535469 systemd[4299]: Created slice User Application Slice.
Nov 25 10:12:52 np0005535469 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 10:12:52 np0005535469 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 10:12:52 np0005535469 systemd[4299]: Reached target Paths.
Nov 25 10:12:52 np0005535469 systemd[4299]: Reached target Timers.
Nov 25 10:12:52 np0005535469 systemd[4299]: Starting D-Bus User Message Bus Socket...
Nov 25 10:12:52 np0005535469 systemd[4299]: Starting Create User's Volatile Files and Directories...
Nov 25 10:12:52 np0005535469 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Nov 25 10:12:52 np0005535469 systemd[4299]: Reached target Sockets.
Nov 25 10:12:52 np0005535469 systemd[4299]: Finished Create User's Volatile Files and Directories.
Nov 25 10:12:52 np0005535469 systemd[4299]: Reached target Basic System.
Nov 25 10:12:52 np0005535469 systemd[4299]: Reached target Main User Target.
Nov 25 10:12:52 np0005535469 systemd[4299]: Startup finished in 164ms.
Nov 25 10:12:52 np0005535469 systemd[1]: Started User Manager for UID 1000.
Nov 25 10:12:53 np0005535469 systemd[1]: Started Session 1 of User zuul.
Nov 25 10:12:53 np0005535469 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:12:56 np0005535469 python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:00 np0005535469 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:13:02 np0005535469 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:03 np0005535469 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 10:13:05 np0005535469 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkeoA3WZl6Q7Uf3uP53j2tpPfpzgB4GKeSMhPaPcangYwGoKsq3r2UCY8zku/n8YQRjCTCX11F8+6kANcSDxJn/HCkbSeg4iaZe28baTknqM1ij9uQWmteS5+QtUl03J+4mL2miMAvkFFHMbgpJi4q5aQVek9GBFKQ5wnxcKE+1zc5zYI1fPMN1Z2fLCWz87CBu4CXRx3kbVNKBEn1MreJnj/k6I6Mw9Vrgbi2dsLVKZ0a64h2g21KtQbP160stFOYEyf2M+MW4lU0EWXR2V7a6qK0L/F+QP59+euocPngYhSzi0NnrGjEWohqV9qgIC1QWXaBr1Nm59gai2qnSSTJOEvhW96rshunh6cxmci/qAOi8reEcSINAYsFv56Pwry6jm+bMI/STdyC3U/Z3ANwL1CQtLvxU0SJEfOSPAwFgzUiJZRTsU8eS7NfbPFr0QkyzQsFiIGHZgZ8Bm6Wdideyup9s34QnQ/V6vDJAB1F8h1L56Sy1gZ7OnkxG7TAHx0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:05 np0005535469 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:06 np0005535469 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:06 np0005535469 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764083585.9986212-207-251420458798692/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=d4f72a9e06ea46969ee22f690169ac55_id_rsa follow=False checksum=fef216208056657ef89d74bd094c2cb348665499 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:07 np0005535469 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:07 np0005535469 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764083586.9880915-240-197372065635544/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=d4f72a9e06ea46969ee22f690169ac55_id_rsa.pub follow=False checksum=1f0d9ce900b69c32617f71bd38c5dd253769d5f1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:09 np0005535469 python3[4971]: ansible-ping Invoked with data=pong
Nov 25 10:13:10 np0005535469 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:11 np0005535469 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 10:13:12 np0005535469 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:12 np0005535469 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:13 np0005535469 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:13 np0005535469 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:13 np0005535469 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:14 np0005535469 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:15 np0005535469 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:16 np0005535469 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:17 np0005535469 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764083596.0938365-21-225322089273423/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:17 np0005535469 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:17 np0005535469 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:18 np0005535469 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:18 np0005535469 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:18 np0005535469 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:19 np0005535469 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:19 np0005535469 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:19 np0005535469 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:20 np0005535469 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:20 np0005535469 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:20 np0005535469 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:20 np0005535469 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:21 np0005535469 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:21 np0005535469 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:21 np0005535469 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:22 np0005535469 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:22 np0005535469 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:22 np0005535469 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:23 np0005535469 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:23 np0005535469 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:24 np0005535469 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:24 np0005535469 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:24 np0005535469 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:25 np0005535469 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:25 np0005535469 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:25 np0005535469 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:13:27 np0005535469 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 10:13:27 np0005535469 systemd[1]: Starting Time & Date Service...
Nov 25 10:13:27 np0005535469 systemd[1]: Started Time & Date Service.
Nov 25 10:13:27 np0005535469 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Nov 25 10:13:28 np0005535469 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:28 np0005535469 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:29 np0005535469 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764083608.6107595-153-77588927383020/source _original_basename=tmpwwitscf8 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:29 np0005535469 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:30 np0005535469 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764083609.4809515-183-66752620303194/source _original_basename=tmp9kzdg9b1 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:30 np0005535469 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:31 np0005535469 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764083610.5326803-231-54237062542572/source _original_basename=tmp3753880l follow=False checksum=d1fb5b4f9f73b8c84cf3b5af0e2af5367a435780 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:31 np0005535469 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:32 np0005535469 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:32 np0005535469 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:13:32 np0005535469 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764083612.251785-273-58476855102172/source _original_basename=tmp5jb4x4rt follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:33 np0005535469 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-08ce-fa5d-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:34 np0005535469 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-08ce-fa5d-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 10:13:35 np0005535469 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:54 np0005535469 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:57 np0005535469 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 10:14:33 np0005535469 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 10:14:33 np0005535469 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1541] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 10:14:33 np0005535469 systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1820] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1854] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1861] device (eth1): carrier: link connected
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1864] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1873] policy: auto-activating connection 'Wired connection 1' (83f9c217-b623-3927-aa63-2d6e79ff47cf)
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1881] device (eth1): Activation: starting connection 'Wired connection 1' (83f9c217-b623-3927-aa63-2d6e79ff47cf)
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1882] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1888] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1894] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:14:33 np0005535469 NetworkManager[858]: <info>  [1764083673.1903] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:14:33 np0005535469 python3[6971]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-9d17-2ba9-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:14:44 np0005535469 python3[7051]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:14:44 np0005535469 python3[7124]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764083683.6940308-102-250198351944368/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=121e333432eb860cb363606fe138ff59d0d9850a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:45 np0005535469 python3[7174]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:14:45 np0005535469 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 10:14:45 np0005535469 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 10:14:45 np0005535469 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 10:14:45 np0005535469 systemd[1]: Stopping Network Manager...
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4878] caught SIGTERM, shutting down normally.
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4891] dhcp4 (eth0): canceled DHCP transaction
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4891] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4891] dhcp4 (eth0): state changed no lease
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4896] manager: NetworkManager state is now CONNECTING
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4977] dhcp4 (eth1): canceled DHCP transaction
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.4978] dhcp4 (eth1): state changed no lease
Nov 25 10:14:45 np0005535469 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:14:45 np0005535469 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:14:45 np0005535469 NetworkManager[858]: <info>  [1764083685.5619] exiting (success)
Nov 25 10:14:45 np0005535469 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 10:14:45 np0005535469 systemd[1]: Stopped Network Manager.
Nov 25 10:14:45 np0005535469 systemd[1]: Starting Network Manager...
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.6237] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:d9e49c3e-3190-4777-a484-ac14f78b032e)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.6238] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.6293] manager[0x559d11bf5070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 10:14:45 np0005535469 systemd[1]: Starting Hostname Service...
Nov 25 10:14:45 np0005535469 systemd[1]: Started Hostname Service.
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7408] hostname: hostname: using hostnamed
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7412] hostname: static hostname changed from (none) to "np0005535469.novalocal"
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7421] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7429] manager[0x559d11bf5070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7429] manager[0x559d11bf5070]: rfkill: WWAN hardware radio set enabled
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7478] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7479] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7480] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7481] manager: Networking is enabled by state file
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7485] settings: Loaded settings plugin: keyfile (internal)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7490] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7527] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7540] dhcp: init: Using DHCP client 'internal'
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7543] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7550] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7559] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7569] device (lo): Activation: starting connection 'lo' (2a85aae0-7d1a-47a9-be63-ba664cab07b2)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7577] device (eth0): carrier: link connected
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7582] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7589] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7589] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7599] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7609] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7618] device (eth1): carrier: link connected
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7623] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7632] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (83f9c217-b623-3927-aa63-2d6e79ff47cf) (indicated)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7632] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7639] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7647] device (eth1): Activation: starting connection 'Wired connection 1' (83f9c217-b623-3927-aa63-2d6e79ff47cf)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7655] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7660] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 10:14:45 np0005535469 systemd[1]: Started Network Manager.
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7662] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7664] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7666] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7669] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7671] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7674] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7676] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7683] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7686] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7702] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7706] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7725] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7730] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7736] device (lo): Activation: successful, device activated.
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7745] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.7752] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 10:14:45 np0005535469 systemd[1]: Starting Network Manager Wait Online...
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.8260] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.8282] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.8284] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.8287] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.8290] device (eth0): Activation: successful, device activated.
Nov 25 10:14:45 np0005535469 NetworkManager[7191]: <info>  [1764083685.8295] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 10:14:46 np0005535469 python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-9d17-2ba9-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:14:55 np0005535469 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:15:15 np0005535469 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3554] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 10:15:31 np0005535469 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:15:31 np0005535469 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3891] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3894] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3901] device (eth1): Activation: successful, device activated.
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3907] manager: startup complete
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3909] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <warn>  [1764083731.3914] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3922] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 systemd[1]: Finished Network Manager Wait Online.
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3997] dhcp4 (eth1): canceled DHCP transaction
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3998] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.3998] dhcp4 (eth1): state changed no lease
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4017] policy: auto-activating connection 'ci-private-network' (09808217-d49c-5cba-bacd-b33b16161199)
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4022] device (eth1): Activation: starting connection 'ci-private-network' (09808217-d49c-5cba-bacd-b33b16161199)
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4023] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4028] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4036] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4046] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4095] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4097] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:15:31 np0005535469 NetworkManager[7191]: <info>  [1764083731.4101] device (eth1): Activation: successful, device activated.
Nov 25 10:15:39 np0005535469 systemd[4299]: Starting Mark boot as successful...
Nov 25 10:15:39 np0005535469 systemd[4299]: Finished Mark boot as successful.
Nov 25 10:15:41 np0005535469 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:15:46 np0005535469 systemd-logind[791]: Session 1 logged out. Waiting for processes to exit.
Nov 25 10:15:50 np0005535469 systemd-logind[791]: New session 3 of user zuul.
Nov 25 10:15:50 np0005535469 systemd[1]: Started Session 3 of User zuul.
Nov 25 10:15:50 np0005535469 python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:15:51 np0005535469 python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764083750.373174-267-205676440070761/source _original_basename=tmpvb81vnuo follow=False checksum=7c010077733b40b96bef9a8ed37d372947c7453e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:53 np0005535469 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 10:15:53 np0005535469 systemd-logind[791]: Session 3 logged out. Waiting for processes to exit.
Nov 25 10:15:53 np0005535469 systemd-logind[791]: Removed session 3.
Nov 25 10:18:39 np0005535469 systemd[4299]: Created slice User Background Tasks Slice.
Nov 25 10:18:39 np0005535469 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 10:18:39 np0005535469 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 10:24:17 np0005535469 systemd-logind[791]: New session 4 of user zuul.
Nov 25 10:24:17 np0005535469 systemd[1]: Started Session 4 of User zuul.
Nov 25 10:24:17 np0005535469 python3[7508]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-d0ea-a376-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:17 np0005535469 python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:18 np0005535469 python3[7563]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:18 np0005535469 python3[7589]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:18 np0005535469 python3[7615]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:19 np0005535469 python3[7641]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:19 np0005535469 python3[7719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:24:20 np0005535469 python3[7792]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764084259.5801592-480-64789137120037/source _original_basename=tmpf_r34nkb follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:21 np0005535469 python3[7842]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:24:21 np0005535469 systemd[1]: Reloading.
Nov 25 10:24:21 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:24:23 np0005535469 python3[7898]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 10:24:23 np0005535469 python3[7924]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:23 np0005535469 python3[7952]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:24 np0005535469 python3[7980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:24 np0005535469 python3[8008]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:24 np0005535469 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-d0ea-a376-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:25 np0005535469 python3[8065]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:24:27 np0005535469 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 10:24:27 np0005535469 systemd[1]: session-4.scope: Consumed 3.992s CPU time.
Nov 25 10:24:27 np0005535469 systemd-logind[791]: Session 4 logged out. Waiting for processes to exit.
Nov 25 10:24:27 np0005535469 systemd-logind[791]: Removed session 4.
Nov 25 10:24:29 np0005535469 systemd-logind[791]: New session 5 of user zuul.
Nov 25 10:24:29 np0005535469 systemd[1]: Started Session 5 of User zuul.
Nov 25 10:24:29 np0005535469 python3[8101]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 10:24:56 np0005535469 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:24:56 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:25:04 np0005535469 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:25:04 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:25:15 np0005535469 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:25:15 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:25:16 np0005535469 setsebool[8168]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 10:25:16 np0005535469 setsebool[8168]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 25 10:25:28 np0005535469 kernel: SELinux:  Converting 388 SID table entries...
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:25:28 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:26:04 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 10:26:04 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:26:04 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:26:04 np0005535469 systemd[1]: Reloading.
Nov 25 10:26:04 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:26:05 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:26:06 np0005535469 python3[10020]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-4c11-8403-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:07 np0005535469 kernel: evm: overlay not supported
Nov 25 10:26:07 np0005535469 systemd[4299]: Starting D-Bus User Message Bus...
Nov 25 10:26:07 np0005535469 dbus-broker-launch[11019]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 10:26:07 np0005535469 dbus-broker-launch[11019]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 10:26:07 np0005535469 systemd[4299]: Started D-Bus User Message Bus.
Nov 25 10:26:07 np0005535469 dbus-broker-lau[11019]: Ready
Nov 25 10:26:07 np0005535469 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 10:26:07 np0005535469 systemd[4299]: Created slice Slice /user.
Nov 25 10:26:07 np0005535469 systemd[4299]: podman-10902.scope: unit configures an IP firewall, but not running as root.
Nov 25 10:26:07 np0005535469 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 10:26:07 np0005535469 systemd[4299]: Started podman-10902.scope.
Nov 25 10:26:07 np0005535469 systemd[4299]: Started podman-pause-af4bdaaa.scope.
Nov 25 10:26:08 np0005535469 python3[11865]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.97:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.97:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:08 np0005535469 python3[11865]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 25 10:26:08 np0005535469 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 10:26:08 np0005535469 systemd[1]: session-5.scope: Consumed 1min 2.029s CPU time.
Nov 25 10:26:08 np0005535469 systemd-logind[791]: Session 5 logged out. Waiting for processes to exit.
Nov 25 10:26:08 np0005535469 systemd-logind[791]: Removed session 5.
Nov 25 10:26:33 np0005535469 systemd-logind[791]: New session 6 of user zuul.
Nov 25 10:26:33 np0005535469 systemd[1]: Started Session 6 of User zuul.
Nov 25 10:26:33 np0005535469 python3[22871]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHu7EWbcWxKmZTyuUfzIV3IWq7xPajCmoVB+cs21oZl5i5s2IGA/i1gvPvhxPqFw1NI2XsKRiRXJEC7RTPWi1Yg= zuul@np0005535468.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:26:33 np0005535469 python3[23044]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHu7EWbcWxKmZTyuUfzIV3IWq7xPajCmoVB+cs21oZl5i5s2IGA/i1gvPvhxPqFw1NI2XsKRiRXJEC7RTPWi1Yg= zuul@np0005535468.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:26:34 np0005535469 python3[23466]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005535469.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 10:26:35 np0005535469 python3[23676]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHu7EWbcWxKmZTyuUfzIV3IWq7xPajCmoVB+cs21oZl5i5s2IGA/i1gvPvhxPqFw1NI2XsKRiRXJEC7RTPWi1Yg= zuul@np0005535468.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 10:26:35 np0005535469 python3[23907]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:26:36 np0005535469 python3[24154]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764084395.4683416-135-105807672161591/source _original_basename=tmpajja69mq follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:37 np0005535469 python3[24443]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 10:26:37 np0005535469 systemd[1]: Starting Hostname Service...
Nov 25 10:26:37 np0005535469 systemd[1]: Started Hostname Service.
Nov 25 10:26:37 np0005535469 systemd-hostnamed[24542]: Changed pretty hostname to 'compute-0'
Nov 25 10:26:37 np0005535469 systemd-hostnamed[24542]: Hostname set to <compute-0> (static)
Nov 25 10:26:37 np0005535469 NetworkManager[7191]: <info>  [1764084397.2605] hostname: static hostname changed from "np0005535469.novalocal" to "compute-0"
Nov 25 10:26:37 np0005535469 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:26:37 np0005535469 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:26:37 np0005535469 systemd-logind[791]: Session 6 logged out. Waiting for processes to exit.
Nov 25 10:26:37 np0005535469 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 10:26:37 np0005535469 systemd[1]: session-6.scope: Consumed 2.270s CPU time.
Nov 25 10:26:37 np0005535469 systemd-logind[791]: Removed session 6.
Nov 25 10:26:47 np0005535469 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:26:59 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:26:59 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:26:59 np0005535469 systemd[1]: man-db-cache-update.service: Consumed 52.524s CPU time.
Nov 25 10:26:59 np0005535469 systemd[1]: run-ref8e9998d0474f1aafb6fa8445257791.service: Deactivated successfully.
Nov 25 10:27:07 np0005535469 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:27:39 np0005535469 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 10:27:39 np0005535469 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 10:27:39 np0005535469 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 10:27:39 np0005535469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 10:30:55 np0005535469 systemd-logind[791]: New session 7 of user zuul.
Nov 25 10:30:55 np0005535469 systemd[1]: Started Session 7 of User zuul.
Nov 25 10:30:55 np0005535469 python3[30009]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:30:57 np0005535469 python3[30125]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:30:57 np0005535469 python3[30198]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:30:58 np0005535469 python3[30224]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:30:58 np0005535469 python3[30297]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:30:58 np0005535469 python3[30323]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:30:59 np0005535469 python3[30396]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:30:59 np0005535469 python3[30422]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:30:59 np0005535469 python3[30495]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:30:59 np0005535469 python3[30521]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:31:00 np0005535469 python3[30594]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:31:00 np0005535469 python3[30620]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:31:00 np0005535469 python3[30693]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:31:00 np0005535469 python3[30719]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:31:01 np0005535469 python3[30792]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764084657.040674-33665-239815145261076/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:31:15 np0005535469 python3[30850]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:36:15 np0005535469 systemd-logind[791]: Session 7 logged out. Waiting for processes to exit.
Nov 25 10:36:15 np0005535469 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 10:36:15 np0005535469 systemd[1]: session-7.scope: Consumed 4.628s CPU time.
Nov 25 10:36:15 np0005535469 systemd-logind[791]: Removed session 7.
Nov 25 10:43:26 np0005535469 systemd-logind[791]: New session 8 of user zuul.
Nov 25 10:43:26 np0005535469 systemd[1]: Started Session 8 of User zuul.
Nov 25 10:43:27 np0005535469 python3.9[31017]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:43:28 np0005535469 python3.9[31198]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:43:36 np0005535469 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 10:43:36 np0005535469 systemd[1]: session-8.scope: Consumed 7.752s CPU time.
Nov 25 10:43:36 np0005535469 systemd-logind[791]: Session 8 logged out. Waiting for processes to exit.
Nov 25 10:43:36 np0005535469 systemd-logind[791]: Removed session 8.
Nov 25 10:43:52 np0005535469 systemd-logind[791]: New session 9 of user zuul.
Nov 25 10:43:52 np0005535469 systemd[1]: Started Session 9 of User zuul.
Nov 25 10:43:53 np0005535469 python3.9[31409]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 10:43:54 np0005535469 python3.9[31583]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:43:55 np0005535469 python3.9[31735]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:43:56 np0005535469 python3.9[31888]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:43:56 np0005535469 python3.9[32040]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:43:57 np0005535469 python3.9[32192]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:43:58 np0005535469 python3.9[32315]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085437.0889273-73-52712370498288/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:43:59 np0005535469 python3.9[32467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:43:59 np0005535469 python3.9[32623]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:44:00 np0005535469 python3.9[32775]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:44:01 np0005535469 python3.9[32925]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:44:05 np0005535469 python3.9[33178]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:44:06 np0005535469 python3.9[33328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:44:07 np0005535469 python3.9[33482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:44:08 np0005535469 python3.9[33640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:44:09 np0005535469 python3.9[33724]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:44:52 np0005535469 systemd[1]: Reloading.
Nov 25 10:44:52 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:44:52 np0005535469 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 10:44:52 np0005535469 systemd[1]: Reloading.
Nov 25 10:44:52 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:44:52 np0005535469 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 10:44:52 np0005535469 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 10:44:52 np0005535469 systemd[1]: Reloading.
Nov 25 10:44:53 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:44:53 np0005535469 systemd[1]: Starting dnf makecache...
Nov 25 10:44:53 np0005535469 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 10:44:53 np0005535469 dnf[34006]: Failed determining last makecache time.
Nov 25 10:44:53 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 10:44:53 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-barbican-42b4c41831408a8e323 137 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 190 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-cinder-1c00d6490d88e436f26ef 158 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-stevedore-c4acc5639fd2329372142 138 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-observabilityclient-2f31846d73c 152 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-os-net-config-bbae2ed8a159b0435a473f38 157 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 152 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-designate-tests-tempest-347fdbc 128 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-glance-1fd12c29b339f30fe823e 130 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 131 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-manila-3c01b7181572c95dac462 152 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-whitebox-neutron-tests-tempest- 165 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-octavia-ba397f07a7331190208c 152 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-watcher-c014f81a8647287f6dcc 158 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-tcib-1124124ec06aadbac34f0d340b 164 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 156 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-swift-dc98a8463506ac520c469a 154 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-python-tempestconf-8515371b7cceebd4282 155 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: delorean-openstack-heat-ui-013accbfd179753bc3f0 154 kB/s | 3.0 kB     00:00
Nov 25 10:44:53 np0005535469 dnf[34006]: CentOS Stream 9 - BaseOS                         76 kB/s | 6.7 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: CentOS Stream 9 - AppStream                      68 kB/s | 6.8 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: CentOS Stream 9 - CRB                            68 kB/s | 6.5 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: CentOS Stream 9 - Extras packages                75 kB/s | 8.3 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: dlrn-antelope-testing                           160 kB/s | 3.0 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: dlrn-antelope-build-deps                        157 kB/s | 3.0 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: centos9-rabbitmq                                 34 kB/s | 3.0 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: centos9-storage                                  77 kB/s | 3.0 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: centos9-opstools                                 80 kB/s | 3.0 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: NFV SIG OpenvSwitch                              88 kB/s | 3.0 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: repo-setup-centos-appstream                     151 kB/s | 4.4 kB     00:00
Nov 25 10:44:54 np0005535469 dnf[34006]: repo-setup-centos-baseos                        159 kB/s | 3.9 kB     00:00
Nov 25 10:44:55 np0005535469 dnf[34006]: repo-setup-centos-highavailability              122 kB/s | 3.9 kB     00:00
Nov 25 10:44:55 np0005535469 dnf[34006]: repo-setup-centos-powertools                    157 kB/s | 4.3 kB     00:00
Nov 25 10:44:55 np0005535469 dnf[34006]: Extra Packages for Enterprise Linux 9 - x86_64  222 kB/s |  34 kB     00:00
Nov 25 10:44:55 np0005535469 dnf[34006]: Metadata cache created.
Nov 25 10:44:55 np0005535469 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 10:44:55 np0005535469 systemd[1]: Finished dnf makecache.
Nov 25 10:44:55 np0005535469 systemd[1]: dnf-makecache.service: Consumed 1.885s CPU time.
Nov 25 10:45:56 np0005535469 kernel: SELinux:  Converting 2718 SID table entries...
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:45:56 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:45:57 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 10:45:57 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:45:57 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:45:57 np0005535469 systemd[1]: Reloading.
Nov 25 10:45:57 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:45:57 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:45:58 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:45:58 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:45:58 np0005535469 systemd[1]: man-db-cache-update.service: Consumed 1.044s CPU time.
Nov 25 10:45:58 np0005535469 systemd[1]: run-rcf1693d75c8e44fc81ca608bc7052637.service: Deactivated successfully.
Nov 25 10:45:58 np0005535469 python3.9[35266]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:00 np0005535469 python3.9[35550]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 10:46:01 np0005535469 python3.9[35702]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 10:46:03 np0005535469 python3.9[35855]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:46:04 np0005535469 python3.9[36007]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 10:46:05 np0005535469 python3.9[36159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:46:06 np0005535469 python3.9[36311]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:46:07 np0005535469 python3.9[36434]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085566.0613635-236-212316072546734/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:46:10 np0005535469 python3.9[36586]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:46:11 np0005535469 python3.9[36739]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:11 np0005535469 python3.9[36892]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:46:12 np0005535469 python3.9[37044]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 10:46:12 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:46:12 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:46:13 np0005535469 python3.9[37198]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:46:14 np0005535469 python3.9[37356]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:46:15 np0005535469 python3.9[37516]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 10:46:16 np0005535469 python3.9[37669]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:46:16 np0005535469 python3.9[37827]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 10:46:17 np0005535469 python3.9[37979]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:46:20 np0005535469 python3.9[38132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:46:20 np0005535469 python3.9[38284]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:46:21 np0005535469 python3.9[38407]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764085580.3006315-355-107301995796516/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:46:22 np0005535469 python3.9[38559]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:46:22 np0005535469 systemd[1]: Starting Load Kernel Modules...
Nov 25 10:46:22 np0005535469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 10:46:22 np0005535469 kernel: Bridge firewalling registered
Nov 25 10:46:22 np0005535469 systemd-modules-load[38563]: Inserted module 'br_netfilter'
Nov 25 10:46:22 np0005535469 systemd[1]: Finished Load Kernel Modules.
Nov 25 10:46:23 np0005535469 python3.9[38718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:46:24 np0005535469 python3.9[38841]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764085582.9269493-378-147967042831667/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:46:24 np0005535469 python3.9[38993]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:46:28 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 10:46:28 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 10:46:29 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:46:29 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:46:29 np0005535469 systemd[1]: Reloading.
Nov 25 10:46:29 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:46:29 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:46:30 np0005535469 python3.9[40702]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:46:31 np0005535469 python3.9[41729]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 10:46:32 np0005535469 python3.9[42500]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:46:32 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:46:32 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:46:32 np0005535469 systemd[1]: man-db-cache-update.service: Consumed 4.708s CPU time.
Nov 25 10:46:32 np0005535469 systemd[1]: run-r99e5f285bb634a56b378ba567842e3e9.service: Deactivated successfully.
Nov 25 10:46:33 np0005535469 python3.9[43160]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:33 np0005535469 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 10:46:33 np0005535469 systemd[1]: Starting Authorization Manager...
Nov 25 10:46:33 np0005535469 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 10:46:33 np0005535469 polkitd[43379]: Started polkitd version 0.117
Nov 25 10:46:33 np0005535469 systemd[1]: Started Authorization Manager.
Nov 25 10:46:34 np0005535469 python3.9[43549]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:46:34 np0005535469 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 10:46:34 np0005535469 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 10:46:34 np0005535469 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 10:46:34 np0005535469 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 10:46:34 np0005535469 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 10:46:35 np0005535469 python3.9[43710]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 10:46:37 np0005535469 python3.9[43862]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:46:37 np0005535469 systemd[1]: Reloading.
Nov 25 10:46:37 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:46:38 np0005535469 python3.9[44051]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:46:38 np0005535469 systemd[1]: Reloading.
Nov 25 10:46:38 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:46:39 np0005535469 python3.9[44240]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:40 np0005535469 python3.9[44393]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:40 np0005535469 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 10:46:41 np0005535469 python3.9[44546]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:43 np0005535469 python3.9[44708]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:46:44 np0005535469 python3.9[44861]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:46:44 np0005535469 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 10:46:44 np0005535469 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 10:46:44 np0005535469 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 10:46:44 np0005535469 systemd[1]: Starting Apply Kernel Variables...
Nov 25 10:46:44 np0005535469 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 10:46:44 np0005535469 systemd[1]: Finished Apply Kernel Variables.
Nov 25 10:46:44 np0005535469 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 10:46:44 np0005535469 systemd[1]: session-9.scope: Consumed 2min 10.938s CPU time.
Nov 25 10:46:44 np0005535469 systemd-logind[791]: Session 9 logged out. Waiting for processes to exit.
Nov 25 10:46:44 np0005535469 systemd-logind[791]: Removed session 9.
Nov 25 10:46:50 np0005535469 systemd-logind[791]: New session 10 of user zuul.
Nov 25 10:46:50 np0005535469 systemd[1]: Started Session 10 of User zuul.
Nov 25 10:46:51 np0005535469 python3.9[45044]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:46:52 np0005535469 python3.9[45200]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 10:46:53 np0005535469 python3.9[45353]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:46:54 np0005535469 python3.9[45511]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:46:55 np0005535469 python3.9[45671]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:46:56 np0005535469 python3.9[45755]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:46:59 np0005535469 python3.9[45919]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:47:11 np0005535469 kernel: SELinux:  Converting 2730 SID table entries...
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:47:11 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:47:11 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 10:47:11 np0005535469 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 10:47:12 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:47:12 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:47:12 np0005535469 systemd[1]: Reloading.
Nov 25 10:47:12 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:47:12 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:47:12 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:47:13 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:47:13 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:47:13 np0005535469 systemd[1]: run-rcaa892ac9700470d8009cf7f41f5d1fe.service: Deactivated successfully.
Nov 25 10:47:14 np0005535469 python3.9[47019]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:47:14 np0005535469 systemd[1]: Reloading.
Nov 25 10:47:14 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:47:14 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:47:14 np0005535469 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 10:47:14 np0005535469 chown[47062]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 10:47:14 np0005535469 ovs-ctl[47067]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 10:47:14 np0005535469 ovs-ctl[47067]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 10:47:14 np0005535469 ovs-ctl[47067]: Starting ovsdb-server [  OK  ]
Nov 25 10:47:14 np0005535469 ovs-vsctl[47116]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 10:47:14 np0005535469 ovs-vsctl[47136]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"124ba75b-cc77-4d76-b66e-16ed1bbb2181\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 10:47:14 np0005535469 ovs-ctl[47067]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 10:47:14 np0005535469 ovs-ctl[47067]: Enabling remote OVSDB managers [  OK  ]
Nov 25 10:47:14 np0005535469 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 10:47:14 np0005535469 ovs-vsctl[47142]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 10:47:14 np0005535469 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 10:47:14 np0005535469 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 10:47:14 np0005535469 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 10:47:14 np0005535469 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 10:47:14 np0005535469 ovs-ctl[47186]: Inserting openvswitch module [  OK  ]
Nov 25 10:47:15 np0005535469 ovs-ctl[47155]: Starting ovs-vswitchd [  OK  ]
Nov 25 10:47:15 np0005535469 ovs-vsctl[47203]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 10:47:15 np0005535469 ovs-ctl[47155]: Enabling remote OVSDB managers [  OK  ]
Nov 25 10:47:15 np0005535469 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 10:47:15 np0005535469 systemd[1]: Starting Open vSwitch...
Nov 25 10:47:15 np0005535469 systemd[1]: Finished Open vSwitch.
Nov 25 10:47:15 np0005535469 python3.9[47355]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:47:16 np0005535469 python3.9[47507]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 10:47:17 np0005535469 kernel: SELinux:  Converting 2744 SID table entries...
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:47:17 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:47:18 np0005535469 python3.9[47662]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:47:19 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 10:47:19 np0005535469 python3.9[47820]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:47:21 np0005535469 python3.9[47973]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:47:23 np0005535469 python3.9[48260]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 10:47:24 np0005535469 python3.9[48410]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:47:24 np0005535469 python3.9[48564]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:47:27 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:47:27 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:47:27 np0005535469 systemd[1]: Reloading.
Nov 25 10:47:27 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:47:27 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:47:27 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:47:28 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:47:28 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:47:28 np0005535469 systemd[1]: run-r7922ea6cf1cb4c7eaf06619b5aff20c2.service: Deactivated successfully.
Nov 25 10:47:29 np0005535469 python3.9[48882]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:47:29 np0005535469 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 10:47:29 np0005535469 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 10:47:29 np0005535469 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 10:47:29 np0005535469 systemd[1]: Stopping Network Manager...
Nov 25 10:47:29 np0005535469 NetworkManager[7191]: <info>  [1764085649.1245] caught SIGTERM, shutting down normally.
Nov 25 10:47:29 np0005535469 NetworkManager[7191]: <info>  [1764085649.1257] dhcp4 (eth0): canceled DHCP transaction
Nov 25 10:47:29 np0005535469 NetworkManager[7191]: <info>  [1764085649.1257] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:47:29 np0005535469 NetworkManager[7191]: <info>  [1764085649.1258] dhcp4 (eth0): state changed no lease
Nov 25 10:47:29 np0005535469 NetworkManager[7191]: <info>  [1764085649.1259] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 10:47:29 np0005535469 NetworkManager[7191]: <info>  [1764085649.1344] exiting (success)
Nov 25 10:47:29 np0005535469 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:47:29 np0005535469 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 10:47:29 np0005535469 systemd[1]: Stopped Network Manager.
Nov 25 10:47:29 np0005535469 systemd[1]: NetworkManager.service: Consumed 11.386s CPU time, 4.1M memory peak, read 0B from disk, written 41.0K to disk.
Nov 25 10:47:29 np0005535469 systemd[1]: Starting Network Manager...
Nov 25 10:47:29 np0005535469 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.2126] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:d9e49c3e-3190-4777-a484-ac14f78b032e)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.2128] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.2191] manager[0x560b1a5a6090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 10:47:29 np0005535469 systemd[1]: Starting Hostname Service...
Nov 25 10:47:29 np0005535469 systemd[1]: Started Hostname Service.
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3067] hostname: hostname: using hostnamed
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3068] hostname: static hostname changed from (none) to "compute-0"
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3079] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3089] manager[0x560b1a5a6090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3089] manager[0x560b1a5a6090]: rfkill: WWAN hardware radio set enabled
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3134] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3152] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3153] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3154] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3155] manager: Networking is enabled by state file
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3159] settings: Loaded settings plugin: keyfile (internal)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3165] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3212] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3231] dhcp: init: Using DHCP client 'internal'
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3238] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3248] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3260] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3275] device (lo): Activation: starting connection 'lo' (2a85aae0-7d1a-47a9-be63-ba664cab07b2)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3288] device (eth0): carrier: link connected
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3298] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3308] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3310] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3325] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3341] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3355] device (eth1): carrier: link connected
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3365] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3377] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (09808217-d49c-5cba-bacd-b33b16161199) (indicated)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3378] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3390] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3405] device (eth1): Activation: starting connection 'ci-private-network' (09808217-d49c-5cba-bacd-b33b16161199)
Nov 25 10:47:29 np0005535469 systemd[1]: Started Network Manager.
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3414] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3429] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3434] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3439] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3442] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3449] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3467] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3470] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3474] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3483] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3486] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3493] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3505] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3516] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3518] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3523] device (lo): Activation: successful, device activated.
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3529] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3535] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3602] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3605] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 systemd[1]: Starting Network Manager Wait Online...
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3617] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3619] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3622] device (eth1): Activation: successful, device activated.
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3636] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3639] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3642] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3645] device (eth0): Activation: successful, device activated.
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3655] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 10:47:29 np0005535469 NetworkManager[48891]: <info>  [1764085649.3659] manager: startup complete
Nov 25 10:47:29 np0005535469 systemd[1]: Finished Network Manager Wait Online.
Nov 25 10:47:30 np0005535469 python3.9[49108]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:47:34 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:47:34 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:47:34 np0005535469 systemd[1]: Reloading.
Nov 25 10:47:34 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:47:34 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:47:35 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:47:35 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:47:35 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:47:35 np0005535469 systemd[1]: run-rdba06b04d5b24540a1373e7cc2b5d057.service: Deactivated successfully.
Nov 25 10:47:36 np0005535469 python3.9[49568]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:47:37 np0005535469 python3.9[49720]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:38 np0005535469 python3.9[49874]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:38 np0005535469 python3.9[50026]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:39 np0005535469 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:47:39 np0005535469 python3.9[50178]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:40 np0005535469 python3.9[50330]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:40 np0005535469 python3.9[50482]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:47:41 np0005535469 python3.9[50605]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085660.4630947-229-127677883411464/.source _original_basename=.68q6id5l follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:42 np0005535469 python3.9[50757]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:43 np0005535469 python3.9[50909]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 10:47:43 np0005535469 python3.9[51061]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:46 np0005535469 python3.9[51488]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 10:47:47 np0005535469 ansible-async_wrapper.py[51663]: Invoked with j67149174071 300 /home/zuul/.ansible/tmp/ansible-tmp-1764085666.1966507-295-66573288905896/AnsiballZ_edpm_os_net_config.py _
Nov 25 10:47:47 np0005535469 ansible-async_wrapper.py[51666]: Starting module and watcher
Nov 25 10:47:47 np0005535469 ansible-async_wrapper.py[51666]: Start watching 51667 (300)
Nov 25 10:47:47 np0005535469 ansible-async_wrapper.py[51667]: Start module (51667)
Nov 25 10:47:47 np0005535469 ansible-async_wrapper.py[51663]: Return async_wrapper task started.
Nov 25 10:47:47 np0005535469 python3.9[51668]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 25 10:47:47 np0005535469 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 10:47:47 np0005535469 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 10:47:47 np0005535469 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 10:47:47 np0005535469 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 10:47:47 np0005535469 kernel: cfg80211: failed to load regulatory.db
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8493] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8505] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8929] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8930] audit: op="connection-add" uuid="ccd44e1a-3bd1-462c-b48e-38502d897ba0" name="br-ex-br" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8943] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8944] audit: op="connection-add" uuid="a0dab54d-ddfe-4ca3-907f-cca4e6f44593" name="br-ex-port" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8954] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8955] audit: op="connection-add" uuid="3f91cf87-ed5b-4aab-a4e0-e0c021efe324" name="eth1-port" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8965] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8965] audit: op="connection-add" uuid="a97927b6-be35-4cd1-a274-e1fdf2e53873" name="vlan20-port" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8975] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8977] audit: op="connection-add" uuid="10237073-b3a7-43bb-bee3-95191b62a1f1" name="vlan21-port" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8985] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8986] audit: op="connection-add" uuid="7b00bcea-0411-428b-a351-20631cde070b" name="vlan22-port" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8995] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.8996] audit: op="connection-add" uuid="442d640c-876e-474f-a9f5-83cb6a3f5dbe" name="vlan23-port" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9012] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9024] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9026] audit: op="connection-add" uuid="f699f912-7f52-47ca-9b99-dc83a9056a36" name="br-ex-if" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9059] audit: op="connection-update" uuid="09808217-d49c-5cba-bacd-b33b16161199" name="ci-private-network" args="ovs-interface.type,ipv4.routes,ipv4.addresses,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.dns,connection.controller,connection.master,connection.timestamp,connection.port-type,connection.slave-type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.routing-rules,ipv6.dns,ovs-external-ids.data" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9072] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9074] audit: op="connection-add" uuid="22e57493-debd-4571-8b2a-417c60349f1e" name="vlan20-if" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9089] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9090] audit: op="connection-add" uuid="af96aae3-feac-44e2-9d31-fa15049faf93" name="vlan21-if" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9104] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9105] audit: op="connection-add" uuid="61c0d501-0083-4326-8378-be7f31e10cd0" name="vlan22-if" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9119] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9121] audit: op="connection-add" uuid="329c3273-3687-492a-86a5-028001a361f4" name="vlan23-if" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9131] audit: op="connection-delete" uuid="83f9c217-b623-3927-aa63-2d6e79ff47cf" name="Wired connection 1" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9142] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9151] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9155] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (ccd44e1a-3bd1-462c-b48e-38502d897ba0)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9155] audit: op="connection-activate" uuid="ccd44e1a-3bd1-462c-b48e-38502d897ba0" name="br-ex-br" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9157] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9164] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9167] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a0dab54d-ddfe-4ca3-907f-cca4e6f44593)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9169] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9174] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9178] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (3f91cf87-ed5b-4aab-a4e0-e0c021efe324)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9179] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9185] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9189] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (a97927b6-be35-4cd1-a274-e1fdf2e53873)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9191] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9197] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9200] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (10237073-b3a7-43bb-bee3-95191b62a1f1)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9202] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9208] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9212] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (7b00bcea-0411-428b-a351-20631cde070b)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9213] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9220] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9223] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (442d640c-876e-474f-a9f5-83cb6a3f5dbe)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9224] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9226] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9228] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9234] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9238] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9241] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f699f912-7f52-47ca-9b99-dc83a9056a36)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9242] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9245] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9247] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9248] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9250] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9259] device (eth1): disconnecting for new activation request.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9260] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9262] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9264] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9265] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9268] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9273] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9276] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (22e57493-debd-4571-8b2a-417c60349f1e)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9277] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9280] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9282] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9283] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9285] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9290] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9294] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (af96aae3-feac-44e2-9d31-fa15049faf93)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9294] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9297] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9299] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9300] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9303] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9307] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9311] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (61c0d501-0083-4326-8378-be7f31e10cd0)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9311] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9314] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9316] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9317] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9320] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9324] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9328] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (329c3273-3687-492a-86a5-028001a361f4)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9329] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9332] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9334] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9335] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9336] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9347] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9348] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9351] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9353] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9359] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9362] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9365] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9369] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9371] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9375] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9379] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9382] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9384] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 kernel: ovs-system: entered promiscuous mode
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9388] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9391] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9395] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9397] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9402] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9406] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9409] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9411] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9415] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 kernel: Timeout policy base is empty
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9418] dhcp4 (eth0): canceled DHCP transaction
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9418] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9419] dhcp4 (eth0): state changed no lease
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9420] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 10:47:48 np0005535469 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9428] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9430] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51669 uid=0 result="fail" reason="Device is not activated"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9435] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 systemd-udevd[51674]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9476] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9488] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9491] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9495] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9544] device (eth1): disconnecting for new activation request.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9545] audit: op="connection-activate" uuid="09808217-d49c-5cba-bacd-b33b16161199" name="ci-private-network" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9550] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9653] device (eth1): Activation: starting connection 'ci-private-network' (09808217-d49c-5cba-bacd-b33b16161199)
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9657] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9673] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9675] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9681] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9684] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9689] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9690] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9692] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9694] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9696] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51669 uid=0 result="success"
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9697] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9698] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9701] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9707] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9710] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9713] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9716] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9719] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9722] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9726] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9729] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9732] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9735] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9738] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9742] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9747] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9750] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9781] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9783] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9789] device (eth1): Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 kernel: br-ex: entered promiscuous mode
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9919] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9930] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9945] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9947] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:48 np0005535469 NetworkManager[48891]: <info>  [1764085668.9952] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:47:48 np0005535469 kernel: vlan22: entered promiscuous mode
Nov 25 10:47:49 np0005535469 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 10:47:49 np0005535469 kernel: vlan23: entered promiscuous mode
Nov 25 10:47:49 np0005535469 systemd-udevd[51675]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:47:49 np0005535469 kernel: vlan20: entered promiscuous mode
Nov 25 10:47:49 np0005535469 systemd-udevd[51673]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:47:49 np0005535469 kernel: vlan21: entered promiscuous mode
Nov 25 10:47:49 np0005535469 systemd-udevd[51790]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0284] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0290] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0294] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0299] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0326] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0331] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0335] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0341] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0351] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0352] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0353] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0354] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0358] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0362] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0366] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0371] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0377] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0384] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0385] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:47:49 np0005535469 NetworkManager[48891]: <info>  [1764085669.0390] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.1652] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.3488] checkpoint[0x560b1a57c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.3492] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.6326] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.6340] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.8506] audit: op="networking-control" arg="global-dns-configuration" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.8537] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.8579] audit: op="networking-control" arg="global-dns-configuration" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.8600] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51669 uid=0 result="success"
Nov 25 10:47:50 np0005535469 python3.9[52031]: ansible-ansible.legacy.async_status Invoked with jid=j67149174071.51663 mode=status _async_dir=/root/.ansible_async
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.9990] checkpoint[0x560b1a57ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 10:47:50 np0005535469 NetworkManager[48891]: <info>  [1764085670.9994] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51669 uid=0 result="success"
Nov 25 10:47:51 np0005535469 ansible-async_wrapper.py[51667]: Module complete (51667)
Nov 25 10:47:52 np0005535469 ansible-async_wrapper.py[51666]: Done in kid B.
Nov 25 10:47:54 np0005535469 python3.9[52135]: ansible-ansible.legacy.async_status Invoked with jid=j67149174071.51663 mode=status _async_dir=/root/.ansible_async
Nov 25 10:47:54 np0005535469 python3.9[52235]: ansible-ansible.legacy.async_status Invoked with jid=j67149174071.51663 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 10:47:55 np0005535469 python3.9[52387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:47:56 np0005535469 python3.9[52510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085675.1436272-322-258337070894443/.source.returncode _original_basename=.u7wopb9n follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:56 np0005535469 python3.9[52662]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:47:57 np0005535469 python3.9[52785]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085676.3320527-338-60554383990680/.source.cfg _original_basename=.ijqpj8nx follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:47:58 np0005535469 python3.9[52938]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:47:58 np0005535469 systemd[1]: Reloading Network Manager...
Nov 25 10:47:58 np0005535469 NetworkManager[48891]: <info>  [1764085678.0868] audit: op="reload" arg="0" pid=52942 uid=0 result="success"
Nov 25 10:47:58 np0005535469 NetworkManager[48891]: <info>  [1764085678.0874] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 10:47:58 np0005535469 systemd[1]: Reloaded Network Manager.
Nov 25 10:47:58 np0005535469 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 10:47:58 np0005535469 systemd[1]: session-10.scope: Consumed 48.449s CPU time.
Nov 25 10:47:58 np0005535469 systemd-logind[791]: Session 10 logged out. Waiting for processes to exit.
Nov 25 10:47:58 np0005535469 systemd-logind[791]: Removed session 10.
Nov 25 10:47:59 np0005535469 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:48:03 np0005535469 systemd-logind[791]: New session 11 of user zuul.
Nov 25 10:48:03 np0005535469 systemd[1]: Started Session 11 of User zuul.
Nov 25 10:48:04 np0005535469 python3.9[53128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:48:05 np0005535469 python3.9[53282]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:48:06 np0005535469 python3.9[53475]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:48:07 np0005535469 systemd-logind[791]: Session 11 logged out. Waiting for processes to exit.
Nov 25 10:48:07 np0005535469 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 10:48:07 np0005535469 systemd[1]: session-11.scope: Consumed 2.430s CPU time.
Nov 25 10:48:07 np0005535469 systemd-logind[791]: Removed session 11.
Nov 25 10:48:08 np0005535469 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:48:12 np0005535469 systemd-logind[791]: New session 12 of user zuul.
Nov 25 10:48:12 np0005535469 systemd[1]: Started Session 12 of User zuul.
Nov 25 10:48:13 np0005535469 python3.9[53658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:48:14 np0005535469 python3.9[53812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:48:15 np0005535469 python3.9[53968]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:48:16 np0005535469 python3.9[54052]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:48:18 np0005535469 python3.9[54206]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:48:19 np0005535469 python3.9[54401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:20 np0005535469 python3.9[54554]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:48:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-compat3844850324-merged.mount: Deactivated successfully.
Nov 25 10:48:20 np0005535469 podman[54555]: 2025-11-25 15:48:20.786294797 +0000 UTC m=+0.064497777 system refresh
Nov 25 10:48:21 np0005535469 python3.9[54717]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:48:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:48:22 np0005535469 python3.9[54840]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085701.0005474-79-258475706921651/.source.json follow=False _original_basename=podman_network_config.j2 checksum=17b31bb467e2508711a747f523d3bbea50d1d137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:23 np0005535469 python3.9[54992]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:48:23 np0005535469 python3.9[55115]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764085702.5036306-94-132168777695544/.source.conf follow=False _original_basename=registries.conf.j2 checksum=25aa6c560e50dcbd81b989ea46a7865cb55b8998 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:48:24 np0005535469 python3.9[55267]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:48:25 np0005535469 python3.9[55419]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:48:25 np0005535469 python3.9[55571]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:48:26 np0005535469 python3.9[55723]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:48:27 np0005535469 python3.9[55875]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:48:29 np0005535469 python3.9[56028]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:48:30 np0005535469 python3.9[56182]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:48:30 np0005535469 python3.9[56334]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:48:31 np0005535469 python3.9[56486]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:48:32 np0005535469 python3.9[56639]: ansible-service_facts Invoked
Nov 25 10:48:32 np0005535469 network[56656]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:48:32 np0005535469 network[56657]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:48:32 np0005535469 network[56658]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:48:38 np0005535469 python3.9[57110]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:48:40 np0005535469 python3.9[57263]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 10:48:41 np0005535469 python3.9[57415]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:48:42 np0005535469 python3.9[57540]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085721.2191606-238-254871855493207/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:42 np0005535469 python3.9[57694]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:48:43 np0005535469 python3.9[57819]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085722.5316737-253-17139293067649/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:44 np0005535469 python3.9[57973]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:45 np0005535469 python3.9[58127]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:48:46 np0005535469 python3.9[58211]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:48:47 np0005535469 python3.9[58365]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:48:48 np0005535469 python3.9[58449]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:48:48 np0005535469 chronyd[795]: chronyd exiting
Nov 25 10:48:48 np0005535469 systemd[1]: Stopping NTP client/server...
Nov 25 10:48:48 np0005535469 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 10:48:48 np0005535469 systemd[1]: Stopped NTP client/server.
Nov 25 10:48:48 np0005535469 systemd[1]: Starting NTP client/server...
Nov 25 10:48:48 np0005535469 chronyd[58457]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 10:48:48 np0005535469 chronyd[58457]: Frequency -26.893 +/- 0.199 ppm read from /var/lib/chrony/drift
Nov 25 10:48:48 np0005535469 chronyd[58457]: Loaded seccomp filter (level 2)
Nov 25 10:48:48 np0005535469 systemd[1]: Started NTP client/server.
Nov 25 10:48:49 np0005535469 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 10:48:49 np0005535469 systemd[1]: session-12.scope: Consumed 25.699s CPU time.
Nov 25 10:48:49 np0005535469 systemd-logind[791]: Session 12 logged out. Waiting for processes to exit.
Nov 25 10:48:49 np0005535469 systemd-logind[791]: Removed session 12.
Nov 25 10:48:55 np0005535469 systemd-logind[791]: New session 13 of user zuul.
Nov 25 10:48:55 np0005535469 systemd[1]: Started Session 13 of User zuul.
Nov 25 10:48:56 np0005535469 python3.9[58638]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:57 np0005535469 python3.9[58790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:48:57 np0005535469 python3.9[58913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085736.4315646-34-279567485037616/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:48:58 np0005535469 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 10:48:58 np0005535469 systemd[1]: session-13.scope: Consumed 1.689s CPU time.
Nov 25 10:48:58 np0005535469 systemd-logind[791]: Session 13 logged out. Waiting for processes to exit.
Nov 25 10:48:58 np0005535469 systemd-logind[791]: Removed session 13.
Nov 25 10:49:03 np0005535469 systemd-logind[791]: New session 14 of user zuul.
Nov 25 10:49:03 np0005535469 systemd[1]: Started Session 14 of User zuul.
Nov 25 10:49:04 np0005535469 python3.9[59091]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:49:05 np0005535469 python3.9[59247]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:06 np0005535469 python3.9[59422]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:07 np0005535469 python3.9[59545]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764085745.775515-41-69959844639135/.source.json _original_basename=.rn1x1rdr follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:08 np0005535469 python3.9[59697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:08 np0005535469 python3.9[59820]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085747.5997605-64-192056506920249/.source _original_basename=.f54dmjfn follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:09 np0005535469 python3.9[59972]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:49:10 np0005535469 python3.9[60124]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:10 np0005535469 python3.9[60247]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764085749.604907-88-167716088480333/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:49:11 np0005535469 python3.9[60399]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:11 np0005535469 python3.9[60522]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764085750.879835-88-243090757534137/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:49:12 np0005535469 python3.9[60674]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:13 np0005535469 python3.9[60826]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:13 np0005535469 python3.9[60949]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085752.6450205-125-28077290189615/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:14 np0005535469 python3.9[61101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:14 np0005535469 python3.9[61224]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085753.8190203-140-94550478078760/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:15 np0005535469 python3.9[61376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:49:15 np0005535469 systemd[1]: Reloading.
Nov 25 10:49:16 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:49:16 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:49:16 np0005535469 systemd[1]: Reloading.
Nov 25 10:49:16 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:49:16 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:49:16 np0005535469 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 10:49:16 np0005535469 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 10:49:17 np0005535469 python3.9[61604]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:17 np0005535469 python3.9[61727]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085756.6308649-163-191558045449833/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:18 np0005535469 python3.9[61879]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:18 np0005535469 python3.9[62002]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085757.7449067-178-152908115762465/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:19 np0005535469 python3.9[62154]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:49:19 np0005535469 systemd[1]: Reloading.
Nov 25 10:49:19 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:49:19 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:49:20 np0005535469 systemd[1]: Reloading.
Nov 25 10:49:20 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:49:20 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:49:20 np0005535469 systemd[1]: Starting Create netns directory...
Nov 25 10:49:20 np0005535469 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 10:49:20 np0005535469 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 10:49:20 np0005535469 systemd[1]: Finished Create netns directory.
Nov 25 10:49:21 np0005535469 python3.9[62381]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:49:21 np0005535469 network[62398]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:49:21 np0005535469 network[62399]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:49:21 np0005535469 network[62400]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:49:25 np0005535469 python3.9[62662]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:49:25 np0005535469 systemd[1]: Reloading.
Nov 25 10:49:25 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:49:25 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:49:26 np0005535469 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 10:49:26 np0005535469 iptables.init[62701]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 10:49:26 np0005535469 iptables.init[62701]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 10:49:26 np0005535469 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 10:49:26 np0005535469 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 10:49:27 np0005535469 python3.9[62897]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:49:28 np0005535469 python3.9[63051]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:49:28 np0005535469 systemd[1]: Reloading.
Nov 25 10:49:28 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:49:28 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:49:28 np0005535469 systemd[1]: Starting Netfilter Tables...
Nov 25 10:49:28 np0005535469 systemd[1]: Finished Netfilter Tables.
Nov 25 10:49:29 np0005535469 python3.9[63243]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:49:30 np0005535469 python3.9[63396]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:31 np0005535469 python3.9[63522]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085769.741285-247-255414801121603/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:32 np0005535469 python3.9[63675]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:49:32 np0005535469 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 10:49:32 np0005535469 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 10:49:33 np0005535469 python3.9[63831]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:34 np0005535469 python3.9[63983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:34 np0005535469 python3.9[64106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085773.5662043-278-170083680548083/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:35 np0005535469 python3.9[64258]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 10:49:35 np0005535469 systemd[1]: Starting Time & Date Service...
Nov 25 10:49:35 np0005535469 systemd[1]: Started Time & Date Service.
Nov 25 10:49:36 np0005535469 python3.9[64414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:37 np0005535469 python3.9[64566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:37 np0005535469 python3.9[64689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085776.8324573-313-196140907612723/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:38 np0005535469 python3.9[64841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:39 np0005535469 python3.9[64964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764085778.161684-328-142598733061978/.source.yaml _original_basename=.vujdoebv follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:39 np0005535469 python3.9[65116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:40 np0005535469 python3.9[65239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085779.327591-343-112548194653742/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:40 np0005535469 python3.9[65391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:49:41 np0005535469 python3.9[65544]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:49:42 np0005535469 python3[65697]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 10:49:43 np0005535469 python3.9[65849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:43 np0005535469 python3.9[65972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085782.6160839-382-134530800663888/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:44 np0005535469 python3.9[66124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:44 np0005535469 python3.9[66247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085783.7209625-397-138991408321225/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:45 np0005535469 python3.9[66399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:45 np0005535469 python3.9[66522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085784.904836-412-209641424783808/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:46 np0005535469 python3.9[66674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:47 np0005535469 python3.9[66797]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085786.1285617-427-2374318029689/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:47 np0005535469 python3.9[66949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:49:48 np0005535469 python3.9[67072]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764085787.2311873-442-186224493782267/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:48 np0005535469 python3.9[67224]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:49 np0005535469 python3.9[67376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:49:50 np0005535469 python3.9[67535]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:51 np0005535469 python3.9[67688]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:51 np0005535469 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:49:52 np0005535469 python3.9[67992]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 10:49:53 np0005535469 python3.9[68145]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 10:49:53 np0005535469 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 10:49:53 np0005535469 systemd[1]: session-14.scope: Consumed 34.643s CPU time.
Nov 25 10:49:53 np0005535469 systemd-logind[791]: Session 14 logged out. Waiting for processes to exit.
Nov 25 10:49:53 np0005535469 systemd-logind[791]: Removed session 14.
Nov 25 10:49:59 np0005535469 systemd-logind[791]: New session 15 of user zuul.
Nov 25 10:49:59 np0005535469 systemd[1]: Started Session 15 of User zuul.
Nov 25 10:50:00 np0005535469 python3.9[68326]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 10:50:01 np0005535469 python3.9[68478]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:50:02 np0005535469 python3.9[68630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:50:03 np0005535469 python3.9[68782]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfZCSXvOcw/IEogxXwqVl31uDKlIGL1yCYoHYf1Hq37wPmrm3332XZgVpiaonqTGDrVKu7kC5ZtT4fNx86COcVL5P35x1CMYCXCpbfLPtDrB9ovk+nvaWoeiW2HY7nhMpNWLz0UPcMAjbjq7+LwUDy+w8fFyMcB7uuPPnD5kG+qG2Os9Lp+Q5NJCU/WO68Q3Vk1qBqiEB9QECCNfmF4fXehpW/idQR/ykJooTOb0djW2KAfUtle5XBLF2UxGobAsyIttY31J9sy0dhqY0h3H+ZPiNB11PROCj1OdMyXg7chvF6OJ4aFHlmNY+YDI0wSt8kC43C6i9RYDW71/FuGDjMBMwPp9DzG9LAc8VPcsQj6YEiNoqUWQGsD2+mYBhj1ofQVcISiuL7hrONc2vcCtC3cbYb6a9UEXcQxs8z/pWRw5kCujEYklnj1O6Y6pjWra9fnhogwikDwViiDGLOwhXgyQosxllCDGOEafFcMbL94TVwFENc0FeMSLXtU3RBqhU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEjXuWbF2e+9xQSQ7XZIhNCtP0gS6mmUIrlSH7Q7Jpyy#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNra5eob9nZprGwdbwfM0OOC+PQK0G3M5SCkOTMaDRTa/6360f+sILjj4tVAZ6FcmfKKqWN4+4hUUPUqbZHE+ew=#012 create=True mode=0644 path=/tmp/ansible.1lwvjor0 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:50:04 np0005535469 python3.9[68934]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1lwvjor0' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:05 np0005535469 python3.9[69088]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1lwvjor0 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:50:05 np0005535469 systemd-logind[791]: Session 15 logged out. Waiting for processes to exit.
Nov 25 10:50:05 np0005535469 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 10:50:05 np0005535469 systemd[1]: session-15.scope: Consumed 3.496s CPU time.
Nov 25 10:50:05 np0005535469 systemd-logind[791]: Removed session 15.
Nov 25 10:50:05 np0005535469 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 10:50:11 np0005535469 systemd-logind[791]: New session 16 of user zuul.
Nov 25 10:50:11 np0005535469 systemd[1]: Started Session 16 of User zuul.
Nov 25 10:50:12 np0005535469 python3.9[69268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:50:13 np0005535469 python3.9[69424]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 10:50:15 np0005535469 python3.9[69578]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:50:16 np0005535469 python3.9[69731]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:17 np0005535469 python3.9[69884]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:50:18 np0005535469 python3.9[70038]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:18 np0005535469 python3.9[70193]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:50:19 np0005535469 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 10:50:19 np0005535469 systemd[1]: session-16.scope: Consumed 4.340s CPU time.
Nov 25 10:50:19 np0005535469 systemd-logind[791]: Session 16 logged out. Waiting for processes to exit.
Nov 25 10:50:19 np0005535469 systemd-logind[791]: Removed session 16.
Nov 25 10:50:24 np0005535469 systemd-logind[791]: New session 17 of user zuul.
Nov 25 10:50:24 np0005535469 systemd[1]: Started Session 17 of User zuul.
Nov 25 10:50:25 np0005535469 python3.9[70371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:50:26 np0005535469 python3.9[70527]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:50:27 np0005535469 python3.9[70611]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:50:29 np0005535469 python3.9[70762]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:31 np0005535469 python3.9[70913]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:50:31 np0005535469 python3.9[71063]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:50:31 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:50:32 np0005535469 python3.9[71214]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:50:33 np0005535469 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 10:50:33 np0005535469 systemd[1]: session-17.scope: Consumed 5.787s CPU time.
Nov 25 10:50:33 np0005535469 systemd-logind[791]: Session 17 logged out. Waiting for processes to exit.
Nov 25 10:50:33 np0005535469 systemd-logind[791]: Removed session 17.
Nov 25 10:50:40 np0005535469 systemd-logind[791]: New session 18 of user zuul.
Nov 25 10:50:40 np0005535469 systemd[1]: Started Session 18 of User zuul.
Nov 25 10:50:46 np0005535469 python3[71981]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:50:48 np0005535469 python3[72077]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 10:50:50 np0005535469 python3[72104]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:50:50 np0005535469 python3[72130]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:50 np0005535469 kernel: loop: module loaded
Nov 25 10:50:50 np0005535469 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 10:50:50 np0005535469 python3[72165]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:50 np0005535469 lvm[72168]: PV /dev/loop3 not used.
Nov 25 10:50:51 np0005535469 lvm[72177]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:50:51 np0005535469 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 10:50:51 np0005535469 lvm[72179]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 10:50:51 np0005535469 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 25 10:50:51 np0005535469 python3[72257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:50:52 np0005535469 python3[72330]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764085851.342096-36451-271022559719514/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:50:52 np0005535469 python3[72380]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:50:52 np0005535469 systemd[1]: Reloading.
Nov 25 10:50:53 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:50:53 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:50:53 np0005535469 systemd[1]: Starting Ceph OSD losetup...
Nov 25 10:50:53 np0005535469 bash[72420]: /dev/loop3: [64513]:4194940 (/var/lib/ceph-osd-0.img)
Nov 25 10:50:53 np0005535469 systemd[1]: Finished Ceph OSD losetup.
Nov 25 10:50:53 np0005535469 lvm[72421]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:50:53 np0005535469 lvm[72421]: VG ceph_vg0 finished
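The copied unit file's content is not logged (`content=NOT_LOGGING_PARAMETER`), but its observable behavior — a oneshot "Ceph OSD losetup" job whose `bash` step prints the loop mapping, re-attaching the image if needed — suggests a template along these lines. This is a hypothetical reconstruction, not the actual `ceph-osd-losetup.service.j2` output:

```ini
# Hypothetical reconstruction of /etc/systemd/system/ceph-osd-losetup-0.service
[Unit]
Description=Ceph OSD losetup
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Query the loop device; if it is not attached, re-attach the backing image.
ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

[Install]
WantedBy=multi-user.target
```

Such a unit makes the loop attachment survive reboots, since loop devices set up ad hoc with `losetup` are not persistent.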
Nov 25 10:50:53 np0005535469 python3[72447]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 10:50:55 np0005535469 python3[72474]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:50:55 np0005535469 python3[72500]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:55 np0005535469 kernel: loop4: detected capacity change from 0 to 41943040
Nov 25 10:50:56 np0005535469 python3[72532]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:56 np0005535469 lvm[72535]: PV /dev/loop4 not used.
Nov 25 10:50:56 np0005535469 lvm[72537]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 10:50:56 np0005535469 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 25 10:50:56 np0005535469 lvm[72548]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 10:50:56 np0005535469 lvm[72548]: VG ceph_vg1 finished
Nov 25 10:50:56 np0005535469 lvm[72544]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 25 10:50:56 np0005535469 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 25 10:50:56 np0005535469 python3[72626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:50:57 np0005535469 python3[72699]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764085856.5109317-36478-88074597513768/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:50:57 np0005535469 python3[72749]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:50:57 np0005535469 systemd[1]: Reloading.
Nov 25 10:50:57 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:50:57 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:50:58 np0005535469 systemd[1]: Starting Ceph OSD losetup...
Nov 25 10:50:58 np0005535469 bash[72790]: /dev/loop4: [64513]:4327909 (/var/lib/ceph-osd-1.img)
Nov 25 10:50:58 np0005535469 systemd[1]: Finished Ceph OSD losetup.
Nov 25 10:50:58 np0005535469 lvm[72791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 10:50:58 np0005535469 lvm[72791]: VG ceph_vg1 finished
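The `bash[...]` lines such as `/dev/loop4: [64513]:4327909 (/var/lib/ceph-osd-1.img)` are `losetup`'s status format: loop device, then the backing filesystem's device number in brackets, the backing file's inode, and its path. A small parser for that line shape (field meanings as described in util-linux's losetup):

```python
import re

# "<loopdev>: [<st_dev>]:<inode> (<backing file>)"
LOSETUP_RE = re.compile(
    r"^(?P<dev>/dev/\S+): \[(?P<st_dev>\d+)\]:(?P<inode>\d+) \((?P<path>.+)\)$"
)

def parse_losetup_status(line: str) -> dict:
    m = LOSETUP_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized losetup output: {line!r}")
    d = m.groupdict()
    d["st_dev"] = int(d["st_dev"])
    d["inode"] = int(d["inode"])
    return d

info = parse_losetup_status("/dev/loop3: [64513]:4194940 (/var/lib/ceph-osd-0.img)")
print(info["dev"], "->", info["path"])
```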
Nov 25 10:50:58 np0005535469 chronyd[58457]: Selected source 162.159.200.1 (pool.ntp.org)
Nov 25 10:50:58 np0005535469 python3[72817]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 10:51:00 np0005535469 python3[72844]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:51:00 np0005535469 python3[72870]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:51:00 np0005535469 kernel: loop5: detected capacity change from 0 to 41943040
Nov 25 10:51:00 np0005535469 python3[72902]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:51:00 np0005535469 lvm[72905]: PV /dev/loop5 not used.
Nov 25 10:51:01 np0005535469 lvm[72907]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 10:51:01 np0005535469 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 25 10:51:01 np0005535469 lvm[72918]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 10:51:01 np0005535469 lvm[72918]: VG ceph_vg2 finished
Nov 25 10:51:01 np0005535469 lvm[72910]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 25 10:51:01 np0005535469 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 25 10:51:01 np0005535469 python3[72996]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:51:02 np0005535469 python3[73069]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764085861.356416-36505-210820717681198/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:51:02 np0005535469 python3[73119]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:51:02 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:02 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:02 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:02 np0005535469 systemd[1]: Starting Ceph OSD losetup...
Nov 25 10:51:02 np0005535469 bash[73159]: /dev/loop5: [64513]:4327911 (/var/lib/ceph-osd-2.img)
Nov 25 10:51:02 np0005535469 systemd[1]: Finished Ceph OSD losetup.
Nov 25 10:51:02 np0005535469 lvm[73160]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 10:51:02 np0005535469 lvm[73160]: VG ceph_vg2 finished
Nov 25 10:51:04 np0005535469 python3[73185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:51:07 np0005535469 python3[73280]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 10:51:09 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:51:09 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:51:09 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:51:09 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:51:09 np0005535469 systemd[1]: run-r992913709e03431eb603a3f371bf0755.service: Deactivated successfully.
Nov 25 10:51:09 np0005535469 python3[73390]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:51:10 np0005535469 python3[73419]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:51:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
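`cephadm ls --no-detail`, run above to check for pre-existing daemons, prints a JSON array of daemon records. A sketch of consuming that output — the sample document below is illustrative, not captured from this run, and the exact fields emitted by `--no-detail` may differ by cephadm version:

```python
import json

# Illustrative stand-in for `cephadm ls --no-detail` output; on a freshly
# provisioned node like this one the real result would be an empty array.
sample = """
[
  {"style": "cephadm:v1", "name": "mon.np0005535469",
   "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92"},
  {"style": "cephadm:v1", "name": "mgr.np0005535469.abcdef",
   "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92"}
]
"""
daemons = json.loads(sample)
names = [d["name"] for d in daemons]
print(names)
```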
Nov 25 10:51:10 np0005535469 python3[73484]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:51:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:11 np0005535469 python3[73510]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:51:12 np0005535469 python3[73588]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:51:12 np0005535469 python3[73661]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764085871.7722826-36652-242713555347436/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:51:13 np0005535469 python3[73763]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:51:13 np0005535469 python3[73836]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764085872.8667777-36670-219775511826961/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:51:13 np0005535469 python3[73886]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:51:14 np0005535469 python3[73914]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:51:14 np0005535469 python3[73942]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:51:14 np0005535469 python3[73970]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
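The bootstrap command above is hard to read as one logged line (the stray `\--` backslashes come through verbatim from the playbook template). Reflowed for readability, with the flags exactly as logged:

```shell
# The same cephadm bootstrap invocation, one argument per element.
bootstrap_cmd=(
  /usr/sbin/cephadm bootstrap
  --skip-firewalld --skip-prepare-host
  --ssh-private-key /home/ceph-admin/.ssh/id_rsa
  --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub
  --ssh-user ceph-admin
  --allow-fqdn-hostname
  --output-keyring /etc/ceph/ceph.client.admin.keyring
  --output-config /etc/ceph/ceph.conf
  --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92
  --config /home/ceph-admin/assimilate_ceph.conf
  --single-host-defaults
  --skip-monitoring-stack --skip-dashboard
  --mon-ip 192.168.122.100
)
printf '%s\n' "${bootstrap_cmd[@]}"
```

Running it of course requires cephadm and root on the target host; this block only restates the logged arguments.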
Nov 25 10:51:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:15 np0005535469 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 10:51:15 np0005535469 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 10:51:15 np0005535469 systemd-logind[791]: New session 19 of user ceph-admin.
Nov 25 10:51:15 np0005535469 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 10:51:15 np0005535469 systemd[1]: Starting User Manager for UID 42477...
Nov 25 10:51:15 np0005535469 systemd[73990]: Queued start job for default target Main User Target.
Nov 25 10:51:15 np0005535469 systemd[73990]: Created slice User Application Slice.
Nov 25 10:51:15 np0005535469 systemd[73990]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 10:51:15 np0005535469 systemd[73990]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 10:51:15 np0005535469 systemd[73990]: Reached target Paths.
Nov 25 10:51:15 np0005535469 systemd[73990]: Reached target Timers.
Nov 25 10:51:15 np0005535469 systemd[73990]: Starting D-Bus User Message Bus Socket...
Nov 25 10:51:15 np0005535469 systemd[73990]: Starting Create User's Volatile Files and Directories...
Nov 25 10:51:15 np0005535469 systemd[73990]: Listening on D-Bus User Message Bus Socket.
Nov 25 10:51:15 np0005535469 systemd[73990]: Reached target Sockets.
Nov 25 10:51:15 np0005535469 systemd[73990]: Finished Create User's Volatile Files and Directories.
Nov 25 10:51:15 np0005535469 systemd[73990]: Reached target Basic System.
Nov 25 10:51:15 np0005535469 systemd[73990]: Reached target Main User Target.
Nov 25 10:51:15 np0005535469 systemd[73990]: Startup finished in 151ms.
Nov 25 10:51:15 np0005535469 systemd[1]: Started User Manager for UID 42477.
Nov 25 10:51:15 np0005535469 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 10:51:15 np0005535469 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 10:51:15 np0005535469 systemd-logind[791]: Session 19 logged out. Waiting for processes to exit.
Nov 25 10:51:15 np0005535469 systemd-logind[791]: Removed session 19.
Nov 25 10:51:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-compat3111727525-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 10:51:25 np0005535469 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 10:51:25 np0005535469 systemd[73990]: Activating special unit Exit the Session...
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped target Main User Target.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped target Basic System.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped target Paths.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped target Sockets.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped target Timers.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 10:51:25 np0005535469 systemd[73990]: Closed D-Bus User Message Bus Socket.
Nov 25 10:51:25 np0005535469 systemd[73990]: Stopped Create User's Volatile Files and Directories.
Nov 25 10:51:25 np0005535469 systemd[73990]: Removed slice User Application Slice.
Nov 25 10:51:25 np0005535469 systemd[73990]: Reached target Shutdown.
Nov 25 10:51:25 np0005535469 systemd[73990]: Finished Exit the Session.
Nov 25 10:51:25 np0005535469 systemd[73990]: Reached target Exit the Session.
Nov 25 10:51:25 np0005535469 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 10:51:25 np0005535469 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 10:51:25 np0005535469 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 10:51:25 np0005535469 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 10:51:25 np0005535469 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 10:51:25 np0005535469 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 10:51:25 np0005535469 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 10:51:29 np0005535469 podman[74045]: 2025-11-25 15:51:29.32158646 +0000 UTC m=+13.553264047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.409615599 +0000 UTC m=+0.057763156 container create 36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147 (image=quay.io/ceph/ceph:v18, name=eager_mclaren, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:51:29 np0005535469 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 10:51:29 np0005535469 systemd[1]: Started libpod-conmon-36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147.scope.
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.378234734 +0000 UTC m=+0.026382381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.538143841 +0000 UTC m=+0.186291488 container init 36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147 (image=quay.io/ceph/ceph:v18, name=eager_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.550910968 +0000 UTC m=+0.199058515 container start 36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147 (image=quay.io/ceph/ceph:v18, name=eager_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.554486626 +0000 UTC m=+0.202634193 container attach 36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147 (image=quay.io/ceph/ceph:v18, name=eager_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:51:29 np0005535469 eager_mclaren[74119]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 10:51:29 np0005535469 systemd[1]: libpod-36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147.scope: Deactivated successfully.
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.834751523 +0000 UTC m=+0.482899070 container died 36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147 (image=quay.io/ceph/ceph:v18, name=eager_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:51:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dfda428ab49f159b8f6a7bc82f731ccaa57fe703625691f8aea1cbe0209feed2-merged.mount: Deactivated successfully.
Nov 25 10:51:29 np0005535469 podman[74105]: 2025-11-25 15:51:29.908311578 +0000 UTC m=+0.556459165 container remove 36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147 (image=quay.io/ceph/ceph:v18, name=eager_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:29 np0005535469 systemd[1]: libpod-conmon-36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147.scope: Deactivated successfully.
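Each of these short-lived containers leaves a complete lifecycle trail in the podman events: image pull, then container create → init → start → attach → died → remove, consistent with one-shot `podman run --rm` invocations made by cephadm's version check. A sketch that recovers the event order per container ID from such lines:

```python
import re
from collections import defaultdict

# podman event lines embed "container <event> <64-hex container id>".
EVENT_RE = re.compile(r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})")

def lifecycle(lines):
    """Map container ID -> ordered list of events seen in the log lines."""
    events = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events[m.group("cid")].append(m.group("event"))
    return dict(events)

cid = "36fe12074d3b01dddbc7f31df79d39abd8f8a91abc6d6beb757b50133e1e8147"
sample = [f"... container {e} {cid} (image=quay.io/ceph/ceph:v18, ...)"
          for e in ("create", "init", "start", "attach", "died", "remove")]
print(lifecycle(sample)[cid])
```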
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:30.01226374 +0000 UTC m=+0.074531541 container create 8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b (image=quay.io/ceph/ceph:v18, name=great_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:29.971679565 +0000 UTC m=+0.033947406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:30 np0005535469 systemd[1]: Started libpod-conmon-8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b.scope.
Nov 25 10:51:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:30.170911113 +0000 UTC m=+0.233178934 container init 8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b (image=quay.io/ceph/ceph:v18, name=great_dubinsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:30.176249549 +0000 UTC m=+0.238517360 container start 8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b (image=quay.io/ceph/ceph:v18, name=great_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:30 np0005535469 great_dubinsky[74157]: 167 167
Nov 25 10:51:30 np0005535469 systemd[1]: libpod-8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b.scope: Deactivated successfully.
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:30.208673082 +0000 UTC m=+0.270940913 container attach 8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b (image=quay.io/ceph/ceph:v18, name=great_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:30.209030612 +0000 UTC m=+0.271298413 container died 8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b (image=quay.io/ceph/ceph:v18, name=great_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 10:51:30 np0005535469 podman[74140]: 2025-11-25 15:51:30.329521385 +0000 UTC m=+0.391789186 container remove 8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b (image=quay.io/ceph/ceph:v18, name=great_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 10:51:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:30 np0005535469 systemd[1]: libpod-conmon-8aba1fd5b76a7d8e7c24eb9863f51060d5357bda47ab050dcb46ca59ed5e658b.scope: Deactivated successfully.
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.448463807 +0000 UTC m=+0.085494092 container create a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769 (image=quay.io/ceph/ceph:v18, name=cool_sammet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:30 np0005535469 systemd[1]: Started libpod-conmon-a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769.scope.
Nov 25 10:51:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.401862156 +0000 UTC m=+0.038892491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.523927083 +0000 UTC m=+0.160957388 container init a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769 (image=quay.io/ceph/ceph:v18, name=cool_sammet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.530784269 +0000 UTC m=+0.167814554 container start a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769 (image=quay.io/ceph/ceph:v18, name=cool_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:30 np0005535469 cool_sammet[74188]: AQCC0CVpZSOpIBAAITujCFl4MU6xJOHeYN2KOw==
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.550303151 +0000 UTC m=+0.187333466 container attach a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769 (image=quay.io/ceph/ceph:v18, name=cool_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 10:51:30 np0005535469 systemd[1]: libpod-a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769.scope: Deactivated successfully.
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.550716402 +0000 UTC m=+0.187746717 container died a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769 (image=quay.io/ceph/ceph:v18, name=cool_sammet, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:30 np0005535469 podman[74174]: 2025-11-25 15:51:30.596018997 +0000 UTC m=+0.233049312 container remove a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769 (image=quay.io/ceph/ceph:v18, name=cool_sammet, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:51:30 np0005535469 systemd[1]: libpod-conmon-a09d6a9ad5ef4397d46eeabf0333687f4c3f11eb026e8f3f5a0709eec0ae8769.scope: Deactivated successfully.
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.680059277 +0000 UTC m=+0.052303236 container create 824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f (image=quay.io/ceph/ceph:v18, name=vigorous_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:51:30 np0005535469 systemd[1]: Started libpod-conmon-824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f.scope.
Nov 25 10:51:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.74147497 +0000 UTC m=+0.113718929 container init 824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f (image=quay.io/ceph/ceph:v18, name=vigorous_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.747723351 +0000 UTC m=+0.119967330 container start 824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f (image=quay.io/ceph/ceph:v18, name=vigorous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.755477682 +0000 UTC m=+0.127721641 container attach 824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f (image=quay.io/ceph/ceph:v18, name=vigorous_mcclintock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.65999379 +0000 UTC m=+0.032237769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:30 np0005535469 vigorous_mcclintock[74225]: AQCC0CVpHQK0LhAAhCAYvMftClzpiFYvCYmqpg==
Nov 25 10:51:30 np0005535469 systemd[1]: libpod-824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f.scope: Deactivated successfully.
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.789939591 +0000 UTC m=+0.162183590 container died 824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f (image=quay.io/ceph/ceph:v18, name=vigorous_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:30 np0005535469 podman[74209]: 2025-11-25 15:51:30.838244557 +0000 UTC m=+0.210488526 container remove 824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f (image=quay.io/ceph/ceph:v18, name=vigorous_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:51:30 np0005535469 systemd[1]: libpod-conmon-824b59dd1a2ee028d8b572c8264cd2d18b7bf0659308cb36a6c8fa1aa4118c1f.scope: Deactivated successfully.
Nov 25 10:51:30 np0005535469 podman[74243]: 2025-11-25 15:51:30.90806292 +0000 UTC m=+0.046978001 container create 3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d (image=quay.io/ceph/ceph:v18, name=loving_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 10:51:30 np0005535469 systemd[1]: Started libpod-conmon-3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d.scope.
Nov 25 10:51:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:30 np0005535469 podman[74243]: 2025-11-25 15:51:30.887300715 +0000 UTC m=+0.026215826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:31 np0005535469 podman[74243]: 2025-11-25 15:51:31.26918162 +0000 UTC m=+0.408096751 container init 3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d (image=quay.io/ceph/ceph:v18, name=loving_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 10:51:31 np0005535469 podman[74243]: 2025-11-25 15:51:31.27944442 +0000 UTC m=+0.418359551 container start 3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d (image=quay.io/ceph/ceph:v18, name=loving_agnesi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 10:51:31 np0005535469 podman[74243]: 2025-11-25 15:51:31.284318342 +0000 UTC m=+0.423233483 container attach 3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d (image=quay.io/ceph/ceph:v18, name=loving_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:51:31 np0005535469 loving_agnesi[74259]: AQCD0CVpWo4SExAAwj7Tc7oAXHwCC/rE/aa+Zg==
Nov 25 10:51:31 np0005535469 podman[74243]: 2025-11-25 15:51:31.327333275 +0000 UTC m=+0.466248376 container died 3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d (image=quay.io/ceph/ceph:v18, name=loving_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:51:31 np0005535469 systemd[1]: libpod-3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d.scope: Deactivated successfully.
Nov 25 10:51:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-86541761c9f3a4c6cb0c08c4722bd0e29dbbfa6b62ceca20b83913641c3235bc-merged.mount: Deactivated successfully.
Nov 25 10:51:31 np0005535469 podman[74243]: 2025-11-25 15:51:31.374222182 +0000 UTC m=+0.513137283 container remove 3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d (image=quay.io/ceph/ceph:v18, name=loving_agnesi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 10:51:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:31 np0005535469 systemd[1]: libpod-conmon-3fc4f40742ad5b6b651ee551c95d147bc940219f1447e1b001eee162ff97b17d.scope: Deactivated successfully.
Nov 25 10:51:31 np0005535469 podman[74278]: 2025-11-25 15:51:31.443840599 +0000 UTC m=+0.046671382 container create d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43 (image=quay.io/ceph/ceph:v18, name=amazing_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 10:51:31 np0005535469 systemd[1]: Started libpod-conmon-d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43.scope.
Nov 25 10:51:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6d7c091ea3cf0a032adcdab0311d0e959a092b870458e3c187fface4b5ae1/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:31 np0005535469 podman[74278]: 2025-11-25 15:51:31.425170801 +0000 UTC m=+0.028001594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:31 np0005535469 podman[74278]: 2025-11-25 15:51:31.521561067 +0000 UTC m=+0.124391890 container init d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43 (image=quay.io/ceph/ceph:v18, name=amazing_proskuriakova, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:51:31 np0005535469 podman[74278]: 2025-11-25 15:51:31.531872259 +0000 UTC m=+0.134703042 container start d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43 (image=quay.io/ceph/ceph:v18, name=amazing_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 10:51:31 np0005535469 podman[74278]: 2025-11-25 15:51:31.534774258 +0000 UTC m=+0.137605071 container attach d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43 (image=quay.io/ceph/ceph:v18, name=amazing_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:51:31 np0005535469 amazing_proskuriakova[74294]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 25 10:51:31 np0005535469 amazing_proskuriakova[74294]: setting min_mon_release = pacific
Nov 25 10:51:31 np0005535469 amazing_proskuriakova[74294]: /usr/bin/monmaptool: set fsid to d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:31 np0005535469 amazing_proskuriakova[74294]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 25 10:51:31 np0005535469 systemd[1]: libpod-d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43.scope: Deactivated successfully.
Nov 25 10:51:31 np0005535469 podman[74301]: 2025-11-25 15:51:31.634082723 +0000 UTC m=+0.041887662 container died d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43 (image=quay.io/ceph/ceph:v18, name=amazing_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:51:31 np0005535469 podman[74301]: 2025-11-25 15:51:31.672570122 +0000 UTC m=+0.080375031 container remove d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43 (image=quay.io/ceph/ceph:v18, name=amazing_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:31 np0005535469 systemd[1]: libpod-conmon-d676b8c7ce34abccd2f6246172864295f4246e301910e15d284fdd895f9bbe43.scope: Deactivated successfully.
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.743403172 +0000 UTC m=+0.044384200 container create 11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89 (image=quay.io/ceph/ceph:v18, name=zen_robinson, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:51:31 np0005535469 systemd[1]: Started libpod-conmon-11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89.scope.
Nov 25 10:51:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5376b9df993d730f3fd6382e5872ea97ecd881847c126d499ff1ba6f90af1a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5376b9df993d730f3fd6382e5872ea97ecd881847c126d499ff1ba6f90af1a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5376b9df993d730f3fd6382e5872ea97ecd881847c126d499ff1ba6f90af1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5376b9df993d730f3fd6382e5872ea97ecd881847c126d499ff1ba6f90af1a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.814208761 +0000 UTC m=+0.115189849 container init 11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89 (image=quay.io/ceph/ceph:v18, name=zen_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.719191942 +0000 UTC m=+0.020173020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.819906416 +0000 UTC m=+0.120887464 container start 11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89 (image=quay.io/ceph/ceph:v18, name=zen_robinson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.831995707 +0000 UTC m=+0.132976765 container attach 11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89 (image=quay.io/ceph/ceph:v18, name=zen_robinson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:31 np0005535469 systemd[1]: libpod-11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89.scope: Deactivated successfully.
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.921056854 +0000 UTC m=+0.222037882 container died 11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89 (image=quay.io/ceph/ceph:v18, name=zen_robinson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:31 np0005535469 podman[74313]: 2025-11-25 15:51:31.970697116 +0000 UTC m=+0.271678164 container remove 11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89 (image=quay.io/ceph/ceph:v18, name=zen_robinson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:51:31 np0005535469 systemd[1]: libpod-conmon-11eb3b73ffcd768f797f681694b32087676bd2d5ce4238ed1c3afd7ece593f89.scope: Deactivated successfully.
Nov 25 10:51:32 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:32 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:32 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:32 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:32 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:32 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:32 np0005535469 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 10:51:32 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:32 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:32 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:32 np0005535469 systemd[1]: Reached target Ceph cluster d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:51:32 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:32 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:32 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:33 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:33 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:33 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:33 np0005535469 systemd[1]: Created slice Slice /system/ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:51:33 np0005535469 systemd[1]: Reached target System Time Set.
Nov 25 10:51:33 np0005535469 systemd[1]: Reached target System Time Synchronized.
Nov 25 10:51:33 np0005535469 systemd[1]: Starting Ceph mon.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:51:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:33 np0005535469 podman[74606]: 2025-11-25 15:51:33.64284452 +0000 UTC m=+0.068631620 container create f6c3a510c25fd48e1bdd96510ef8bbc99d5a917d8b607a60aac1e6a9a37353de (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:51:33 np0005535469 podman[74606]: 2025-11-25 15:51:33.613990525 +0000 UTC m=+0.039777665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafb24f3116d3914fcee45b4c0afa2b421b41262e301e2d744509b787c2ee63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafb24f3116d3914fcee45b4c0afa2b421b41262e301e2d744509b787c2ee63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafb24f3116d3914fcee45b4c0afa2b421b41262e301e2d744509b787c2ee63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafb24f3116d3914fcee45b4c0afa2b421b41262e301e2d744509b787c2ee63/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:33 np0005535469 podman[74606]: 2025-11-25 15:51:33.799368756 +0000 UTC m=+0.225155916 container init f6c3a510c25fd48e1bdd96510ef8bbc99d5a917d8b607a60aac1e6a9a37353de (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 10:51:33 np0005535469 podman[74606]: 2025-11-25 15:51:33.806105189 +0000 UTC m=+0.231892289 container start f6c3a510c25fd48e1bdd96510ef8bbc99d5a917d8b607a60aac1e6a9a37353de (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 10:51:33 np0005535469 bash[74606]: f6c3a510c25fd48e1bdd96510ef8bbc99d5a917d8b607a60aac1e6a9a37353de
Nov 25 10:51:33 np0005535469 systemd[1]: Started Ceph mon.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: pidfile_write: ignore empty --pid-file
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: load: jerasure load: lrc 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Git sha 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: DB SUMMARY
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: DB Session ID:  AVDQ51M3QSYDCNZ24FQB
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                                     Options.env: 0x5624fe7abc40
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                                Options.info_log: 0x562500752e80
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                                 Options.wal_dir: 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                    Options.write_buffer_manager: 0x562500762b40
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                               Options.row_cache: None
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                              Options.wal_filter: None
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.wal_compression: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.max_background_jobs: 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Compression algorithms supported:
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kZSTD supported: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kXpressCompression supported: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kZlibCompression supported: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:           Options.merge_operator: 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:        Options.compaction_filter: None
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562500752a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56250074b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.compression: NoCompression
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.num_levels: 7
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 649eb26e-5d3e-4279-9fd6-2380e7a57b81
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764085893854847, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764085893879528, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "AVDQ51M3QSYDCNZ24FQB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764085893879713, "job": 1, "event": "recovery_finished"}
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562500774e00
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: DB pointer 0x56250087e000
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.025       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.025       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.025       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.025       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56250074b1f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@-1(???) e0 preinit fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 10:51:33 np0005535469 podman[74647]: 2025-11-25 15:51:33.94815692 +0000 UTC m=+0.048163653 container create 60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31 (image=quay.io/ceph/ceph:v18, name=mystifying_franklin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-25T15:51:31.872932Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).mds e1 new map
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 10:51:33 np0005535469 systemd[1]: Started libpod-conmon-60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31.scope.
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mkfs d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 10:51:33 np0005535469 ceph-mon[74625]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 10:51:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:34 np0005535469 podman[74647]: 2025-11-25 15:51:33.923319223 +0000 UTC m=+0.023325966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0059b87f96d930c7e2a6420e74bafd1a394a250d6e3b3d212059024715b48821/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0059b87f96d930c7e2a6420e74bafd1a394a250d6e3b3d212059024715b48821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0059b87f96d930c7e2a6420e74bafd1a394a250d6e3b3d212059024715b48821/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 podman[74647]: 2025-11-25 15:51:34.047897728 +0000 UTC m=+0.147904501 container init 60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31 (image=quay.io/ceph/ceph:v18, name=mystifying_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 10:51:34 np0005535469 podman[74647]: 2025-11-25 15:51:34.056960395 +0000 UTC m=+0.156967118 container start 60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31 (image=quay.io/ceph/ceph:v18, name=mystifying_franklin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:51:34 np0005535469 podman[74647]: 2025-11-25 15:51:34.061065156 +0000 UTC m=+0.161071919 container attach 60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31 (image=quay.io/ceph/ceph:v18, name=mystifying_franklin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 10:51:34 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 10:51:34 np0005535469 ceph-mon[74625]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272787444' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:  cluster:
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    id:     d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    health: HEALTH_OK
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]: 
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:  services:
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    mon: 1 daemons, quorum compute-0 (age 0.50403s)
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    mgr: no daemons active
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    osd: 0 osds: 0 up, 0 in
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]: 
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:  data:
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    pools:   0 pools, 0 pgs
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    objects: 0 objects, 0 B
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    usage:   0 B used, 0 B / 0 B avail
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]:    pgs:     
Nov 25 10:51:34 np0005535469 mystifying_franklin[74680]: 
Nov 25 10:51:34 np0005535469 systemd[1]: libpod-60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31.scope: Deactivated successfully.
Nov 25 10:51:34 np0005535469 podman[74647]: 2025-11-25 15:51:34.48246874 +0000 UTC m=+0.582475473 container died 60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31 (image=quay.io/ceph/ceph:v18, name=mystifying_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0059b87f96d930c7e2a6420e74bafd1a394a250d6e3b3d212059024715b48821-merged.mount: Deactivated successfully.
Nov 25 10:51:34 np0005535469 podman[74647]: 2025-11-25 15:51:34.535781233 +0000 UTC m=+0.635787956 container remove 60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31 (image=quay.io/ceph/ceph:v18, name=mystifying_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:34 np0005535469 systemd[1]: libpod-conmon-60e17d46610a7831db9f663be3b18c87c04c543e2b54b0f2b8de00605cca3c31.scope: Deactivated successfully.
Nov 25 10:51:34 np0005535469 podman[74720]: 2025-11-25 15:51:34.604249848 +0000 UTC m=+0.047175906 container create b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35 (image=quay.io/ceph/ceph:v18, name=gifted_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:34 np0005535469 systemd[1]: Started libpod-conmon-b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35.scope.
Nov 25 10:51:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:34 np0005535469 podman[74720]: 2025-11-25 15:51:34.585063906 +0000 UTC m=+0.027989984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b4950d7c22aecdccea9c493d549f9ab107c9ff260154e566b42c99e7a2d3be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b4950d7c22aecdccea9c493d549f9ab107c9ff260154e566b42c99e7a2d3be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b4950d7c22aecdccea9c493d549f9ab107c9ff260154e566b42c99e7a2d3be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b4950d7c22aecdccea9c493d549f9ab107c9ff260154e566b42c99e7a2d3be/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:34 np0005535469 podman[74720]: 2025-11-25 15:51:34.701250412 +0000 UTC m=+0.144176470 container init b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35 (image=quay.io/ceph/ceph:v18, name=gifted_buck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 10:51:34 np0005535469 podman[74720]: 2025-11-25 15:51:34.710448232 +0000 UTC m=+0.153374320 container start b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35 (image=quay.io/ceph/ceph:v18, name=gifted_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 10:51:34 np0005535469 podman[74720]: 2025-11-25 15:51:34.714196035 +0000 UTC m=+0.157122123 container attach b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35 (image=quay.io/ceph/ceph:v18, name=gifted_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:51:35 np0005535469 ceph-mon[74625]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 10:51:35 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 10:51:35 np0005535469 ceph-mon[74625]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3869570850' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 10:51:35 np0005535469 ceph-mon[74625]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3869570850' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 10:51:35 np0005535469 gifted_buck[74737]: 
Nov 25 10:51:35 np0005535469 gifted_buck[74737]: [global]
Nov 25 10:51:35 np0005535469 gifted_buck[74737]: #011fsid = d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:35 np0005535469 gifted_buck[74737]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 10:51:35 np0005535469 gifted_buck[74737]: #011osd_crush_chooseleaf_type = 0
Nov 25 10:51:35 np0005535469 systemd[1]: libpod-b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35.scope: Deactivated successfully.
Nov 25 10:51:35 np0005535469 podman[74720]: 2025-11-25 15:51:35.105774934 +0000 UTC m=+0.548701022 container died b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35 (image=quay.io/ceph/ceph:v18, name=gifted_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:51:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-49b4950d7c22aecdccea9c493d549f9ab107c9ff260154e566b42c99e7a2d3be-merged.mount: Deactivated successfully.
Nov 25 10:51:35 np0005535469 podman[74720]: 2025-11-25 15:51:35.153184946 +0000 UTC m=+0.596111034 container remove b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35 (image=quay.io/ceph/ceph:v18, name=gifted_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:35 np0005535469 systemd[1]: libpod-conmon-b86e61196ac705f0369045ad43d694d97abe823532abeb2c1c6e174bdb185a35.scope: Deactivated successfully.
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.251542576 +0000 UTC m=+0.072596438 container create bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589 (image=quay.io/ceph/ceph:v18, name=dreamy_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:51:35 np0005535469 systemd[1]: Started libpod-conmon-bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589.scope.
Nov 25 10:51:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.215412313 +0000 UTC m=+0.036466265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fa6698488a46d821cec75b428310787b031e3c5e9d9c9404316d07b4266cb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fa6698488a46d821cec75b428310787b031e3c5e9d9c9404316d07b4266cb0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fa6698488a46d821cec75b428310787b031e3c5e9d9c9404316d07b4266cb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fa6698488a46d821cec75b428310787b031e3c5e9d9c9404316d07b4266cb0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.330206951 +0000 UTC m=+0.151260863 container init bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589 (image=quay.io/ceph/ceph:v18, name=dreamy_engelbart, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.34931026 +0000 UTC m=+0.170364172 container start bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589 (image=quay.io/ceph/ceph:v18, name=dreamy_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.364421913 +0000 UTC m=+0.185475825 container attach bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589 (image=quay.io/ceph/ceph:v18, name=dreamy_engelbart, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:35 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:51:35 np0005535469 ceph-mon[74625]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135267195' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:51:35 np0005535469 systemd[1]: libpod-bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589.scope: Deactivated successfully.
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.727811834 +0000 UTC m=+0.548865706 container died bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589 (image=quay.io/ceph/ceph:v18, name=dreamy_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-99fa6698488a46d821cec75b428310787b031e3c5e9d9c9404316d07b4266cb0-merged.mount: Deactivated successfully.
Nov 25 10:51:35 np0005535469 podman[74777]: 2025-11-25 15:51:35.968136843 +0000 UTC m=+0.789190725 container remove bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589 (image=quay.io/ceph/ceph:v18, name=dreamy_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:51:36 np0005535469 systemd[1]: Stopping Ceph mon.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:51:36 np0005535469 systemd[1]: libpod-conmon-bdf45dfca9db3a4443cfca1dd0637b62b01882bb7f36eb55125ee364ba09b589.scope: Deactivated successfully.
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: from='client.? 192.168.122.100:0/3869570850' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: from='client.? 192.168.122.100:0/3869570850' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: mon.compute-0@0(leader) e1 shutdown
Nov 25 10:51:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0[74621]: 2025-11-25T15:51:36.191+0000 7f3ee8d9e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 10:51:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0[74621]: 2025-11-25T15:51:36.191+0000 7f3ee8d9e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 10:51:36 np0005535469 ceph-mon[74625]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 10:51:36 np0005535469 podman[74861]: 2025-11-25 15:51:36.223354778 +0000 UTC m=+0.087138426 container died f6c3a510c25fd48e1bdd96510ef8bbc99d5a917d8b607a60aac1e6a9a37353de (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-faafb24f3116d3914fcee45b4c0afa2b421b41262e301e2d744509b787c2ee63-merged.mount: Deactivated successfully.
Nov 25 10:51:36 np0005535469 podman[74861]: 2025-11-25 15:51:36.560588377 +0000 UTC m=+0.424372015 container remove f6c3a510c25fd48e1bdd96510ef8bbc99d5a917d8b607a60aac1e6a9a37353de (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:51:36 np0005535469 bash[74861]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0
Nov 25 10:51:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:36 np0005535469 systemd[1]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mon.compute-0.service: Deactivated successfully.
Nov 25 10:51:36 np0005535469 systemd[1]: Stopped Ceph mon.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:51:36 np0005535469 systemd[1]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mon.compute-0.service: Consumed 1.048s CPU time.
Nov 25 10:51:36 np0005535469 systemd[1]: Starting Ceph mon.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:51:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:51:36 np0005535469 podman[74968]: 2025-11-25 15:51:36.92585968 +0000 UTC m=+0.071703404 container create 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 10:51:36 np0005535469 podman[74968]: 2025-11-25 15:51:36.87520057 +0000 UTC m=+0.021044334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fb54874e107b5ace52771f166a79e9e749d28bd594064879ee8aab0723063/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fb54874e107b5ace52771f166a79e9e749d28bd594064879ee8aab0723063/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fb54874e107b5ace52771f166a79e9e749d28bd594064879ee8aab0723063/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96fb54874e107b5ace52771f166a79e9e749d28bd594064879ee8aab0723063/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 podman[74968]: 2025-11-25 15:51:37.017874908 +0000 UTC m=+0.163718662 container init 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:37 np0005535469 podman[74968]: 2025-11-25 15:51:37.022983357 +0000 UTC m=+0.168827091 container start 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:37 np0005535469 bash[74968]: 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b
Nov 25 10:51:37 np0005535469 systemd[1]: Started Ceph mon.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: pidfile_write: ignore empty --pid-file
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: load: jerasure load: lrc 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Git sha 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: DB SUMMARY
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: DB Session ID:  9WO1M855NQQLW0TZFZN0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55680 ; 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                                     Options.env: 0x563cc686fc40
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                                Options.info_log: 0x563cc838b040
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                                 Options.wal_dir: 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                    Options.write_buffer_manager: 0x563cc839ab40
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                               Options.row_cache: None
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                              Options.wal_filter: None
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.wal_compression: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.max_background_jobs: 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Compression algorithms supported:
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kZSTD supported: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kXpressCompression supported: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kZlibCompression supported: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:           Options.merge_operator: 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:        Options.compaction_filter: None
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563cc838ac40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563cc83831f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.compression: NoCompression
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.num_levels: 7
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 649eb26e-5d3e-4279-9fd6-2380e7a57b81
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764085897088010, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764085897135542, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53801, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51390, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085897, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764085897135685, "job": 1, "event": "recovery_finished"}
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.177989691 +0000 UTC m=+0.099422360 container create e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be (image=quay.io/ceph/ceph:v18, name=determined_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563cc83ace00
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: DB pointer 0x563cc8436000
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.52 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.52 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,5.03 KB,0.000959635%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???) e1 preinit fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).mds e1 new map
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.116159476 +0000 UTC m=+0.037592165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 10:51:37 np0005535469 systemd[1]: Started libpod-conmon-e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be.scope.
Nov 25 10:51:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a655a27331b13d03b43cb9fa035688abd41e19b5e9be2175af237ac09765cea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a655a27331b13d03b43cb9fa035688abd41e19b5e9be2175af237ac09765cea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a655a27331b13d03b43cb9fa035688abd41e19b5e9be2175af237ac09765cea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.319151007 +0000 UTC m=+0.240583706 container init e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be (image=quay.io/ceph/ceph:v18, name=determined_bassi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.326758654 +0000 UTC m=+0.248191353 container start e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be (image=quay.io/ceph/ceph:v18, name=determined_bassi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.338318859 +0000 UTC m=+0.259751588 container attach e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be (image=quay.io/ceph/ceph:v18, name=determined_bassi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 25 10:51:37 np0005535469 systemd[1]: libpod-e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be.scope: Deactivated successfully.
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.70752293 +0000 UTC m=+0.628955599 container died e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be (image=quay.io/ceph/ceph:v18, name=determined_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:51:37 np0005535469 podman[74986]: 2025-11-25 15:51:37.757316457 +0000 UTC m=+0.678749166 container remove e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be (image=quay.io/ceph/ceph:v18, name=determined_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:37 np0005535469 systemd[1]: libpod-conmon-e82b63c06653fd81b7cd800078f7a0a060d6e3d9120b22e3e05fc239e07d85be.scope: Deactivated successfully.
Nov 25 10:51:37 np0005535469 podman[75079]: 2025-11-25 15:51:37.822857973 +0000 UTC m=+0.046682963 container create 554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b (image=quay.io/ceph/ceph:v18, name=serene_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:37 np0005535469 systemd[1]: Started libpod-conmon-554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b.scope.
Nov 25 10:51:37 np0005535469 podman[75079]: 2025-11-25 15:51:37.799817555 +0000 UTC m=+0.023642545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84b4e3e3f6b00ebbf754232144742646639125f61c1a290ce390d461e887909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84b4e3e3f6b00ebbf754232144742646639125f61c1a290ce390d461e887909/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84b4e3e3f6b00ebbf754232144742646639125f61c1a290ce390d461e887909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:37 np0005535469 podman[75079]: 2025-11-25 15:51:37.932041098 +0000 UTC m=+0.155866058 container init 554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b (image=quay.io/ceph/ceph:v18, name=serene_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 10:51:37 np0005535469 podman[75079]: 2025-11-25 15:51:37.937920658 +0000 UTC m=+0.161745608 container start 554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b (image=quay.io/ceph/ceph:v18, name=serene_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:37 np0005535469 podman[75079]: 2025-11-25 15:51:37.941116266 +0000 UTC m=+0.164941266 container attach 554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b (image=quay.io/ceph/ceph:v18, name=serene_lovelace, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 10:51:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 25 10:51:38 np0005535469 systemd[1]: libpod-554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b.scope: Deactivated successfully.
Nov 25 10:51:38 np0005535469 podman[75079]: 2025-11-25 15:51:38.319841806 +0000 UTC m=+0.543666766 container died 554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b (image=quay.io/ceph/ceph:v18, name=serene_lovelace, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f84b4e3e3f6b00ebbf754232144742646639125f61c1a290ce390d461e887909-merged.mount: Deactivated successfully.
Nov 25 10:51:38 np0005535469 podman[75079]: 2025-11-25 15:51:38.362739474 +0000 UTC m=+0.586564424 container remove 554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b (image=quay.io/ceph/ceph:v18, name=serene_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:51:38 np0005535469 systemd[1]: libpod-conmon-554c6ee83291251cbd6d805ef6659694750f8d68992f809b92890276a33aa46b.scope: Deactivated successfully.
Nov 25 10:51:38 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:38 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:38 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:38 np0005535469 systemd[1]: Reloading.
Nov 25 10:51:38 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:51:38 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:51:38 np0005535469 systemd[1]: Starting Ceph mgr.compute-0.mavpeh for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:51:39 np0005535469 podman[75261]: 2025-11-25 15:51:39.204738029 +0000 UTC m=+0.060707906 container create 3aa2be67788fef6814e22778c09928cd3e95960ab6e39064ed8a7f043048e3e3 (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb370b2da3785878a1a54e87f2873e124ea0e32b0c50259cd6aa50e48208c4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb370b2da3785878a1a54e87f2873e124ea0e32b0c50259cd6aa50e48208c4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb370b2da3785878a1a54e87f2873e124ea0e32b0c50259cd6aa50e48208c4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb370b2da3785878a1a54e87f2873e124ea0e32b0c50259cd6aa50e48208c4f/merged/var/lib/ceph/mgr/ceph-compute-0.mavpeh supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 podman[75261]: 2025-11-25 15:51:39.269250006 +0000 UTC m=+0.125219903 container init 3aa2be67788fef6814e22778c09928cd3e95960ab6e39064ed8a7f043048e3e3 (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:51:39 np0005535469 podman[75261]: 2025-11-25 15:51:39.183277843 +0000 UTC m=+0.039247720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:39 np0005535469 podman[75261]: 2025-11-25 15:51:39.279626589 +0000 UTC m=+0.135596456 container start 3aa2be67788fef6814e22778c09928cd3e95960ab6e39064ed8a7f043048e3e3 (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:39 np0005535469 bash[75261]: 3aa2be67788fef6814e22778c09928cd3e95960ab6e39064ed8a7f043048e3e3
Nov 25 10:51:39 np0005535469 systemd[1]: Started Ceph mgr.compute-0.mavpeh for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:51:39 np0005535469 ceph-mgr[75280]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:51:39 np0005535469 ceph-mgr[75280]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 10:51:39 np0005535469 ceph-mgr[75280]: pidfile_write: ignore empty --pid-file
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.393699547 +0000 UTC m=+0.066487463 container create 73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474 (image=quay.io/ceph/ceph:v18, name=inspiring_bartik, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:51:39 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'alerts'
Nov 25 10:51:39 np0005535469 systemd[1]: Started libpod-conmon-73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474.scope.
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.352921396 +0000 UTC m=+0.025709302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785ab466049ef9a8841c7e45bd592d77ed67ecf6d2bc1772b738a68c0ddaad5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785ab466049ef9a8841c7e45bd592d77ed67ecf6d2bc1772b738a68c0ddaad5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6785ab466049ef9a8841c7e45bd592d77ed67ecf6d2bc1772b738a68c0ddaad5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.506944533 +0000 UTC m=+0.179732449 container init 73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474 (image=quay.io/ceph/ceph:v18, name=inspiring_bartik, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.518815077 +0000 UTC m=+0.191602993 container start 73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474 (image=quay.io/ceph/ceph:v18, name=inspiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.522370083 +0000 UTC m=+0.195158009 container attach 73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474 (image=quay.io/ceph/ceph:v18, name=inspiring_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:39 np0005535469 ceph-mgr[75280]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 10:51:39 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'balancer'
Nov 25 10:51:39 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:39.752+0000 7f81d347c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 10:51:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259629770' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]: 
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]: {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "health": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "status": "HEALTH_OK",
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "checks": {},
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "mutes": []
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "election_epoch": 5,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "quorum": [
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        0
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    ],
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "quorum_names": [
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "compute-0"
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    ],
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "quorum_age": 2,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "monmap": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "epoch": 1,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "min_mon_release_name": "reef",
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_mons": 1
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "osdmap": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "epoch": 1,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_osds": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_up_osds": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "osd_up_since": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_in_osds": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "osd_in_since": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_remapped_pgs": 0
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "pgmap": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "pgs_by_state": [],
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_pgs": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_pools": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_objects": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "data_bytes": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "bytes_used": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "bytes_avail": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "bytes_total": 0
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "fsmap": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "epoch": 1,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "by_rank": [],
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "up:standby": 0
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "mgrmap": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "available": false,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "num_standbys": 0,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "modules": [
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:            "iostat",
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:            "nfs",
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:            "restful"
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        ],
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "services": {}
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "servicemap": {
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "epoch": 1,
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:        "services": {}
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    },
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]:    "progress_events": {}
Nov 25 10:51:39 np0005535469 inspiring_bartik[75322]: }
Nov 25 10:51:39 np0005535469 systemd[1]: libpod-73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474.scope: Deactivated successfully.
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.954012345 +0000 UTC m=+0.626800241 container died 73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474 (image=quay.io/ceph/ceph:v18, name=inspiring_bartik, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6785ab466049ef9a8841c7e45bd592d77ed67ecf6d2bc1772b738a68c0ddaad5-merged.mount: Deactivated successfully.
Nov 25 10:51:39 np0005535469 podman[75281]: 2025-11-25 15:51:39.998311293 +0000 UTC m=+0.671099169 container remove 73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474 (image=quay.io/ceph/ceph:v18, name=inspiring_bartik, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 10:51:40 np0005535469 systemd[1]: libpod-conmon-73ff1ab51b1ec9118fe43f16f15b3a5b71be88b00e391d20c4b0124d3f90a474.scope: Deactivated successfully.
Nov 25 10:51:40 np0005535469 ceph-mgr[75280]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 10:51:40 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'cephadm'
Nov 25 10:51:40 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:40.012+0000 7f81d347c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 10:51:41 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'crash'
Nov 25 10:51:42 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:42.101+0000 7f81d347c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 10:51:42 np0005535469 ceph-mgr[75280]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 10:51:42 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'dashboard'
Nov 25 10:51:42 np0005535469 podman[75370]: 2025-11-25 15:51:42.131738736 +0000 UTC m=+0.101281260 container create b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9 (image=quay.io/ceph/ceph:v18, name=naughty_pasteur, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 10:51:42 np0005535469 podman[75370]: 2025-11-25 15:51:42.059363085 +0000 UTC m=+0.028905629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:42 np0005535469 systemd[1]: Started libpod-conmon-b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9.scope.
Nov 25 10:51:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11076fd99aab375d4468c8bc7833b86e83208e292e27eed84b5c2d4865ce151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11076fd99aab375d4468c8bc7833b86e83208e292e27eed84b5c2d4865ce151/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11076fd99aab375d4468c8bc7833b86e83208e292e27eed84b5c2d4865ce151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:42 np0005535469 podman[75370]: 2025-11-25 15:51:42.351916866 +0000 UTC m=+0.321459380 container init b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9 (image=quay.io/ceph/ceph:v18, name=naughty_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 10:51:42 np0005535469 podman[75370]: 2025-11-25 15:51:42.359435132 +0000 UTC m=+0.328977626 container start b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9 (image=quay.io/ceph/ceph:v18, name=naughty_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:42 np0005535469 podman[75370]: 2025-11-25 15:51:42.421670517 +0000 UTC m=+0.391213031 container attach b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9 (image=quay.io/ceph/ceph:v18, name=naughty_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:51:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636796884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]: 
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]: {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "health": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "status": "HEALTH_OK",
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "checks": {},
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "mutes": []
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "election_epoch": 5,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "quorum": [
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        0
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    ],
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "quorum_names": [
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "compute-0"
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    ],
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "quorum_age": 5,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "monmap": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "epoch": 1,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "min_mon_release_name": "reef",
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_mons": 1
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "osdmap": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "epoch": 1,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_osds": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_up_osds": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "osd_up_since": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_in_osds": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "osd_in_since": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_remapped_pgs": 0
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "pgmap": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "pgs_by_state": [],
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_pgs": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_pools": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_objects": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "data_bytes": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "bytes_used": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "bytes_avail": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "bytes_total": 0
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "fsmap": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "epoch": 1,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "by_rank": [],
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "up:standby": 0
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "mgrmap": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "available": false,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "num_standbys": 0,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "modules": [
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:            "iostat",
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:            "nfs",
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:            "restful"
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        ],
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "services": {}
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "servicemap": {
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "epoch": 1,
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:        "services": {}
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    },
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]:    "progress_events": {}
Nov 25 10:51:42 np0005535469 naughty_pasteur[75386]: }
Nov 25 10:51:42 np0005535469 systemd[1]: libpod-b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9.scope: Deactivated successfully.
Nov 25 10:51:42 np0005535469 podman[75370]: 2025-11-25 15:51:42.751983448 +0000 UTC m=+0.721525952 container died b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9 (image=quay.io/ceph/ceph:v18, name=naughty_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:51:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d11076fd99aab375d4468c8bc7833b86e83208e292e27eed84b5c2d4865ce151-merged.mount: Deactivated successfully.
Nov 25 10:51:43 np0005535469 podman[75370]: 2025-11-25 15:51:43.360238023 +0000 UTC m=+1.329780517 container remove b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9 (image=quay.io/ceph/ceph:v18, name=naughty_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:51:43 np0005535469 systemd[1]: libpod-conmon-b05804d9ebfa0d5ef3ed7f1d81c63a1814b30957b8edcbdc98595131a7f364e9.scope: Deactivated successfully.
Nov 25 10:51:43 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'devicehealth'
Nov 25 10:51:43 np0005535469 ceph-mgr[75280]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 10:51:43 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 10:51:43 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:43.726+0000 7f81d347c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 10:51:44 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 10:51:44 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 10:51:44 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]:  from numpy import show_config as show_numpy_config
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 10:51:44 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:44.237+0000 7f81d347c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'influx'
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'insights'
Nov 25 10:51:44 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:44.475+0000 7f81d347c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'iostat'
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 10:51:44 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'k8sevents'
Nov 25 10:51:44 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:44.942+0000 7f81d347c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 10:51:45 np0005535469 podman[75425]: 2025-11-25 15:51:45.4581669 +0000 UTC m=+0.076577018 container create 9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6 (image=quay.io/ceph/ceph:v18, name=unruffled_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 10:51:45 np0005535469 systemd[1]: Started libpod-conmon-9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6.scope.
Nov 25 10:51:45 np0005535469 podman[75425]: 2025-11-25 15:51:45.401781563 +0000 UTC m=+0.020191701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75ca8b0642d75009491945ad90965ffe529bfa982b8f5b4809e92e1a1022ee9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75ca8b0642d75009491945ad90965ffe529bfa982b8f5b4809e92e1a1022ee9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75ca8b0642d75009491945ad90965ffe529bfa982b8f5b4809e92e1a1022ee9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:45 np0005535469 podman[75425]: 2025-11-25 15:51:45.549122418 +0000 UTC m=+0.167532566 container init 9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6 (image=quay.io/ceph/ceph:v18, name=unruffled_leakey, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:45 np0005535469 podman[75425]: 2025-11-25 15:51:45.557137656 +0000 UTC m=+0.175547774 container start 9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6 (image=quay.io/ceph/ceph:v18, name=unruffled_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:51:45 np0005535469 podman[75425]: 2025-11-25 15:51:45.566997165 +0000 UTC m=+0.185407293 container attach 9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6 (image=quay.io/ceph/ceph:v18, name=unruffled_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:51:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338550284' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]: 
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]: {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "health": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "status": "HEALTH_OK",
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "checks": {},
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "mutes": []
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "election_epoch": 5,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "quorum": [
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        0
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    ],
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "quorum_names": [
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "compute-0"
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    ],
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "quorum_age": 8,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "monmap": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "epoch": 1,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "min_mon_release_name": "reef",
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_mons": 1
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "osdmap": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "epoch": 1,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_osds": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_up_osds": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "osd_up_since": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_in_osds": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "osd_in_since": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_remapped_pgs": 0
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "pgmap": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "pgs_by_state": [],
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_pgs": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_pools": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_objects": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "data_bytes": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "bytes_used": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "bytes_avail": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "bytes_total": 0
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "fsmap": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "epoch": 1,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "by_rank": [],
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "up:standby": 0
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "mgrmap": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "available": false,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "num_standbys": 0,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "modules": [
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:            "iostat",
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:            "nfs",
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:            "restful"
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        ],
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "services": {}
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "servicemap": {
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "epoch": 1,
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:        "services": {}
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    },
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]:    "progress_events": {}
Nov 25 10:51:45 np0005535469 unruffled_leakey[75441]: }
Nov 25 10:51:45 np0005535469 systemd[1]: libpod-9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6.scope: Deactivated successfully.
Nov 25 10:51:45 np0005535469 podman[75425]: 2025-11-25 15:51:45.919895382 +0000 UTC m=+0.538305510 container died 9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6 (image=quay.io/ceph/ceph:v18, name=unruffled_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:51:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f75ca8b0642d75009491945ad90965ffe529bfa982b8f5b4809e92e1a1022ee9-merged.mount: Deactivated successfully.
Nov 25 10:51:46 np0005535469 podman[75425]: 2025-11-25 15:51:46.060237406 +0000 UTC m=+0.678647534 container remove 9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6 (image=quay.io/ceph/ceph:v18, name=unruffled_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:46 np0005535469 systemd[1]: libpod-conmon-9dbfdba56db875a62513c10305cdf3cd5aae6652944c3266d57deab332181fc6.scope: Deactivated successfully.
Nov 25 10:51:46 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'localpool'
Nov 25 10:51:46 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 10:51:47 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'mirroring'
Nov 25 10:51:47 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'nfs'
Nov 25 10:51:48 np0005535469 podman[75481]: 2025-11-25 15:51:48.103352039 +0000 UTC m=+0.022464883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:48 np0005535469 podman[75481]: 2025-11-25 15:51:48.238957604 +0000 UTC m=+0.158070428 container create 2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4 (image=quay.io/ceph/ceph:v18, name=stoic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 10:51:48 np0005535469 systemd[1]: Started libpod-conmon-2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4.scope.
Nov 25 10:51:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82243ad3ebd39534347c195587b04a83dbeb9faa10c6943edbfb3883ae35944b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82243ad3ebd39534347c195587b04a83dbeb9faa10c6943edbfb3883ae35944b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82243ad3ebd39534347c195587b04a83dbeb9faa10c6943edbfb3883ae35944b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:48 np0005535469 podman[75481]: 2025-11-25 15:51:48.401478233 +0000 UTC m=+0.320591087 container init 2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4 (image=quay.io/ceph/ceph:v18, name=stoic_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 10:51:48 np0005535469 podman[75481]: 2025-11-25 15:51:48.407665741 +0000 UTC m=+0.326778575 container start 2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4 (image=quay.io/ceph/ceph:v18, name=stoic_hodgkin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:51:48 np0005535469 podman[75481]: 2025-11-25 15:51:48.41130783 +0000 UTC m=+0.330420694 container attach 2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4 (image=quay.io/ceph/ceph:v18, name=stoic_hodgkin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:48 np0005535469 ceph-mgr[75280]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 10:51:48 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'orchestrator'
Nov 25 10:51:48 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:48.575+0000 7f81d347c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 10:51:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307660644' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]: 
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]: {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "health": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "status": "HEALTH_OK",
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "checks": {},
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "mutes": []
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "election_epoch": 5,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "quorum": [
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        0
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    ],
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "quorum_names": [
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "compute-0"
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    ],
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "quorum_age": 11,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "monmap": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "epoch": 1,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "min_mon_release_name": "reef",
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_mons": 1
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "osdmap": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "epoch": 1,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_osds": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_up_osds": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "osd_up_since": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_in_osds": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "osd_in_since": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_remapped_pgs": 0
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "pgmap": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "pgs_by_state": [],
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_pgs": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_pools": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_objects": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "data_bytes": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "bytes_used": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "bytes_avail": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "bytes_total": 0
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "fsmap": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "epoch": 1,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "by_rank": [],
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "up:standby": 0
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "mgrmap": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "available": false,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "num_standbys": 0,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "modules": [
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:            "iostat",
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:            "nfs",
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:            "restful"
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        ],
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "services": {}
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "servicemap": {
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "epoch": 1,
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:        "services": {}
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    },
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]:    "progress_events": {}
Nov 25 10:51:48 np0005535469 stoic_hodgkin[75497]: }
Nov 25 10:51:48 np0005535469 systemd[1]: libpod-2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4.scope: Deactivated successfully.
Nov 25 10:51:48 np0005535469 podman[75481]: 2025-11-25 15:51:48.788341425 +0000 UTC m=+0.707454279 container died 2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4 (image=quay.io/ceph/ceph:v18, name=stoic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-82243ad3ebd39534347c195587b04a83dbeb9faa10c6943edbfb3883ae35944b-merged.mount: Deactivated successfully.
Nov 25 10:51:49 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:49.239+0000 7f81d347c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 10:51:49 np0005535469 ceph-mgr[75280]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 10:51:49 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 10:51:49 np0005535469 podman[75481]: 2025-11-25 15:51:49.368492203 +0000 UTC m=+1.287605027 container remove 2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4 (image=quay.io/ceph/ceph:v18, name=stoic_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:49 np0005535469 systemd[1]: libpod-conmon-2ca6a118647910c4328fee97f3ad09345ec110a364c3da847111fc77a884bdd4.scope: Deactivated successfully.
Nov 25 10:51:49 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:49.523+0000 7f81d347c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 10:51:49 np0005535469 ceph-mgr[75280]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 10:51:49 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'osd_support'
Nov 25 10:51:49 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:49.761+0000 7f81d347c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 10:51:49 np0005535469 ceph-mgr[75280]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 10:51:49 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 10:51:50 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:50.044+0000 7f81d347c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 10:51:50 np0005535469 ceph-mgr[75280]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 10:51:50 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'progress'
Nov 25 10:51:50 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:50.323+0000 7f81d347c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 10:51:50 np0005535469 ceph-mgr[75280]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 10:51:50 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'prometheus'
Nov 25 10:51:51 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:51.415+0000 7f81d347c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 10:51:51 np0005535469 ceph-mgr[75280]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 10:51:51 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'rbd_support'
Nov 25 10:51:51 np0005535469 podman[75534]: 2025-11-25 15:51:51.512553587 +0000 UTC m=+0.107265464 container create c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f (image=quay.io/ceph/ceph:v18, name=determined_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:51 np0005535469 podman[75534]: 2025-11-25 15:51:51.444045021 +0000 UTC m=+0.038756958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:51 np0005535469 systemd[1]: Started libpod-conmon-c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f.scope.
Nov 25 10:51:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918bac8566d449001773b64f0c748471f5371f6359820e1ae642790b0cb655f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918bac8566d449001773b64f0c748471f5371f6359820e1ae642790b0cb655f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918bac8566d449001773b64f0c748471f5371f6359820e1ae642790b0cb655f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:51 np0005535469 podman[75534]: 2025-11-25 15:51:51.651193425 +0000 UTC m=+0.245905382 container init c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f (image=quay.io/ceph/ceph:v18, name=determined_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:51 np0005535469 podman[75534]: 2025-11-25 15:51:51.660746385 +0000 UTC m=+0.255458232 container start c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f (image=quay.io/ceph/ceph:v18, name=determined_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:51:51 np0005535469 podman[75534]: 2025-11-25 15:51:51.671847728 +0000 UTC m=+0.266559665 container attach c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f (image=quay.io/ceph/ceph:v18, name=determined_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:51 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:51.707+0000 7f81d347c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 10:51:51 np0005535469 ceph-mgr[75280]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 10:51:51 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'restful'
Nov 25 10:51:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925074190' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:52 np0005535469 determined_tesla[75551]: 
Nov 25 10:51:52 np0005535469 determined_tesla[75551]: {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "health": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "status": "HEALTH_OK",
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "checks": {},
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "mutes": []
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "election_epoch": 5,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "quorum": [
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        0
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    ],
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "quorum_names": [
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "compute-0"
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    ],
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "quorum_age": 14,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "monmap": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "epoch": 1,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "min_mon_release_name": "reef",
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_mons": 1
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "osdmap": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "epoch": 1,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_osds": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_up_osds": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "osd_up_since": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_in_osds": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "osd_in_since": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_remapped_pgs": 0
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "pgmap": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "pgs_by_state": [],
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_pgs": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_pools": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_objects": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "data_bytes": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "bytes_used": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "bytes_avail": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "bytes_total": 0
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "fsmap": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "epoch": 1,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "by_rank": [],
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "up:standby": 0
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "mgrmap": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "available": false,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "num_standbys": 0,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "modules": [
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:            "iostat",
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:            "nfs",
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:            "restful"
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        ],
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "services": {}
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "servicemap": {
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "epoch": 1,
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:        "services": {}
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    },
Nov 25 10:51:52 np0005535469 determined_tesla[75551]:    "progress_events": {}
Nov 25 10:51:52 np0005535469 determined_tesla[75551]: }
Nov 25 10:51:52 np0005535469 systemd[1]: libpod-c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f.scope: Deactivated successfully.
Nov 25 10:51:52 np0005535469 podman[75534]: 2025-11-25 15:51:52.063288804 +0000 UTC m=+0.658000661 container died c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f (image=quay.io/ceph/ceph:v18, name=determined_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:51:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-918bac8566d449001773b64f0c748471f5371f6359820e1ae642790b0cb655f1-merged.mount: Deactivated successfully.
Nov 25 10:51:52 np0005535469 podman[75534]: 2025-11-25 15:51:52.377000313 +0000 UTC m=+0.971712200 container remove c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f (image=quay.io/ceph/ceph:v18, name=determined_tesla, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 10:51:52 np0005535469 systemd[1]: libpod-conmon-c1dd2ad7d28d53d4ec57edc9a8b1c25a1c06f910aed6d26d10abbcb7264afb0f.scope: Deactivated successfully.
Nov 25 10:51:52 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'rgw'
Nov 25 10:51:53 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:53.138+0000 7f81d347c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 10:51:53 np0005535469 ceph-mgr[75280]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 10:51:53 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'rook'
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.472675148 +0000 UTC m=+0.059621096 container create 8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7 (image=quay.io/ceph/ceph:v18, name=relaxed_bohr, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 10:51:54 np0005535469 systemd[1]: Started libpod-conmon-8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7.scope.
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.440876791 +0000 UTC m=+0.027822789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee1b5573c06a9ddce77828afc02edb3fe20fb27456454ac6fa21f21b10f7baf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee1b5573c06a9ddce77828afc02edb3fe20fb27456454ac6fa21f21b10f7baf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee1b5573c06a9ddce77828afc02edb3fe20fb27456454ac6fa21f21b10f7baf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.556556164 +0000 UTC m=+0.143502122 container init 8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7 (image=quay.io/ceph/ceph:v18, name=relaxed_bohr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.566937866 +0000 UTC m=+0.153883784 container start 8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7 (image=quay.io/ceph/ceph:v18, name=relaxed_bohr, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.570817973 +0000 UTC m=+0.157763941 container attach 8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7 (image=quay.io/ceph/ceph:v18, name=relaxed_bohr, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510632878' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]: 
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]: {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "health": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "status": "HEALTH_OK",
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "checks": {},
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "mutes": []
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "election_epoch": 5,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "quorum": [
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        0
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    ],
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "quorum_names": [
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "compute-0"
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    ],
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "quorum_age": 17,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "monmap": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "epoch": 1,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "min_mon_release_name": "reef",
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_mons": 1
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "osdmap": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "epoch": 1,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_osds": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_up_osds": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "osd_up_since": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_in_osds": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "osd_in_since": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_remapped_pgs": 0
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "pgmap": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "pgs_by_state": [],
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_pgs": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_pools": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_objects": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "data_bytes": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "bytes_used": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "bytes_avail": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "bytes_total": 0
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "fsmap": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "epoch": 1,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "by_rank": [],
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "up:standby": 0
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "mgrmap": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "available": false,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "num_standbys": 0,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "modules": [
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:            "iostat",
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:            "nfs",
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:            "restful"
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        ],
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "services": {}
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "servicemap": {
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "epoch": 1,
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:        "services": {}
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    },
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]:    "progress_events": {}
Nov 25 10:51:54 np0005535469 relaxed_bohr[75604]: }
Nov 25 10:51:54 np0005535469 systemd[1]: libpod-8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7.scope: Deactivated successfully.
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.94251612 +0000 UTC m=+0.529462028 container died 8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7 (image=quay.io/ceph/ceph:v18, name=relaxed_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 10:51:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2ee1b5573c06a9ddce77828afc02edb3fe20fb27456454ac6fa21f21b10f7baf-merged.mount: Deactivated successfully.
Nov 25 10:51:54 np0005535469 podman[75588]: 2025-11-25 15:51:54.976384194 +0000 UTC m=+0.563330102 container remove 8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7 (image=quay.io/ceph/ceph:v18, name=relaxed_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:51:55 np0005535469 systemd[1]: libpod-conmon-8267cbd2242f29137560a44c821079d0d8e6baba79cebe13b93759a23bd040e7.scope: Deactivated successfully.
Nov 25 10:51:55 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:55.189+0000 7f81d347c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'selftest'
Nov 25 10:51:55 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:55.434+0000 7f81d347c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'snap_schedule'
Nov 25 10:51:55 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:55.715+0000 7f81d347c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'stats'
Nov 25 10:51:55 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'status'
Nov 25 10:51:56 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:56.239+0000 7f81d347c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 10:51:56 np0005535469 ceph-mgr[75280]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 10:51:56 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'telegraf'
Nov 25 10:51:56 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:56.481+0000 7f81d347c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 10:51:56 np0005535469 ceph-mgr[75280]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 10:51:56 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'telemetry'
Nov 25 10:51:57 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:57.084+0000 7f81d347c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 10:51:57 np0005535469 ceph-mgr[75280]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 10:51:57 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.03430301 +0000 UTC m=+0.027333266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.142882789 +0000 UTC m=+0.135912985 container create 04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43 (image=quay.io/ceph/ceph:v18, name=kind_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:51:57 np0005535469 systemd[1]: Started libpod-conmon-04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43.scope.
Nov 25 10:51:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73c70601fc90d02f1e398842a4bdc7e82d9a1e32ab033c1140f5bfb2097bae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73c70601fc90d02f1e398842a4bdc7e82d9a1e32ab033c1140f5bfb2097bae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73c70601fc90d02f1e398842a4bdc7e82d9a1e32ab033c1140f5bfb2097bae5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.354611108 +0000 UTC m=+0.347641364 container init 04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43 (image=quay.io/ceph/ceph:v18, name=kind_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.360668204 +0000 UTC m=+0.353698380 container start 04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43 (image=quay.io/ceph/ceph:v18, name=kind_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.408519758 +0000 UTC m=+0.401549964 container attach 04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43 (image=quay.io/ceph/ceph:v18, name=kind_babbage, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:51:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:51:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278497537' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:51:57 np0005535469 kind_babbage[75659]: 
Nov 25 10:51:57 np0005535469 kind_babbage[75659]: {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "health": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "status": "HEALTH_OK",
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "checks": {},
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "mutes": []
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "election_epoch": 5,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "quorum": [
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        0
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    ],
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "quorum_names": [
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "compute-0"
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    ],
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "quorum_age": 20,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "monmap": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "epoch": 1,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "min_mon_release_name": "reef",
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_mons": 1
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "osdmap": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "epoch": 1,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_osds": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_up_osds": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "osd_up_since": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_in_osds": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "osd_in_since": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_remapped_pgs": 0
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "pgmap": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "pgs_by_state": [],
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_pgs": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_pools": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_objects": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "data_bytes": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "bytes_used": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "bytes_avail": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "bytes_total": 0
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "fsmap": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "epoch": 1,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "by_rank": [],
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "up:standby": 0
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "mgrmap": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "available": false,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "num_standbys": 0,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "modules": [
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:            "iostat",
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:            "nfs",
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:            "restful"
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        ],
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "services": {}
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "servicemap": {
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "epoch": 1,
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:        "services": {}
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    },
Nov 25 10:51:57 np0005535469 kind_babbage[75659]:    "progress_events": {}
Nov 25 10:51:57 np0005535469 kind_babbage[75659]: }
Nov 25 10:51:57 np0005535469 systemd[1]: libpod-04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43.scope: Deactivated successfully.
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.73414749 +0000 UTC m=+0.727177656 container died 04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43 (image=quay.io/ceph/ceph:v18, name=kind_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:51:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d73c70601fc90d02f1e398842a4bdc7e82d9a1e32ab033c1140f5bfb2097bae5-merged.mount: Deactivated successfully.
Nov 25 10:51:57 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:57.780+0000 7f81d347c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 10:51:57 np0005535469 ceph-mgr[75280]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 10:51:57 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'volumes'
Nov 25 10:51:57 np0005535469 podman[75643]: 2025-11-25 15:51:57.885621538 +0000 UTC m=+0.878651704 container remove 04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43 (image=quay.io/ceph/ceph:v18, name=kind_babbage, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:51:57 np0005535469 systemd[1]: libpod-conmon-04ba458aeef1884921c9a1ee5b773d66060667553844610531138a8b8eb26d43.scope: Deactivated successfully.
Nov 25 10:51:58 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:58.510+0000 7f81d347c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'zabbix'
Nov 25 10:51:58 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:51:58.773+0000 7f81d347c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: ms_deliver_dispatch: unhandled message 0x564c8d5d11e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mavpeh
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr handle_mgr_map Activating!
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr handle_mgr_map I am now activating
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mavpeh(active, starting, since 0.00992517s)
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: Activating manager daemon compute-0.mavpeh
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mavpeh", "id": "compute-0.mavpeh"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mavpeh", "id": "compute-0.mavpeh"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: balancer
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [balancer INFO root] Starting
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Manager daemon compute-0.mavpeh is now available
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: crash
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:51:58
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [balancer INFO root] No pools available
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: devicehealth
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Starting
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: iostat
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: nfs
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: orchestrator
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: pg_autoscaler
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: progress
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [progress INFO root] Loading...
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [progress INFO root] No stored events to load
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [progress INFO root] Loaded [] historic events
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] recovery thread starting
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] starting setup
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: rbd_support
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: restful
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/mirror_snapshot_schedule"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/mirror_snapshot_schedule"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: status
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: telemetry
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] PerfHandler: starting
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [restful WARNING root] server not running: no certificate configured
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TaskHandler: starting
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/trash_purge_schedule"} v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/trash_purge_schedule"}]: dispatch
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' 
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] setup complete
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' 
Nov 25 10:51:58 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: volumes
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 25 10:51:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' 
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: Manager daemon compute-0.mavpeh is now available
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/mirror_snapshot_schedule"}]: dispatch
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/trash_purge_schedule"}]: dispatch
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' 
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' 
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: from='mgr.14102 192.168.122.100:0/1571102873' entity='mgr.compute-0.mavpeh' 
Nov 25 10:51:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mavpeh(active, since 1.0458s)
Nov 25 10:51:59 np0005535469 podman[75776]: 2025-11-25 15:51:59.989848456 +0000 UTC m=+0.062870844 container create 2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29 (image=quay.io/ceph/ceph:v18, name=confident_johnson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:00 np0005535469 systemd[1]: Started libpod-conmon-2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29.scope.
Nov 25 10:52:00 np0005535469 podman[75776]: 2025-11-25 15:51:59.962516641 +0000 UTC m=+0.035539059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f268b0305965fa499596567849a8c2b21248eecafa4033f6040c782e4be7deb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f268b0305965fa499596567849a8c2b21248eecafa4033f6040c782e4be7deb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f268b0305965fa499596567849a8c2b21248eecafa4033f6040c782e4be7deb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:00 np0005535469 podman[75776]: 2025-11-25 15:52:00.094906139 +0000 UTC m=+0.167928567 container init 2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29 (image=quay.io/ceph/ceph:v18, name=confident_johnson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:00 np0005535469 podman[75776]: 2025-11-25 15:52:00.101740755 +0000 UTC m=+0.174763113 container start 2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29 (image=quay.io/ceph/ceph:v18, name=confident_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:00 np0005535469 podman[75776]: 2025-11-25 15:52:00.1059301 +0000 UTC m=+0.178952488 container attach 2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29 (image=quay.io/ceph/ceph:v18, name=confident_johnson, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 10:52:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765984669' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 10:52:00 np0005535469 confident_johnson[75792]: 
Nov 25 10:52:00 np0005535469 confident_johnson[75792]: {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "health": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "status": "HEALTH_OK",
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "checks": {},
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "mutes": []
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "election_epoch": 5,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "quorum": [
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        0
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    ],
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "quorum_names": [
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "compute-0"
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    ],
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "quorum_age": 23,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "monmap": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "epoch": 1,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "min_mon_release_name": "reef",
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_mons": 1
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "osdmap": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "epoch": 1,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_osds": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_up_osds": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "osd_up_since": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_in_osds": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "osd_in_since": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_remapped_pgs": 0
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "pgmap": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "pgs_by_state": [],
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_pgs": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_pools": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_objects": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "data_bytes": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "bytes_used": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "bytes_avail": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "bytes_total": 0
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "fsmap": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "epoch": 1,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "by_rank": [],
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "up:standby": 0
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "mgrmap": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "available": true,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "num_standbys": 0,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "modules": [
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:            "iostat",
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:            "nfs",
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:            "restful"
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        ],
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "services": {}
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "servicemap": {
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "epoch": 1,
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "modified": "2025-11-25T15:51:33.967034+0000",
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:        "services": {}
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    },
Nov 25 10:52:00 np0005535469 confident_johnson[75792]:    "progress_events": {}
Nov 25 10:52:00 np0005535469 confident_johnson[75792]: }
Nov 25 10:52:00 np0005535469 systemd[1]: libpod-2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29.scope: Deactivated successfully.
Nov 25 10:52:00 np0005535469 podman[75776]: 2025-11-25 15:52:00.716424435 +0000 UTC m=+0.789446813 container died 2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29 (image=quay.io/ceph/ceph:v18, name=confident_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 10:52:00 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7f268b0305965fa499596567849a8c2b21248eecafa4033f6040c782e4be7deb-merged.mount: Deactivated successfully.
Nov 25 10:52:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mavpeh(active, since 2s)
Nov 25 10:52:00 np0005535469 podman[75776]: 2025-11-25 15:52:00.946144245 +0000 UTC m=+1.019166623 container remove 2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29 (image=quay.io/ceph/ceph:v18, name=confident_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 10:52:00 np0005535469 systemd[1]: libpod-conmon-2d1f9b633a8a22c8fa8a72cb2977ef2d982a889a9cca8c0c51352d406edf2e29.scope: Deactivated successfully.
Nov 25 10:52:01 np0005535469 podman[75831]: 2025-11-25 15:52:01.067342277 +0000 UTC m=+0.087620618 container create 65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26 (image=quay.io/ceph/ceph:v18, name=stupefied_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:01 np0005535469 podman[75831]: 2025-11-25 15:52:01.020604394 +0000 UTC m=+0.040882755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:01 np0005535469 systemd[1]: Started libpod-conmon-65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26.scope.
Nov 25 10:52:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4e25b1e43c4dbc515d1db657fe3afeec208036e6995c5d9fbc00dfed328d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4e25b1e43c4dbc515d1db657fe3afeec208036e6995c5d9fbc00dfed328d4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4e25b1e43c4dbc515d1db657fe3afeec208036e6995c5d9fbc00dfed328d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4e25b1e43c4dbc515d1db657fe3afeec208036e6995c5d9fbc00dfed328d4/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:01 np0005535469 podman[75831]: 2025-11-25 15:52:01.199053357 +0000 UTC m=+0.219331768 container init 65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26 (image=quay.io/ceph/ceph:v18, name=stupefied_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:52:01 np0005535469 podman[75831]: 2025-11-25 15:52:01.211342711 +0000 UTC m=+0.231621042 container start 65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26 (image=quay.io/ceph/ceph:v18, name=stupefied_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 10:52:01 np0005535469 podman[75831]: 2025-11-25 15:52:01.240668231 +0000 UTC m=+0.260946562 container attach 65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26 (image=quay.io/ceph/ceph:v18, name=stupefied_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 10:52:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 10:52:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650248766' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 10:52:01 np0005535469 systemd[1]: libpod-65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26.scope: Deactivated successfully.
Nov 25 10:52:01 np0005535469 podman[75874]: 2025-11-25 15:52:01.823615746 +0000 UTC m=+0.034082560 container died 65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26 (image=quay.io/ceph/ceph:v18, name=stupefied_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:02 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:04 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:06 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:08 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e1 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.1333 seconds
Nov 25 10:52:10 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:11 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2650248766' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 10:52:12 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:14 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:16 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-60e4e25b1e43c4dbc515d1db657fe3afeec208036e6995c5d9fbc00dfed328d4-merged.mount: Deactivated successfully.
Nov 25 10:52:18 np0005535469 podman[75874]: 2025-11-25 15:52:18.10117852 +0000 UTC m=+16.311645314 container remove 65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26 (image=quay.io/ceph/ceph:v18, name=stupefied_pascal, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:18 np0005535469 systemd[1]: libpod-conmon-65b63767fecb526d7464c6a536f0943838520f1c21198ca0f9c96533dab5df26.scope: Deactivated successfully.
Nov 25 10:52:18 np0005535469 podman[75889]: 2025-11-25 15:52:18.213230332 +0000 UTC m=+0.088511313 container create 9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2 (image=quay.io/ceph/ceph:v18, name=eloquent_mcclintock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:18 np0005535469 podman[75889]: 2025-11-25 15:52:18.145869899 +0000 UTC m=+0.021150930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:18 np0005535469 systemd[1]: Started libpod-conmon-9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2.scope.
Nov 25 10:52:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18a0b1a6d0b300571ff5d59d0e92706403a6049f3122b6cf770a9d236cfa539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18a0b1a6d0b300571ff5d59d0e92706403a6049f3122b6cf770a9d236cfa539/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18a0b1a6d0b300571ff5d59d0e92706403a6049f3122b6cf770a9d236cfa539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:18 np0005535469 podman[75889]: 2025-11-25 15:52:18.323623976 +0000 UTC m=+0.198904977 container init 9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2 (image=quay.io/ceph/ceph:v18, name=eloquent_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:18 np0005535469 podman[75889]: 2025-11-25 15:52:18.328404426 +0000 UTC m=+0.203685407 container start 9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2 (image=quay.io/ceph/ceph:v18, name=eloquent_mcclintock, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:18 np0005535469 podman[75889]: 2025-11-25 15:52:18.353532502 +0000 UTC m=+0.228813483 container attach 9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2 (image=quay.io/ceph/ceph:v18, name=eloquent_mcclintock, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:18 np0005535469 ceph-mgr[75280]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 10:52:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:52:18 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 10:52:18 np0005535469 ceph-mon[74985]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 10:52:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 25 10:52:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3365335104' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 10:52:19 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3365335104' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 10:52:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3365335104' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr respawn  1: '-n'
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr respawn  2: 'mgr.compute-0.mavpeh'
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr respawn  3: '-f'
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr respawn  4: '--setuser'
Nov 25 10:52:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mavpeh(active, since 21s)
Nov 25 10:52:20 np0005535469 systemd[1]: libpod-9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2.scope: Deactivated successfully.
Nov 25 10:52:20 np0005535469 podman[75889]: 2025-11-25 15:52:20.072067856 +0000 UTC m=+1.947348837 container died 9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2 (image=quay.io/ceph/ceph:v18, name=eloquent_mcclintock, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:20 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: ignoring --setuser ceph since I am not root
Nov 25 10:52:20 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: ignoring --setgroup ceph since I am not root
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: pidfile_write: ignore empty --pid-file
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'alerts'
Nov 25 10:52:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d18a0b1a6d0b300571ff5d59d0e92706403a6049f3122b6cf770a9d236cfa539-merged.mount: Deactivated successfully.
Nov 25 10:52:20 np0005535469 podman[75889]: 2025-11-25 15:52:20.61907944 +0000 UTC m=+2.494360441 container remove 9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2 (image=quay.io/ceph/ceph:v18, name=eloquent_mcclintock, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:20 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:20.638+0000 7f2da9d78140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'balancer'
Nov 25 10:52:20 np0005535469 systemd[1]: libpod-conmon-9cf3487ec7ec144d3214743a6f6f4f25545dca8d9559cdb820a3dc7826b80db2.scope: Deactivated successfully.
Nov 25 10:52:20 np0005535469 podman[75969]: 2025-11-25 15:52:20.751093667 +0000 UTC m=+0.106619124 container create e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32 (image=quay.io/ceph/ceph:v18, name=trusting_khorana, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:20 np0005535469 podman[75969]: 2025-11-25 15:52:20.674831763 +0000 UTC m=+0.030357260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:20 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:20.888+0000 7f2da9d78140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 10:52:20 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'cephadm'
Nov 25 10:52:20 np0005535469 systemd[1]: Started libpod-conmon-e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32.scope.
Nov 25 10:52:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd97392d81172c06dcbb9e434aac75e1e6bec6d162529a2e8ad5a202e6cc7a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd97392d81172c06dcbb9e434aac75e1e6bec6d162529a2e8ad5a202e6cc7a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd97392d81172c06dcbb9e434aac75e1e6bec6d162529a2e8ad5a202e6cc7a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:21 np0005535469 podman[75969]: 2025-11-25 15:52:21.047977884 +0000 UTC m=+0.403503391 container init e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32 (image=quay.io/ceph/ceph:v18, name=trusting_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 10:52:21 np0005535469 podman[75969]: 2025-11-25 15:52:21.05601811 +0000 UTC m=+0.411543597 container start e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32 (image=quay.io/ceph/ceph:v18, name=trusting_khorana, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:52:21 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3365335104' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 10:52:21 np0005535469 podman[75969]: 2025-11-25 15:52:21.583494152 +0000 UTC m=+0.939019619 container attach e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32 (image=quay.io/ceph/ceph:v18, name=trusting_khorana, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 10:52:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991141824' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 10:52:22 np0005535469 trusting_khorana[75985]: {
Nov 25 10:52:22 np0005535469 trusting_khorana[75985]:    "epoch": 5,
Nov 25 10:52:22 np0005535469 trusting_khorana[75985]:    "available": true,
Nov 25 10:52:22 np0005535469 trusting_khorana[75985]:    "active_name": "compute-0.mavpeh",
Nov 25 10:52:22 np0005535469 trusting_khorana[75985]:    "num_standby": 0
Nov 25 10:52:22 np0005535469 trusting_khorana[75985]: }
Nov 25 10:52:22 np0005535469 systemd[1]: libpod-e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32.scope: Deactivated successfully.
Nov 25 10:52:22 np0005535469 podman[75969]: 2025-11-25 15:52:22.149681758 +0000 UTC m=+1.505207255 container died e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32 (image=quay.io/ceph/ceph:v18, name=trusting_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3cd97392d81172c06dcbb9e434aac75e1e6bec6d162529a2e8ad5a202e6cc7a0-merged.mount: Deactivated successfully.
Nov 25 10:52:22 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'crash'
Nov 25 10:52:23 np0005535469 podman[75969]: 2025-11-25 15:52:23.047728496 +0000 UTC m=+2.403253963 container remove e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32 (image=quay.io/ceph/ceph:v18, name=trusting_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:23 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:23.126+0000 7f2da9d78140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 10:52:23 np0005535469 ceph-mgr[75280]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 10:52:23 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'dashboard'
Nov 25 10:52:23 np0005535469 podman[76034]: 2025-11-25 15:52:23.094953439 +0000 UTC m=+0.029120744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:23 np0005535469 podman[76034]: 2025-11-25 15:52:23.229355627 +0000 UTC m=+0.163522902 container create a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826 (image=quay.io/ceph/ceph:v18, name=recursing_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 10:52:23 np0005535469 systemd[1]: Started libpod-conmon-a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826.scope.
Nov 25 10:52:23 np0005535469 systemd[1]: libpod-conmon-e7c347d265de9c15ab642a9ce85a73f736817dee9d931d6381acb3755d768c32.scope: Deactivated successfully.
Nov 25 10:52:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8b1aa2856dc5a4ea9ffdf9f7637859f09723d73d245a96123e974793c2581a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8b1aa2856dc5a4ea9ffdf9f7637859f09723d73d245a96123e974793c2581a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8b1aa2856dc5a4ea9ffdf9f7637859f09723d73d245a96123e974793c2581a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:23 np0005535469 podman[76034]: 2025-11-25 15:52:23.417673023 +0000 UTC m=+0.351840298 container init a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826 (image=quay.io/ceph/ceph:v18, name=recursing_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:23 np0005535469 podman[76034]: 2025-11-25 15:52:23.423333529 +0000 UTC m=+0.357500804 container start a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826 (image=quay.io/ceph/ceph:v18, name=recursing_hypatia, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:23 np0005535469 podman[76034]: 2025-11-25 15:52:23.477602988 +0000 UTC m=+0.411770263 container attach a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826 (image=quay.io/ceph/ceph:v18, name=recursing_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:24 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'devicehealth'
Nov 25 10:52:24 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:24.829+0000 7f2da9d78140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 10:52:24 np0005535469 ceph-mgr[75280]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 10:52:24 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 10:52:25 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 10:52:25 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 10:52:25 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]:  from numpy import show_config as show_numpy_config
Nov 25 10:52:25 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:25.370+0000 7f2da9d78140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 10:52:25 np0005535469 ceph-mgr[75280]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 10:52:25 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'influx'
Nov 25 10:52:25 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:25.599+0000 7f2da9d78140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 10:52:25 np0005535469 ceph-mgr[75280]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 10:52:25 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'insights'
Nov 25 10:52:25 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'iostat'
Nov 25 10:52:26 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:26.085+0000 7f2da9d78140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 10:52:26 np0005535469 ceph-mgr[75280]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 10:52:26 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'k8sevents'
Nov 25 10:52:28 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'localpool'
Nov 25 10:52:28 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 10:52:28 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'mirroring'
Nov 25 10:52:29 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'nfs'
Nov 25 10:52:29 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:29.756+0000 7f2da9d78140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 10:52:29 np0005535469 ceph-mgr[75280]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 10:52:29 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'orchestrator'
Nov 25 10:52:30 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:30.410+0000 7f2da9d78140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 10:52:30 np0005535469 ceph-mgr[75280]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 10:52:30 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 10:52:30 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:30.663+0000 7f2da9d78140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 10:52:30 np0005535469 ceph-mgr[75280]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 10:52:30 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'osd_support'
Nov 25 10:52:30 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:30.895+0000 7f2da9d78140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 10:52:30 np0005535469 ceph-mgr[75280]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 10:52:30 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 10:52:31 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:31.165+0000 7f2da9d78140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 10:52:31 np0005535469 ceph-mgr[75280]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 10:52:31 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'progress'
Nov 25 10:52:31 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:31.406+0000 7f2da9d78140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 10:52:31 np0005535469 ceph-mgr[75280]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 10:52:31 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'prometheus'
Nov 25 10:52:32 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:32.402+0000 7f2da9d78140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 10:52:32 np0005535469 ceph-mgr[75280]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 10:52:32 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'rbd_support'
Nov 25 10:52:32 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:32.701+0000 7f2da9d78140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 10:52:32 np0005535469 ceph-mgr[75280]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 10:52:32 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'restful'
Nov 25 10:52:33 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'rgw'
Nov 25 10:52:34 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:34.130+0000 7f2da9d78140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 10:52:34 np0005535469 ceph-mgr[75280]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 10:52:34 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'rook'
Nov 25 10:52:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:36.250+0000 7f2da9d78140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 10:52:36 np0005535469 ceph-mgr[75280]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 10:52:36 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'selftest'
Nov 25 10:52:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:36.509+0000 7f2da9d78140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 10:52:36 np0005535469 ceph-mgr[75280]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 10:52:36 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'snap_schedule'
Nov 25 10:52:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:36.768+0000 7f2da9d78140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 10:52:36 np0005535469 ceph-mgr[75280]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 10:52:36 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'stats'
Nov 25 10:52:37 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'status'
Nov 25 10:52:37 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:37.310+0000 7f2da9d78140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 10:52:37 np0005535469 ceph-mgr[75280]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 10:52:37 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'telegraf'
Nov 25 10:52:37 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:37.557+0000 7f2da9d78140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 10:52:37 np0005535469 ceph-mgr[75280]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 10:52:37 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'telemetry'
Nov 25 10:52:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:38.191+0000 7f2da9d78140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 10:52:38 np0005535469 ceph-mgr[75280]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 10:52:38 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 10:52:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:38.925+0000 7f2da9d78140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 10:52:38 np0005535469 ceph-mgr[75280]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 10:52:38 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'volumes'
Nov 25 10:52:39 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:39.660+0000 7f2da9d78140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr[py] Loading python module 'zabbix'
Nov 25 10:52:39 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T15:52:39.894+0000 7f2da9d78140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mavpeh restarted
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: ms_deliver_dispatch: unhandled message 0x55b3049df1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mavpeh
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr handle_mgr_map Activating!
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr handle_mgr_map I am now activating
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mavpeh(active, starting, since 0.0182167s)
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mavpeh", "id": "compute-0.mavpeh"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mavpeh", "id": "compute-0.mavpeh"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: balancer
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Starting
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Manager daemon compute-0.mavpeh is now available
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:52:39
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] No pools available
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: Active manager daemon compute-0.mavpeh restarted
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: Activating manager daemon compute-0.mavpeh
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: Manager daemon compute-0.mavpeh is now available
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: cephadm
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: crash
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: devicehealth
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Starting
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: iostat
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: nfs
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: orchestrator
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: pg_autoscaler
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: progress
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [progress INFO root] Loading...
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [progress INFO root] No stored events to load
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [progress INFO root] Loaded [] historic events
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] recovery thread starting
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] starting setup
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: rbd_support
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: restful
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: status
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: telemetry
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/mirror_snapshot_schedule"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/mirror_snapshot_schedule"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [restful WARNING root] server not running: no certificate configured
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] PerfHandler: starting
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TaskHandler: starting
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/trash_purge_schedule"} v 0) v1
Nov 25 10:52:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/trash_purge_schedule"}]: dispatch
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 10:52:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] setup complete
Nov 25 10:52:40 np0005535469 ceph-mgr[75280]: mgr load Constructed class from module: volumes
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920931 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:40 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mavpeh(active, since 1.02928s)
Nov 25 10:52:40 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 10:52:40 np0005535469 recursing_hypatia[76050]: {
Nov 25 10:52:40 np0005535469 recursing_hypatia[76050]:    "mgrmap_epoch": 7,
Nov 25 10:52:40 np0005535469 recursing_hypatia[76050]:    "initialized": true
Nov 25 10:52:40 np0005535469 recursing_hypatia[76050]: }
Nov 25 10:52:40 np0005535469 systemd[1]: libpod-a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826.scope: Deactivated successfully.
Nov 25 10:52:40 np0005535469 podman[76034]: 2025-11-25 15:52:40.949152456 +0000 UTC m=+17.883319741 container died a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826 (image=quay.io/ceph/ceph:v18, name=recursing_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: Found migration_current of "None". Setting to last migration.
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/mirror_snapshot_schedule"}]: dispatch
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mavpeh/trash_purge_schedule"}]: dispatch
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7c8b1aa2856dc5a4ea9ffdf9f7637859f09723d73d245a96123e974793c2581a-merged.mount: Deactivated successfully.
Nov 25 10:52:41 np0005535469 podman[76034]: 2025-11-25 15:52:41.002981352 +0000 UTC m=+17.937148617 container remove a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826 (image=quay.io/ceph/ceph:v18, name=recursing_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:41 np0005535469 systemd[1]: libpod-conmon-a7ebec296940c40d8fbc83aa367f9b5ff19cef79920a4b4a022695e5406e4826.scope: Deactivated successfully.
Nov 25 10:52:41 np0005535469 podman[76200]: 2025-11-25 15:52:41.056791759 +0000 UTC m=+0.034802851 container create c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76 (image=quay.io/ceph/ceph:v18, name=eloquent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:41 np0005535469 systemd[1]: Started libpod-conmon-c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76.scope.
Nov 25 10:52:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b291564e655404d4db96fcb15c4a32716fb20bc1c894ff317dea6fa412573db6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b291564e655404d4db96fcb15c4a32716fb20bc1c894ff317dea6fa412573db6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b291564e655404d4db96fcb15c4a32716fb20bc1c894ff317dea6fa412573db6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:41 np0005535469 podman[76200]: 2025-11-25 15:52:41.119610989 +0000 UTC m=+0.097622101 container init c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76 (image=quay.io/ceph/ceph:v18, name=eloquent_hopper, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:41 np0005535469 podman[76200]: 2025-11-25 15:52:41.124586755 +0000 UTC m=+0.102597847 container start c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76 (image=quay.io/ceph/ceph:v18, name=eloquent_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:41 np0005535469 podman[76200]: 2025-11-25 15:52:41.128098308 +0000 UTC m=+0.106109390 container attach c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76 (image=quay.io/ceph/ceph:v18, name=eloquent_hopper, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:52:41 np0005535469 podman[76200]: 2025-11-25 15:52:41.042117769 +0000 UTC m=+0.020128881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: [cephadm INFO cherrypy.error] [25/Nov/2025:15:52:41] ENGINE Bus STARTING
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : [25/Nov/2025:15:52:41] ENGINE Bus STARTING
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: [cephadm INFO cherrypy.error] [25/Nov/2025:15:52:41] ENGINE Serving on http://192.168.122.100:8765
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : [25/Nov/2025:15:52:41] ENGINE Serving on http://192.168.122.100:8765
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 10:52:41 np0005535469 systemd[1]: libpod-c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76.scope: Deactivated successfully.
Nov 25 10:52:41 np0005535469 podman[76265]: 2025-11-25 15:52:41.751311285 +0000 UTC m=+0.020858073 container died c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76 (image=quay.io/ceph/ceph:v18, name=eloquent_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: [cephadm INFO cherrypy.error] [25/Nov/2025:15:52:41] ENGINE Serving on https://192.168.122.100:7150
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : [25/Nov/2025:15:52:41] ENGINE Serving on https://192.168.122.100:7150
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: [cephadm INFO cherrypy.error] [25/Nov/2025:15:52:41] ENGINE Bus STARTED
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : [25/Nov/2025:15:52:41] ENGINE Bus STARTED
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: [cephadm INFO cherrypy.error] [25/Nov/2025:15:52:41] ENGINE Client ('192.168.122.100', 37030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : [25/Nov/2025:15:52:41] ENGINE Client ('192.168.122.100', 37030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 10:52:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b291564e655404d4db96fcb15c4a32716fb20bc1c894ff317dea6fa412573db6-merged.mount: Deactivated successfully.
Nov 25 10:52:41 np0005535469 podman[76265]: 2025-11-25 15:52:41.785957199 +0000 UTC m=+0.055503987 container remove c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76 (image=quay.io/ceph/ceph:v18, name=eloquent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:41 np0005535469 systemd[1]: libpod-conmon-c23a28619475d4d4de0c340f3c34fe0c11a69d02207f501e74c017de097d0f76.scope: Deactivated successfully.
Nov 25 10:52:41 np0005535469 podman[76280]: 2025-11-25 15:52:41.851058966 +0000 UTC m=+0.042555007 container create 729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e (image=quay.io/ceph/ceph:v18, name=objective_lederberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:41 np0005535469 systemd[1]: Started libpod-conmon-729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e.scope.
Nov 25 10:52:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe97fefcaa68d9e1fa81365ff243568c186f8793b311017de83720a73af6a54/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe97fefcaa68d9e1fa81365ff243568c186f8793b311017de83720a73af6a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe97fefcaa68d9e1fa81365ff243568c186f8793b311017de83720a73af6a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:41 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:41 np0005535469 podman[76280]: 2025-11-25 15:52:41.829568216 +0000 UTC m=+0.021064307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:41 np0005535469 podman[76280]: 2025-11-25 15:52:41.94197268 +0000 UTC m=+0.133468741 container init 729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e (image=quay.io/ceph/ceph:v18, name=objective_lederberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:52:41 np0005535469 podman[76280]: 2025-11-25 15:52:41.95050451 +0000 UTC m=+0.142000551 container start 729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e (image=quay.io/ceph/ceph:v18, name=objective_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:52:41 np0005535469 podman[76280]: 2025-11-25 15:52:41.953417625 +0000 UTC m=+0.144913746 container attach 729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e (image=quay.io/ceph/ceph:v18, name=objective_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 10:52:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Set ssh ssh_user
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Set ssh ssh_config
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 25 10:52:42 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 25 10:52:42 np0005535469 objective_lederberg[76296]: ssh user set to ceph-admin. sudo will be used
Nov 25 10:52:42 np0005535469 systemd[1]: libpod-729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e.scope: Deactivated successfully.
Nov 25 10:52:42 np0005535469 podman[76322]: 2025-11-25 15:52:42.562430256 +0000 UTC m=+0.028713442 container died 729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e (image=quay.io/ceph/ceph:v18, name=objective_lederberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ffe97fefcaa68d9e1fa81365ff243568c186f8793b311017de83720a73af6a54-merged.mount: Deactivated successfully.
Nov 25 10:52:42 np0005535469 podman[76322]: 2025-11-25 15:52:42.599282885 +0000 UTC m=+0.065566041 container remove 729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e (image=quay.io/ceph/ceph:v18, name=objective_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 10:52:42 np0005535469 systemd[1]: libpod-conmon-729e4474a7ab761c9bac9d4da6bb58cae7399c9ccecf47916f0892fd5d67781e.scope: Deactivated successfully.
Nov 25 10:52:42 np0005535469 podman[76337]: 2025-11-25 15:52:42.673613753 +0000 UTC m=+0.041050504 container create 624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b (image=quay.io/ceph/ceph:v18, name=naughty_dirac, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:42 np0005535469 systemd[1]: Started libpod-conmon-624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b.scope.
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mavpeh(active, since 2s)
Nov 25 10:52:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d904173b3bfcc7b99cf08299d14148df250c356b15f72445e8aeea01e3ca977/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d904173b3bfcc7b99cf08299d14148df250c356b15f72445e8aeea01e3ca977/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d904173b3bfcc7b99cf08299d14148df250c356b15f72445e8aeea01e3ca977/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d904173b3bfcc7b99cf08299d14148df250c356b15f72445e8aeea01e3ca977/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d904173b3bfcc7b99cf08299d14148df250c356b15f72445e8aeea01e3ca977/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:42 np0005535469 podman[76337]: 2025-11-25 15:52:42.736706711 +0000 UTC m=+0.104143522 container init 624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b (image=quay.io/ceph/ceph:v18, name=naughty_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 10:52:42 np0005535469 podman[76337]: 2025-11-25 15:52:42.746444877 +0000 UTC m=+0.113881638 container start 624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b (image=quay.io/ceph/ceph:v18, name=naughty_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:52:42 np0005535469 podman[76337]: 2025-11-25 15:52:42.748891828 +0000 UTC m=+0.116328599 container attach 624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b (image=quay.io/ceph/ceph:v18, name=naughty_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:42 np0005535469 podman[76337]: 2025-11-25 15:52:42.658198051 +0000 UTC m=+0.025634832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: [25/Nov/2025:15:52:41] ENGINE Bus STARTING
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: [25/Nov/2025:15:52:41] ENGINE Serving on http://192.168.122.100:8765
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: [25/Nov/2025:15:52:41] ENGINE Serving on https://192.168.122.100:7150
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: [25/Nov/2025:15:52:41] ENGINE Bus STARTED
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: [25/Nov/2025:15:52:41] ENGINE Client ('192.168.122.100', 37030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:43 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 25 10:52:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:43 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 25 10:52:43 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 25 10:52:43 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Set ssh private key
Nov 25 10:52:43 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 25 10:52:43 np0005535469 systemd[1]: libpod-624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b.scope: Deactivated successfully.
Nov 25 10:52:43 np0005535469 podman[76337]: 2025-11-25 15:52:43.30845232 +0000 UTC m=+0.675889121 container died 624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b (image=quay.io/ceph/ceph:v18, name=naughty_dirac, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5d904173b3bfcc7b99cf08299d14148df250c356b15f72445e8aeea01e3ca977-merged.mount: Deactivated successfully.
Nov 25 10:52:43 np0005535469 podman[76337]: 2025-11-25 15:52:43.351052388 +0000 UTC m=+0.718489149 container remove 624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b (image=quay.io/ceph/ceph:v18, name=naughty_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:52:43 np0005535469 systemd[1]: libpod-conmon-624e6fd63324f831ea7aee3870dd0f10a93e31f7eb69c8e65316a6505c6a9d8b.scope: Deactivated successfully.
Nov 25 10:52:43 np0005535469 podman[76392]: 2025-11-25 15:52:43.406320767 +0000 UTC m=+0.039615572 container create 165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687 (image=quay.io/ceph/ceph:v18, name=clever_keller, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:52:43 np0005535469 systemd[1]: Started libpod-conmon-165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687.scope.
Nov 25 10:52:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff444cf422f23b603f54d6fbd5012b030a41ea7ece61e11f4d56169f3d53b0a9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff444cf422f23b603f54d6fbd5012b030a41ea7ece61e11f4d56169f3d53b0a9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff444cf422f23b603f54d6fbd5012b030a41ea7ece61e11f4d56169f3d53b0a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff444cf422f23b603f54d6fbd5012b030a41ea7ece61e11f4d56169f3d53b0a9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff444cf422f23b603f54d6fbd5012b030a41ea7ece61e11f4d56169f3d53b0a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:43 np0005535469 podman[76392]: 2025-11-25 15:52:43.481020505 +0000 UTC m=+0.114315320 container init 165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687 (image=quay.io/ceph/ceph:v18, name=clever_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:43 np0005535469 podman[76392]: 2025-11-25 15:52:43.386562738 +0000 UTC m=+0.019857573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:43 np0005535469 podman[76392]: 2025-11-25 15:52:43.49379623 +0000 UTC m=+0.127091075 container start 165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687 (image=quay.io/ceph/ceph:v18, name=clever_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:43 np0005535469 podman[76392]: 2025-11-25 15:52:43.498004693 +0000 UTC m=+0.131299618 container attach 165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687 (image=quay.io/ceph/ceph:v18, name=clever_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:43 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:44 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:44 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 25 10:52:44 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 25 10:52:44 np0005535469 systemd[1]: libpod-165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687.scope: Deactivated successfully.
Nov 25 10:52:44 np0005535469 podman[76392]: 2025-11-25 15:52:44.125011021 +0000 UTC m=+0.758305866 container died 165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687 (image=quay.io/ceph/ceph:v18, name=clever_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 10:52:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ff444cf422f23b603f54d6fbd5012b030a41ea7ece61e11f4d56169f3d53b0a9-merged.mount: Deactivated successfully.
Nov 25 10:52:44 np0005535469 podman[76392]: 2025-11-25 15:52:44.189764958 +0000 UTC m=+0.823059793 container remove 165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687 (image=quay.io/ceph/ceph:v18, name=clever_keller, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:44 np0005535469 systemd[1]: libpod-conmon-165d97234fea97880cbb0204d8db5548f506d15ce9f66cb7995b9b3f3cc69687.scope: Deactivated successfully.
Nov 25 10:52:44 np0005535469 podman[76449]: 2025-11-25 15:52:44.285221504 +0000 UTC m=+0.066913381 container create 5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571 (image=quay.io/ceph/ceph:v18, name=vigilant_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: Set ssh ssh_user
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: Set ssh ssh_config
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: ssh user set to ceph-admin. sudo will be used
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: Set ssh ssh_identity_key
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: Set ssh private key
Nov 25 10:52:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:44 np0005535469 systemd[1]: Started libpod-conmon-5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571.scope.
Nov 25 10:52:44 np0005535469 podman[76449]: 2025-11-25 15:52:44.258203052 +0000 UTC m=+0.039894969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a4a1024d469623cbe397ec3f67f9ec569e79543134002270d0e0e2c6d5da5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a4a1024d469623cbe397ec3f67f9ec569e79543134002270d0e0e2c6d5da5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a4a1024d469623cbe397ec3f67f9ec569e79543134002270d0e0e2c6d5da5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:44 np0005535469 podman[76449]: 2025-11-25 15:52:44.382856964 +0000 UTC m=+0.164548881 container init 5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571 (image=quay.io/ceph/ceph:v18, name=vigilant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:52:44 np0005535469 podman[76449]: 2025-11-25 15:52:44.389772866 +0000 UTC m=+0.171464703 container start 5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571 (image=quay.io/ceph/ceph:v18, name=vigilant_hellman, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 10:52:44 np0005535469 podman[76449]: 2025-11-25 15:52:44.393908018 +0000 UTC m=+0.175599895 container attach 5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571 (image=quay.io/ceph/ceph:v18, name=vigilant_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:52:44 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:44 np0005535469 vigilant_hellman[76465]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCreftlYuOf26rBMHgABUl4JkV54ARFQSheuAoTiX9Dhde9VcmsSQvWk9JK11ktevGPhPXAsMXuwsSt7OV1AWy+g9eKzoE5CD5cRCfi7Xe9VgzBASA8dKF5ueUkIjcbVl6VxsgX9y2qhEC62Z17nwfodwnsXj0lGnFr679yQKwnN1eWHsu+FuzxcscUzffbZSwfYU1BKrIWN921/gOEUmrLVgZFhbIfq24Z1gdBDd2O0WHO7XssPslCpGtYuBL68+DzxKlz6Fv2OZ89CRlcC16gUoFzjQp22h1FfMYjyzx9kXk8b79sPyK6ooQrO8xgDVZYKXLwFxuVGVG3UL7cjcXODY2neXU72sRMT8xeSQEeA0anep3OkX0Ckm8sLiVnU+Ybde8dExa3sfrtip0MxOcyc761EtAOurkx6HC9sY/ehgxv27Hv8hRRjctvg5ryF4/w2aNQR0WiwqdUOqhXU5pDJHsolgZH4RMaUmT+hGKwstB/lSmYDryKsZqKJP0boms= zuul@controller
Nov 25 10:52:44 np0005535469 systemd[1]: libpod-5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571.scope: Deactivated successfully.
Nov 25 10:52:44 np0005535469 podman[76449]: 2025-11-25 15:52:44.960056913 +0000 UTC m=+0.741748760 container died 5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571 (image=quay.io/ceph/ceph:v18, name=vigilant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 25 10:52:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-275a4a1024d469623cbe397ec3f67f9ec569e79543134002270d0e0e2c6d5da5-merged.mount: Deactivated successfully.
Nov 25 10:52:45 np0005535469 podman[76449]: 2025-11-25 15:52:45.004494285 +0000 UTC m=+0.786186122 container remove 5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571 (image=quay.io/ceph/ceph:v18, name=vigilant_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 10:52:45 np0005535469 systemd[1]: libpod-conmon-5ca6620292ebbddcb8c289310de88e09f584635ab5e880d9999e02ad333c1571.scope: Deactivated successfully.
Nov 25 10:52:45 np0005535469 podman[76503]: 2025-11-25 15:52:45.063540274 +0000 UTC m=+0.040062124 container create 6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c (image=quay.io/ceph/ceph:v18, name=dazzling_jang, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:45 np0005535469 systemd[1]: Started libpod-conmon-6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c.scope.
Nov 25 10:52:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7dc815738875c9d1ed562aeacd90d6a70f869768cbab6e971d21fb4e89b42e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7dc815738875c9d1ed562aeacd90d6a70f869768cbab6e971d21fb4e89b42e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7dc815738875c9d1ed562aeacd90d6a70f869768cbab6e971d21fb4e89b42e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:45 np0005535469 podman[76503]: 2025-11-25 15:52:45.135251766 +0000 UTC m=+0.111773636 container init 6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c (image=quay.io/ceph/ceph:v18, name=dazzling_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 10:52:45 np0005535469 podman[76503]: 2025-11-25 15:52:45.045063263 +0000 UTC m=+0.021585143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:45 np0005535469 podman[76503]: 2025-11-25 15:52:45.144440304 +0000 UTC m=+0.120962154 container start 6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c (image=quay.io/ceph/ceph:v18, name=dazzling_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:45 np0005535469 podman[76503]: 2025-11-25 15:52:45.147631258 +0000 UTC m=+0.124153158 container attach 6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c (image=quay.io/ceph/ceph:v18, name=dazzling_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052956 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:52:45 np0005535469 ceph-mon[74985]: Set ssh ssh_identity_pub
Nov 25 10:52:45 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:45 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:45 np0005535469 systemd-logind[791]: New session 21 of user ceph-admin.
Nov 25 10:52:45 np0005535469 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 10:52:45 np0005535469 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 10:52:46 np0005535469 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 10:52:46 np0005535469 systemd[1]: Starting User Manager for UID 42477...
Nov 25 10:52:46 np0005535469 systemd[76549]: Queued start job for default target Main User Target.
Nov 25 10:52:46 np0005535469 systemd-logind[791]: New session 23 of user ceph-admin.
Nov 25 10:52:46 np0005535469 systemd[76549]: Created slice User Application Slice.
Nov 25 10:52:46 np0005535469 systemd[76549]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 10:52:46 np0005535469 systemd[76549]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 10:52:46 np0005535469 systemd[76549]: Reached target Paths.
Nov 25 10:52:46 np0005535469 systemd[76549]: Reached target Timers.
Nov 25 10:52:46 np0005535469 systemd[76549]: Starting D-Bus User Message Bus Socket...
Nov 25 10:52:46 np0005535469 systemd[76549]: Starting Create User's Volatile Files and Directories...
Nov 25 10:52:46 np0005535469 systemd[76549]: Finished Create User's Volatile Files and Directories.
Nov 25 10:52:46 np0005535469 systemd[76549]: Listening on D-Bus User Message Bus Socket.
Nov 25 10:52:46 np0005535469 systemd[76549]: Reached target Sockets.
Nov 25 10:52:46 np0005535469 systemd[76549]: Reached target Basic System.
Nov 25 10:52:46 np0005535469 systemd[76549]: Reached target Main User Target.
Nov 25 10:52:46 np0005535469 systemd[76549]: Startup finished in 148ms.
Nov 25 10:52:46 np0005535469 systemd[1]: Started User Manager for UID 42477.
Nov 25 10:52:46 np0005535469 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 10:52:46 np0005535469 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 10:52:46 np0005535469 systemd-logind[791]: New session 24 of user ceph-admin.
Nov 25 10:52:46 np0005535469 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 10:52:47 np0005535469 systemd-logind[791]: New session 25 of user ceph-admin.
Nov 25 10:52:47 np0005535469 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 10:52:47 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 25 10:52:47 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 25 10:52:47 np0005535469 systemd-logind[791]: New session 26 of user ceph-admin.
Nov 25 10:52:47 np0005535469 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 10:52:47 np0005535469 ceph-mon[74985]: Deploying cephadm binary to compute-0
Nov 25 10:52:47 np0005535469 systemd-logind[791]: New session 27 of user ceph-admin.
Nov 25 10:52:47 np0005535469 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 10:52:47 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:48 np0005535469 systemd-logind[791]: New session 28 of user ceph-admin.
Nov 25 10:52:48 np0005535469 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 10:52:48 np0005535469 systemd-logind[791]: New session 29 of user ceph-admin.
Nov 25 10:52:48 np0005535469 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 10:52:48 np0005535469 systemd-logind[791]: New session 30 of user ceph-admin.
Nov 25 10:52:48 np0005535469 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 10:52:49 np0005535469 systemd-logind[791]: New session 31 of user ceph-admin.
Nov 25 10:52:49 np0005535469 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 10:52:49 np0005535469 systemd-logind[791]: New session 32 of user ceph-admin.
Nov 25 10:52:49 np0005535469 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 10:52:49 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:50 np0005535469 systemd-logind[791]: New session 33 of user ceph-admin.
Nov 25 10:52:50 np0005535469 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 10:52:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054708 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:52:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 10:52:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:50 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Added host compute-0
Nov 25 10:52:50 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 10:52:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 10:52:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 10:52:50 np0005535469 dazzling_jang[76519]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 10:52:50 np0005535469 systemd[1]: libpod-6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c.scope: Deactivated successfully.
Nov 25 10:52:50 np0005535469 podman[77162]: 2025-11-25 15:52:50.64468771 +0000 UTC m=+0.023098617 container died 6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c (image=quay.io/ceph/ceph:v18, name=dazzling_jang, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:52:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7b7dc815738875c9d1ed562aeacd90d6a70f869768cbab6e971d21fb4e89b42e-merged.mount: Deactivated successfully.
Nov 25 10:52:50 np0005535469 podman[77162]: 2025-11-25 15:52:50.682334603 +0000 UTC m=+0.060745490 container remove 6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c (image=quay.io/ceph/ceph:v18, name=dazzling_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:52:50 np0005535469 systemd[1]: libpod-conmon-6345de8507515ef4fc5598b521edf92df2016c04a448df531cc0f87e5f477f7c.scope: Deactivated successfully.
Nov 25 10:52:50 np0005535469 podman[77217]: 2025-11-25 15:52:50.744712951 +0000 UTC m=+0.038035106 container create 311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca (image=quay.io/ceph/ceph:v18, name=eager_bose, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:52:50 np0005535469 systemd[1]: Started libpod-conmon-311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca.scope.
Nov 25 10:52:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3704c6418a6fc22bb7ff5f065209ddf58870245fc6d2ea6aadeaec97529d98c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3704c6418a6fc22bb7ff5f065209ddf58870245fc6d2ea6aadeaec97529d98c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3704c6418a6fc22bb7ff5f065209ddf58870245fc6d2ea6aadeaec97529d98c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:50 np0005535469 podman[77217]: 2025-11-25 15:52:50.822246452 +0000 UTC m=+0.115568627 container init 311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca (image=quay.io/ceph/ceph:v18, name=eager_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:52:50 np0005535469 podman[77217]: 2025-11-25 15:52:50.728353551 +0000 UTC m=+0.021675726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:50 np0005535469 podman[77217]: 2025-11-25 15:52:50.830565626 +0000 UTC m=+0.123887781 container start 311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca (image=quay.io/ceph/ceph:v18, name=eager_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:50 np0005535469 podman[77217]: 2025-11-25 15:52:50.834385877 +0000 UTC m=+0.127708052 container attach 311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca (image=quay.io/ceph/ceph:v18, name=eager_bose, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.066331002 +0000 UTC m=+0.041828926 container create 8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e (image=quay.io/ceph/ceph:v18, name=youthful_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:51 np0005535469 systemd[1]: Started libpod-conmon-8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e.scope.
Nov 25 10:52:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.122021834 +0000 UTC m=+0.097519798 container init 8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e (image=quay.io/ceph/ceph:v18, name=youthful_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.130700148 +0000 UTC m=+0.106198092 container start 8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e (image=quay.io/ceph/ceph:v18, name=youthful_williamson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.138877708 +0000 UTC m=+0.114375642 container attach 8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e (image=quay.io/ceph/ceph:v18, name=youthful_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.044102181 +0000 UTC m=+0.019600135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:51 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:51 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 25 10:52:51 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:51 np0005535469 eager_bose[77259]: Scheduled mon update...
Nov 25 10:52:51 np0005535469 systemd[1]: libpod-311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca.scope: Deactivated successfully.
Nov 25 10:52:51 np0005535469 podman[77217]: 2025-11-25 15:52:51.421703183 +0000 UTC m=+0.715025358 container died 311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca (image=quay.io/ceph/ceph:v18, name=eager_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:51 np0005535469 youthful_williamson[77333]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 10:52:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3704c6418a6fc22bb7ff5f065209ddf58870245fc6d2ea6aadeaec97529d98c5-merged.mount: Deactivated successfully.
Nov 25 10:52:51 np0005535469 systemd[1]: libpod-8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e.scope: Deactivated successfully.
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.449238469 +0000 UTC m=+0.424736393 container died 8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e (image=quay.io/ceph/ceph:v18, name=youthful_williamson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 10:52:51 np0005535469 podman[77217]: 2025-11-25 15:52:51.46666452 +0000 UTC m=+0.759986675 container remove 311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca (image=quay.io/ceph/ceph:v18, name=eager_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:52:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-990f30cd8d7bd524a5f37c1407a0bcac6380dcebcc878eaae220b697c9c2c8ef-merged.mount: Deactivated successfully.
Nov 25 10:52:51 np0005535469 podman[77317]: 2025-11-25 15:52:51.498454101 +0000 UTC m=+0.473952025 container remove 8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e (image=quay.io/ceph/ceph:v18, name=youthful_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 10:52:51 np0005535469 systemd[1]: libpod-conmon-8efbab7a5e99006438bb7568f7fa6f2c9dee329eb780aa2f7662e16712cf991e.scope: Deactivated successfully.
Nov 25 10:52:51 np0005535469 systemd[1]: libpod-conmon-311e1a547625e969ddcfed88f430016f645bbcbc361cf02013934e74224b48ca.scope: Deactivated successfully.
Nov 25 10:52:51 np0005535469 podman[77377]: 2025-11-25 15:52:51.523611638 +0000 UTC m=+0.038721326 container create b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b (image=quay.io/ceph/ceph:v18, name=cranky_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:51 np0005535469 systemd[1]: Started libpod-conmon-b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b.scope.
Nov 25 10:52:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd3dd842ebb8bfc5cc7238935379b52673abcf25533e5ea8b69e39e581f0c7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd3dd842ebb8bfc5cc7238935379b52673abcf25533e5ea8b69e39e581f0c7b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd3dd842ebb8bfc5cc7238935379b52673abcf25533e5ea8b69e39e581f0c7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:51 np0005535469 podman[77377]: 2025-11-25 15:52:51.583189783 +0000 UTC m=+0.098299481 container init b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b (image=quay.io/ceph/ceph:v18, name=cranky_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: Added host compute-0
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:51 np0005535469 podman[77377]: 2025-11-25 15:52:51.587668534 +0000 UTC m=+0.102778222 container start b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b (image=quay.io/ceph/ceph:v18, name=cranky_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 10:52:51 np0005535469 podman[77377]: 2025-11-25 15:52:51.591123595 +0000 UTC m=+0.106233283 container attach b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b (image=quay.io/ceph/ceph:v18, name=cranky_mcclintock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:51 np0005535469 podman[77377]: 2025-11-25 15:52:51.507604559 +0000 UTC m=+0.022714267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:51 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:52:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:52 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:52 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 25 10:52:52 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:52 np0005535469 cranky_mcclintock[77403]: Scheduled mgr update...
Nov 25 10:52:52 np0005535469 systemd[1]: libpod-b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b.scope: Deactivated successfully.
Nov 25 10:52:52 np0005535469 podman[77377]: 2025-11-25 15:52:52.130447145 +0000 UTC m=+0.645556863 container died b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b (image=quay.io/ceph/ceph:v18, name=cranky_mcclintock, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dbd3dd842ebb8bfc5cc7238935379b52673abcf25533e5ea8b69e39e581f0c7b-merged.mount: Deactivated successfully.
Nov 25 10:52:52 np0005535469 podman[77377]: 2025-11-25 15:52:52.174485334 +0000 UTC m=+0.689595022 container remove b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b (image=quay.io/ceph/ceph:v18, name=cranky_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 10:52:52 np0005535469 systemd[1]: libpod-conmon-b9a229c5fa46e6eebfd9b3d7ede2bc2743457da1aef4c165c267c71fe989343b.scope: Deactivated successfully.
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.228084814 +0000 UTC m=+0.036240372 container create af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce (image=quay.io/ceph/ceph:v18, name=sharp_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 10:52:52 np0005535469 systemd[1]: Started libpod-conmon-af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce.scope.
Nov 25 10:52:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b203f5dc77cc28973701a34b518ecb3e67809e00de78d5f58d95d324923891a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b203f5dc77cc28973701a34b518ecb3e67809e00de78d5f58d95d324923891a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b203f5dc77cc28973701a34b518ecb3e67809e00de78d5f58d95d324923891a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.29483182 +0000 UTC m=+0.102987388 container init af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce (image=quay.io/ceph/ceph:v18, name=sharp_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.300394223 +0000 UTC m=+0.108549781 container start af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce (image=quay.io/ceph/ceph:v18, name=sharp_joliot, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.304502924 +0000 UTC m=+0.112658502 container attach af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce (image=quay.io/ceph/ceph:v18, name=sharp_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.213653252 +0000 UTC m=+0.021808830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:52 np0005535469 podman[77755]: 2025-11-25 15:52:52.640294691 +0000 UTC m=+0.068471328 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:52:52 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:52 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service crash spec with placement *
Nov 25 10:52:52 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:52 np0005535469 sharp_joliot[77677]: Scheduled crash update...
Nov 25 10:52:52 np0005535469 systemd[1]: libpod-af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce.scope: Deactivated successfully.
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.865634111 +0000 UTC m=+0.673789669 container died af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce (image=quay.io/ceph/ceph:v18, name=sharp_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 10:52:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7b203f5dc77cc28973701a34b518ecb3e67809e00de78d5f58d95d324923891a-merged.mount: Deactivated successfully.
Nov 25 10:52:52 np0005535469 podman[77656]: 2025-11-25 15:52:52.910713802 +0000 UTC m=+0.718869360 container remove af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce (image=quay.io/ceph/ceph:v18, name=sharp_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:52 np0005535469 systemd[1]: libpod-conmon-af544aff728166fb376eee842504ab22cbbaf9e112f38899cc4080c6899fccce.scope: Deactivated successfully.
Nov 25 10:52:52 np0005535469 podman[77805]: 2025-11-25 15:52:52.973368547 +0000 UTC m=+0.042664421 container create 7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5 (image=quay.io/ceph/ceph:v18, name=condescending_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: Saving service mon spec with placement count:5
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: Saving service mgr spec with placement count:2
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:52 np0005535469 podman[77755]: 2025-11-25 15:52:52.999217925 +0000 UTC m=+0.427394542 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:53 np0005535469 systemd[1]: Started libpod-conmon-7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5.scope.
Nov 25 10:52:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89c120b3655d19e430348b4c3f2e3c0c5076f671d5387cd2816ec5dd19e4b74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89c120b3655d19e430348b4c3f2e3c0c5076f671d5387cd2816ec5dd19e4b74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89c120b3655d19e430348b4c3f2e3c0c5076f671d5387cd2816ec5dd19e4b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:53 np0005535469 podman[77805]: 2025-11-25 15:52:53.044305056 +0000 UTC m=+0.113600950 container init 7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5 (image=quay.io/ceph/ceph:v18, name=condescending_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:53 np0005535469 podman[77805]: 2025-11-25 15:52:52.954203706 +0000 UTC m=+0.023499630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:53 np0005535469 podman[77805]: 2025-11-25 15:52:53.050569239 +0000 UTC m=+0.119865113 container start 7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5 (image=quay.io/ceph/ceph:v18, name=condescending_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:53 np0005535469 podman[77805]: 2025-11-25 15:52:53.053765213 +0000 UTC m=+0.123061087 container attach 7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5 (image=quay.io/ceph/ceph:v18, name=condescending_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:52:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 25 10:52:53 np0005535469 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77984 (sysctl)
Nov 25 10:52:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2908949716' entity='client.admin' 
Nov 25 10:52:53 np0005535469 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 10:52:53 np0005535469 systemd[1]: libpod-7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5.scope: Deactivated successfully.
Nov 25 10:52:53 np0005535469 podman[77805]: 2025-11-25 15:52:53.58892106 +0000 UTC m=+0.658216934 container died 7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5 (image=quay.io/ceph/ceph:v18, name=condescending_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:53 np0005535469 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 10:52:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f89c120b3655d19e430348b4c3f2e3c0c5076f671d5387cd2816ec5dd19e4b74-merged.mount: Deactivated successfully.
Nov 25 10:52:53 np0005535469 podman[77805]: 2025-11-25 15:52:53.780380729 +0000 UTC m=+0.849676603 container remove 7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5 (image=quay.io/ceph/ceph:v18, name=condescending_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:53 np0005535469 systemd[1]: libpod-conmon-7d864836e9232ab29e553b6ade761e496a236916a96e97aadbeb80baa84218c5.scope: Deactivated successfully.
Nov 25 10:52:53 np0005535469 podman[78010]: 2025-11-25 15:52:53.824425448 +0000 UTC m=+0.021583732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:53 np0005535469 podman[78010]: 2025-11-25 15:52:53.92105987 +0000 UTC m=+0.118218114 container create 3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789 (image=quay.io/ceph/ceph:v18, name=zen_bhabha, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:53 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:53 np0005535469 systemd[1]: Started libpod-conmon-3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789.scope.
Nov 25 10:52:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e12c5f1e009fb79c7cad680e737f8e7dd442be3637309ea3075d2489b427e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e12c5f1e009fb79c7cad680e737f8e7dd442be3637309ea3075d2489b427e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e12c5f1e009fb79c7cad680e737f8e7dd442be3637309ea3075d2489b427e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:54 np0005535469 podman[78010]: 2025-11-25 15:52:54.04532114 +0000 UTC m=+0.242479404 container init 3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789 (image=quay.io/ceph/ceph:v18, name=zen_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:54 np0005535469 podman[78010]: 2025-11-25 15:52:54.051699866 +0000 UTC m=+0.248858110 container start 3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789 (image=quay.io/ceph/ceph:v18, name=zen_bhabha, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 10:52:54 np0005535469 podman[78010]: 2025-11-25 15:52:54.054605561 +0000 UTC m=+0.251763805 container attach 3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789 (image=quay.io/ceph/ceph:v18, name=zen_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: Saving service crash spec with placement *
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2908949716' entity='client.admin' 
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:54 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 25 10:52:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:54 np0005535469 systemd[1]: libpod-3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789.scope: Deactivated successfully.
Nov 25 10:52:54 np0005535469 podman[78010]: 2025-11-25 15:52:54.62393532 +0000 UTC m=+0.821093564 container died 3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789 (image=quay.io/ceph/ceph:v18, name=zen_bhabha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-61e12c5f1e009fb79c7cad680e737f8e7dd442be3637309ea3075d2489b427e3-merged.mount: Deactivated successfully.
Nov 25 10:52:54 np0005535469 podman[78010]: 2025-11-25 15:52:54.690323144 +0000 UTC m=+0.887481388 container remove 3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789 (image=quay.io/ceph/ceph:v18, name=zen_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:52:54 np0005535469 systemd[1]: libpod-conmon-3bfe7de890e3efe65fa313306092e846e1a30732bc37cd5f5910a47e5975f789.scope: Deactivated successfully.
Nov 25 10:52:54 np0005535469 podman[78294]: 2025-11-25 15:52:54.747730736 +0000 UTC m=+0.036553771 container create 2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7 (image=quay.io/ceph/ceph:v18, name=serene_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:54 np0005535469 systemd[1]: Started libpod-conmon-2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7.scope.
Nov 25 10:52:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98946df4c08d6bc722c6fcfa68710261192657b2ac1c4b75bc3a9e95dd67bbe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98946df4c08d6bc722c6fcfa68710261192657b2ac1c4b75bc3a9e95dd67bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98946df4c08d6bc722c6fcfa68710261192657b2ac1c4b75bc3a9e95dd67bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:54 np0005535469 podman[78294]: 2025-11-25 15:52:54.822860327 +0000 UTC m=+0.111683382 container init 2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7 (image=quay.io/ceph/ceph:v18, name=serene_feynman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:54 np0005535469 podman[78294]: 2025-11-25 15:52:54.731105029 +0000 UTC m=+0.019928094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:54 np0005535469 podman[78294]: 2025-11-25 15:52:54.829213133 +0000 UTC m=+0.118036168 container start 2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7 (image=quay.io/ceph/ceph:v18, name=serene_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:54 np0005535469 podman[78294]: 2025-11-25 15:52:54.835365213 +0000 UTC m=+0.124188278 container attach 2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7 (image=quay.io/ceph/ceph:v18, name=serene_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:54 np0005535469 podman[78356]: 2025-11-25 15:52:54.944763688 +0000 UTC m=+0.044888626 container create 32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:54 np0005535469 systemd[1]: Started libpod-conmon-32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201.scope.
Nov 25 10:52:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:54 np0005535469 podman[78356]: 2025-11-25 15:52:54.997897085 +0000 UTC m=+0.098022113 container init 32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:55 np0005535469 podman[78356]: 2025-11-25 15:52:55.002816119 +0000 UTC m=+0.102941057 container start 32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:55 np0005535469 naughty_hypatia[78372]: 167 167
Nov 25 10:52:55 np0005535469 systemd[1]: libpod-32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201.scope: Deactivated successfully.
Nov 25 10:52:55 np0005535469 podman[78356]: 2025-11-25 15:52:55.008292549 +0000 UTC m=+0.108417587 container attach 32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:52:55 np0005535469 podman[78356]: 2025-11-25 15:52:55.008766123 +0000 UTC m=+0.108891101 container died 32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 10:52:55 np0005535469 podman[78356]: 2025-11-25 15:52:54.924950558 +0000 UTC m=+0.025075516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:52:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ecb88c4b68570a37ab9af943a5e476ef225ffae742415159ad97218e32cd3266-merged.mount: Deactivated successfully.
Nov 25 10:52:55 np0005535469 podman[78356]: 2025-11-25 15:52:55.06601569 +0000 UTC m=+0.166140628 container remove 32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:55 np0005535469 systemd[1]: libpod-conmon-32d3c0e96ba44571842485886ffae3615e7e05faeae1534644691e774f9a5201.scope: Deactivated successfully.
Nov 25 10:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:52:55 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 10:52:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:55 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Added label _admin to host compute-0
Nov 25 10:52:55 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 25 10:52:55 np0005535469 serene_feynman[78330]: Added label _admin to host compute-0
Nov 25 10:52:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:55 np0005535469 systemd[1]: libpod-2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7.scope: Deactivated successfully.
Nov 25 10:52:55 np0005535469 podman[78294]: 2025-11-25 15:52:55.424197943 +0000 UTC m=+0.713020988 container died 2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7 (image=quay.io/ceph/ceph:v18, name=serene_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d98946df4c08d6bc722c6fcfa68710261192657b2ac1c4b75bc3a9e95dd67bbe-merged.mount: Deactivated successfully.
Nov 25 10:52:55 np0005535469 podman[78294]: 2025-11-25 15:52:55.486224129 +0000 UTC m=+0.775047164 container remove 2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7 (image=quay.io/ceph/ceph:v18, name=serene_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:55 np0005535469 systemd[1]: libpod-conmon-2e85c49e3443bc8d9ae78788767e1de6e5fa76bbe830feaf800e735f5123c0b7.scope: Deactivated successfully.
Nov 25 10:52:55 np0005535469 podman[78424]: 2025-11-25 15:52:55.548030661 +0000 UTC m=+0.042228699 container create 7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209 (image=quay.io/ceph/ceph:v18, name=brave_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:52:55 np0005535469 systemd[1]: Started libpod-conmon-7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209.scope.
Nov 25 10:52:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54e8ea87391e4e27887f402836e37dcadbafae04015252572742407d5326c482/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54e8ea87391e4e27887f402836e37dcadbafae04015252572742407d5326c482/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54e8ea87391e4e27887f402836e37dcadbafae04015252572742407d5326c482/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:55 np0005535469 podman[78424]: 2025-11-25 15:52:55.613784097 +0000 UTC m=+0.107982145 container init 7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209 (image=quay.io/ceph/ceph:v18, name=brave_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:55 np0005535469 podman[78424]: 2025-11-25 15:52:55.618516356 +0000 UTC m=+0.112714374 container start 7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209 (image=quay.io/ceph/ceph:v18, name=brave_jang, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:55 np0005535469 podman[78424]: 2025-11-25 15:52:55.526612513 +0000 UTC m=+0.020810571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:55 np0005535469 podman[78424]: 2025-11-25 15:52:55.623174451 +0000 UTC m=+0.117372469 container attach 7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209 (image=quay.io/ceph/ceph:v18, name=brave_jang, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:52:55 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3517990277' entity='client.admin' 
Nov 25 10:52:56 np0005535469 systemd[1]: libpod-7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209.scope: Deactivated successfully.
Nov 25 10:52:56 np0005535469 podman[78424]: 2025-11-25 15:52:56.141346351 +0000 UTC m=+0.635544379 container died 7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209 (image=quay.io/ceph/ceph:v18, name=brave_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-54e8ea87391e4e27887f402836e37dcadbafae04015252572742407d5326c482-merged.mount: Deactivated successfully.
Nov 25 10:52:56 np0005535469 podman[78424]: 2025-11-25 15:52:56.183691531 +0000 UTC m=+0.677889549 container remove 7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209 (image=quay.io/ceph/ceph:v18, name=brave_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:56 np0005535469 systemd[1]: libpod-conmon-7687abdf9f2355e491b45f6a6bb18e1e0e3e1a43769ef8c30b1329f8fd803209.scope: Deactivated successfully.
Nov 25 10:52:56 np0005535469 podman[78478]: 2025-11-25 15:52:56.249539451 +0000 UTC m=+0.043880007 container create 80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960 (image=quay.io/ceph/ceph:v18, name=reverent_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:56 np0005535469 systemd[1]: Started libpod-conmon-80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960.scope.
Nov 25 10:52:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99f93e14d060435648ea62464054d90c4cd458c53c54637c4920f003e60254f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99f93e14d060435648ea62464054d90c4cd458c53c54637c4920f003e60254f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99f93e14d060435648ea62464054d90c4cd458c53c54637c4920f003e60254f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:56 np0005535469 podman[78478]: 2025-11-25 15:52:56.324730834 +0000 UTC m=+0.119071470 container init 80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960 (image=quay.io/ceph/ceph:v18, name=reverent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:56 np0005535469 podman[78478]: 2025-11-25 15:52:56.231165652 +0000 UTC m=+0.025506258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:56 np0005535469 podman[78478]: 2025-11-25 15:52:56.331734448 +0000 UTC m=+0.126075034 container start 80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960 (image=quay.io/ceph/ceph:v18, name=reverent_solomon, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 10:52:56 np0005535469 podman[78478]: 2025-11-25 15:52:56.337488907 +0000 UTC m=+0.131829463 container attach 80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960 (image=quay.io/ceph/ceph:v18, name=reverent_solomon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: Added label _admin to host compute-0
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3517990277' entity='client.admin' 
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 25 10:52:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/480383733' entity='client.admin' 
Nov 25 10:52:56 np0005535469 reverent_solomon[78494]: set mgr/dashboard/cluster/status
Nov 25 10:52:56 np0005535469 systemd[1]: libpod-80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960.scope: Deactivated successfully.
Nov 25 10:52:56 np0005535469 podman[78478]: 2025-11-25 15:52:56.968963736 +0000 UTC m=+0.763304292 container died 80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960 (image=quay.io/ceph/ceph:v18, name=reverent_solomon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e99f93e14d060435648ea62464054d90c4cd458c53c54637c4920f003e60254f-merged.mount: Deactivated successfully.
Nov 25 10:52:57 np0005535469 podman[78478]: 2025-11-25 15:52:57.031183279 +0000 UTC m=+0.825523835 container remove 80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960 (image=quay.io/ceph/ceph:v18, name=reverent_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:57 np0005535469 systemd[1]: libpod-conmon-80802b4517e9c5aa6fcffa8071cbd2cb8e30dc8aa239f3eb1cbf58610db30960.scope: Deactivated successfully.
Nov 25 10:52:57 np0005535469 podman[78539]: 2025-11-25 15:52:57.213794398 +0000 UTC m=+0.029886207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:52:57 np0005535469 podman[78539]: 2025-11-25 15:52:57.456741935 +0000 UTC m=+0.272833724 container create 07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:52:57 np0005535469 python3[78579]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:52:57 np0005535469 systemd[1]: Started libpod-conmon-07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab.scope.
Nov 25 10:52:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d5591b448764075b6846e2f16690acaa7a6bffb9c0786afb7dc670f58f8ec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d5591b448764075b6846e2f16690acaa7a6bffb9c0786afb7dc670f58f8ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d5591b448764075b6846e2f16690acaa7a6bffb9c0786afb7dc670f58f8ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d5591b448764075b6846e2f16690acaa7a6bffb9c0786afb7dc670f58f8ec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:57 np0005535469 podman[78580]: 2025-11-25 15:52:57.779900911 +0000 UTC m=+0.040485066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:57 np0005535469 podman[78580]: 2025-11-25 15:52:57.875750799 +0000 UTC m=+0.136334934 container create 50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0 (image=quay.io/ceph/ceph:v18, name=wizardly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 10:52:57 np0005535469 podman[78539]: 2025-11-25 15:52:57.891759658 +0000 UTC m=+0.707851457 container init 07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 10:52:57 np0005535469 podman[78539]: 2025-11-25 15:52:57.903058589 +0000 UTC m=+0.719150378 container start 07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:52:57 np0005535469 podman[78539]: 2025-11-25 15:52:57.911742594 +0000 UTC m=+0.727834393 container attach 07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:52:57 np0005535469 ceph-mgr[75280]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 10:52:57 np0005535469 systemd[1]: Started libpod-conmon-50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0.scope.
Nov 25 10:52:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6ee5f7de81fa1ed3d0fb2d4526761bc10adad46ef0a2a8123d81eacc27156/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:57 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/480383733' entity='client.admin' 
Nov 25 10:52:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6ee5f7de81fa1ed3d0fb2d4526761bc10adad46ef0a2a8123d81eacc27156/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:57 np0005535469 podman[78580]: 2025-11-25 15:52:57.980966482 +0000 UTC m=+0.241550647 container init 50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0 (image=quay.io/ceph/ceph:v18, name=wizardly_shannon, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 10:52:57 np0005535469 podman[78580]: 2025-11-25 15:52:57.987303577 +0000 UTC m=+0.247887702 container start 50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0 (image=quay.io/ceph/ceph:v18, name=wizardly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:52:57 np0005535469 podman[78580]: 2025-11-25 15:52:57.994381725 +0000 UTC m=+0.254965860 container attach 50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0 (image=quay.io/ceph/ceph:v18, name=wizardly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 10:52:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 25 10:52:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1270384035' entity='client.admin' 
Nov 25 10:52:58 np0005535469 systemd[1]: libpod-50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0.scope: Deactivated successfully.
Nov 25 10:52:58 np0005535469 podman[78580]: 2025-11-25 15:52:58.527878103 +0000 UTC m=+0.788462268 container died 50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0 (image=quay.io/ceph/ceph:v18, name=wizardly_shannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:52:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a0d6ee5f7de81fa1ed3d0fb2d4526761bc10adad46ef0a2a8123d81eacc27156-merged.mount: Deactivated successfully.
Nov 25 10:52:58 np0005535469 podman[78580]: 2025-11-25 15:52:58.570133381 +0000 UTC m=+0.830717516 container remove 50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0 (image=quay.io/ceph/ceph:v18, name=wizardly_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:58 np0005535469 systemd[1]: libpod-conmon-50b1b23dcc67f1ef8753a23db19e8b12e1eaf602352a33badc8101441767a1f0.scope: Deactivated successfully.
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]: [
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:    {
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "available": false,
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "ceph_device": false,
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "lsm_data": {},
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "lvs": [],
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "path": "/dev/sr0",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "rejected_reasons": [
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "Insufficient space (<5GB)",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "Has a FileSystem"
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        ],
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        "sys_api": {
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "actuators": null,
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "device_nodes": "sr0",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "devname": "sr0",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "human_readable_size": "482.00 KB",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "id_bus": "ata",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "model": "QEMU DVD-ROM",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "nr_requests": "2",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "parent": "/dev/sr0",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "partitions": {},
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "path": "/dev/sr0",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "removable": "1",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "rev": "2.5+",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "ro": "0",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "rotational": "1",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "sas_address": "",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "sas_device_handle": "",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "scheduler_mode": "mq-deadline",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "sectors": 0,
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "sectorsize": "2048",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "size": 493568.0,
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "support_discard": "2048",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "type": "disk",
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:            "vendor": "QEMU"
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:        }
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]:    }
Nov 25 10:52:59 np0005535469 goofy_cerf[78592]: ]
Nov 25 10:52:59 np0005535469 systemd[1]: libpod-07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab.scope: Deactivated successfully.
Nov 25 10:52:59 np0005535469 podman[78539]: 2025-11-25 15:52:59.226074506 +0000 UTC m=+2.042166295 container died 07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:52:59 np0005535469 systemd[1]: libpod-07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab.scope: Consumed 1.353s CPU time.
Nov 25 10:52:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-79d5591b448764075b6846e2f16690acaa7a6bffb9c0786afb7dc670f58f8ec4-merged.mount: Deactivated successfully.
Nov 25 10:52:59 np0005535469 podman[78539]: 2025-11-25 15:52:59.279057428 +0000 UTC m=+2.095149217 container remove 07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:52:59 np0005535469 systemd[1]: libpod-conmon-07b06a73f84530b16c364b80faf898c8c1da8ed201c7cbb5151e0fd8cc0436ab.scope: Deactivated successfully.
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:52:59 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 10:52:59 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 10:52:59 np0005535469 ansible-async_wrapper.py[80709]: Invoked with j341092295275 30 /home/zuul/.ansible/tmp/ansible-tmp-1764085978.8967173-36715-50410568624369/AnsiballZ_command.py _
Nov 25 10:52:59 np0005535469 ansible-async_wrapper.py[80764]: Starting module and watcher
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1270384035' entity='client.admin' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:52:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:52:59 np0005535469 ansible-async_wrapper.py[80764]: Start watching 80765 (30)
Nov 25 10:52:59 np0005535469 ansible-async_wrapper.py[80765]: Start module (80765)
Nov 25 10:52:59 np0005535469 ansible-async_wrapper.py[80709]: Return async_wrapper task started.
Nov 25 10:52:59 np0005535469 python3[80767]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:52:59 np0005535469 podman[80835]: 2025-11-25 15:52:59.736030575 +0000 UTC m=+0.053552260 container create 21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3 (image=quay.io/ceph/ceph:v18, name=quirky_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:52:59 np0005535469 systemd[1]: Started libpod-conmon-21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3.scope.
Nov 25 10:52:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:52:59 np0005535469 podman[80835]: 2025-11-25 15:52:59.710700613 +0000 UTC m=+0.028222308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:52:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03981c51c49bfc6d2ff28d536ec7dbb302d024d5efc3a1e523b4d6f16416d11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03981c51c49bfc6d2ff28d536ec7dbb302d024d5efc3a1e523b4d6f16416d11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:52:59 np0005535469 podman[80835]: 2025-11-25 15:52:59.829224715 +0000 UTC m=+0.146746370 container init 21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3 (image=quay.io/ceph/ceph:v18, name=quirky_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:52:59 np0005535469 podman[80835]: 2025-11-25 15:52:59.837247701 +0000 UTC m=+0.154769356 container start 21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3 (image=quay.io/ceph/ceph:v18, name=quirky_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:52:59 np0005535469 podman[80835]: 2025-11-25 15:52:59.841450733 +0000 UTC m=+0.158972418 container attach 21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3 (image=quay.io/ceph/ceph:v18, name=quirky_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:52:59 np0005535469 ceph-mgr[75280]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 10:52:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:00 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 10:53:00 np0005535469 quirky_mccarthy[80881]: 
Nov 25 10:53:00 np0005535469 quirky_mccarthy[80881]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 10:53:00 np0005535469 systemd[1]: libpod-21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3.scope: Deactivated successfully.
Nov 25 10:53:00 np0005535469 podman[80835]: 2025-11-25 15:53:00.373399887 +0000 UTC m=+0.690921562 container died 21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3 (image=quay.io/ceph/ceph:v18, name=quirky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d03981c51c49bfc6d2ff28d536ec7dbb302d024d5efc3a1e523b4d6f16416d11-merged.mount: Deactivated successfully.
Nov 25 10:53:00 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/config/ceph.conf
Nov 25 10:53:00 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/config/ceph.conf
Nov 25 10:53:00 np0005535469 podman[80835]: 2025-11-25 15:53:00.437240197 +0000 UTC m=+0.754761852 container remove 21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3 (image=quay.io/ceph/ceph:v18, name=quirky_mccarthy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:53:00 np0005535469 systemd[1]: libpod-conmon-21f39c9c25816b201a08121f2ee1e29214b2452902def4cd239103ae25b31de3.scope: Deactivated successfully.
Nov 25 10:53:00 np0005535469 ansible-async_wrapper.py[80765]: Module complete (80765)
Nov 25 10:53:00 np0005535469 ceph-mon[74985]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 10:53:00 np0005535469 python3[81386]: ansible-ansible.legacy.async_status Invoked with jid=j341092295275.80709 mode=status _async_dir=/root/.ansible_async
Nov 25 10:53:01 np0005535469 python3[81560]: ansible-ansible.legacy.async_status Invoked with jid=j341092295275.80709 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 10:53:01 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 10:53:01 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 10:53:01 np0005535469 ceph-mon[74985]: Updating compute-0:/var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/config/ceph.conf
Nov 25 10:53:01 np0005535469 python3[81763]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:53:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:02 np0005535469 python3[81966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:02 np0005535469 podman[82041]: 2025-11-25 15:53:02.355388418 +0000 UTC m=+0.054761616 container create 99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a (image=quay.io/ceph/ceph:v18, name=compassionate_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:02 np0005535469 systemd[1]: Started libpod-conmon-99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a.scope.
Nov 25 10:53:02 np0005535469 podman[82041]: 2025-11-25 15:53:02.325832262 +0000 UTC m=+0.025205470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c257ef51fcdea375b2eaa9c46797b6e1c62c730cfa7731b086ea98131ddc4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c257ef51fcdea375b2eaa9c46797b6e1c62c730cfa7731b086ea98131ddc4c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c257ef51fcdea375b2eaa9c46797b6e1c62c730cfa7731b086ea98131ddc4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:02 np0005535469 podman[82041]: 2025-11-25 15:53:02.482847552 +0000 UTC m=+0.182220760 container init 99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a (image=quay.io/ceph/ceph:v18, name=compassionate_euler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:53:02 np0005535469 podman[82041]: 2025-11-25 15:53:02.492190385 +0000 UTC m=+0.191563573 container start 99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a (image=quay.io/ceph/ceph:v18, name=compassionate_euler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:02 np0005535469 podman[82041]: 2025-11-25 15:53:02.507267417 +0000 UTC m=+0.206640605 container attach 99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a (image=quay.io/ceph/ceph:v18, name=compassionate_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 10:53:02 np0005535469 ceph-mon[74985]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 10:53:02 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/config/ceph.client.admin.keyring
Nov 25 10:53:02 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/config/ceph.client.admin.keyring
Nov 25 10:53:03 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 10:53:03 np0005535469 compassionate_euler[82102]: 
Nov 25 10:53:03 np0005535469 compassionate_euler[82102]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 10:53:03 np0005535469 systemd[1]: libpod-99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a.scope: Deactivated successfully.
Nov 25 10:53:03 np0005535469 podman[82041]: 2025-11-25 15:53:03.048663327 +0000 UTC m=+0.748036545 container died 99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a (image=quay.io/ceph/ceph:v18, name=compassionate_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-49c257ef51fcdea375b2eaa9c46797b6e1c62c730cfa7731b086ea98131ddc4c-merged.mount: Deactivated successfully.
Nov 25 10:53:03 np0005535469 podman[82041]: 2025-11-25 15:53:03.121880041 +0000 UTC m=+0.821253239 container remove 99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a (image=quay.io/ceph/ceph:v18, name=compassionate_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:53:03 np0005535469 systemd[1]: libpod-conmon-99007a803b7177d7d74988373e63510120bc350a0e9562f58b1a20c9df2ad09a.scope: Deactivated successfully.
Nov 25 10:53:03 np0005535469 python3[82579]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: Updating compute-0:/var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/config/ceph.client.admin.keyring
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:03 np0005535469 podman[82636]: 2025-11-25 15:53:03.646674025 +0000 UTC m=+0.063930783 container create e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b (image=quay.io/ceph/ceph:v18, name=relaxed_spence, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:03 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 6c9c59e7-5e08-4685-a581-5863bf69bad5 (Updating crash deployment (+1 -> 1))
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 10:53:03 np0005535469 podman[82636]: 2025-11-25 15:53:03.611910717 +0000 UTC m=+0.029167515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 10:53:03 np0005535469 systemd[1]: Started libpod-conmon-e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b.scope.
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:03 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 25 10:53:03 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 25 10:53:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fab2a3030405819ce2b30ab0e33b9d24c500fa2fc1e502f35311ab578aa521/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fab2a3030405819ce2b30ab0e33b9d24c500fa2fc1e502f35311ab578aa521/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fab2a3030405819ce2b30ab0e33b9d24c500fa2fc1e502f35311ab578aa521/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:03 np0005535469 podman[82636]: 2025-11-25 15:53:03.761044045 +0000 UTC m=+0.178300813 container init e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b (image=quay.io/ceph/ceph:v18, name=relaxed_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:03 np0005535469 podman[82636]: 2025-11-25 15:53:03.766976529 +0000 UTC m=+0.184233277 container start e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b (image=quay.io/ceph/ceph:v18, name=relaxed_spence, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:03 np0005535469 podman[82636]: 2025-11-25 15:53:03.771353228 +0000 UTC m=+0.188609996 container attach e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b (image=quay.io/ceph/ceph:v18, name=relaxed_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 10:53:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.260143226 +0000 UTC m=+0.041901279 container create 3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 10:53:04 np0005535469 systemd[1]: Started libpod-conmon-3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800.scope.
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722947856' entity='client.admin' 
Nov 25 10:53:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.239144651 +0000 UTC m=+0.020902684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:04 np0005535469 systemd[1]: libpod-e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b.scope: Deactivated successfully.
Nov 25 10:53:04 np0005535469 podman[82636]: 2025-11-25 15:53:04.337858063 +0000 UTC m=+0.755114811 container died e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b (image=quay.io/ceph/ceph:v18, name=relaxed_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.357370064 +0000 UTC m=+0.139128107 container init 3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williams, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 10:53:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-45fab2a3030405819ce2b30ab0e33b9d24c500fa2fc1e502f35311ab578aa521-merged.mount: Deactivated successfully.
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.362586147 +0000 UTC m=+0.144344160 container start 3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williams, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 10:53:04 np0005535469 mystifying_williams[82858]: 167 167
Nov 25 10:53:04 np0005535469 systemd[1]: libpod-3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800.scope: Deactivated successfully.
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.36781078 +0000 UTC m=+0.149568833 container attach 3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.368550662 +0000 UTC m=+0.150308685 container died 3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williams, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:04 np0005535469 podman[82636]: 2025-11-25 15:53:04.400504399 +0000 UTC m=+0.817761147 container remove e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b (image=quay.io/ceph/ceph:v18, name=relaxed_spence, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-110dba0c78d7fa187f53cd5207489c52c39cfc1849ce43c6ce3948a372c47833-merged.mount: Deactivated successfully.
Nov 25 10:53:04 np0005535469 systemd[1]: libpod-conmon-e969c5a9321a1fbb4b243701b7b491fc317a0d1f31ea25f31e4519c4ceca0d9b.scope: Deactivated successfully.
Nov 25 10:53:04 np0005535469 podman[82841]: 2025-11-25 15:53:04.429425695 +0000 UTC m=+0.211183718 container remove 3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:04 np0005535469 systemd[1]: libpod-conmon-3d356e2fbc0ceb8fc22c83e349ba43c7df701a8c99917491e80570029486c800.scope: Deactivated successfully.
Nov 25 10:53:04 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:04 np0005535469 ansible-async_wrapper.py[80764]: Done in kid B.
Nov 25 10:53:04 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:04 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: Deploying daemon crash.compute-0 on compute-0
Nov 25 10:53:04 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/722947856' entity='client.admin' 
Nov 25 10:53:04 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:04 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:04 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:04 np0005535469 python3[82956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:04 np0005535469 podman[82995]: 2025-11-25 15:53:04.940568099 +0000 UTC m=+0.057267709 container create 96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c (image=quay.io/ceph/ceph:v18, name=priceless_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 25 10:53:05 np0005535469 podman[82995]: 2025-11-25 15:53:04.912813185 +0000 UTC m=+0.029512885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:05 np0005535469 systemd[1]: Started libpod-conmon-96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c.scope.
Nov 25 10:53:05 np0005535469 systemd[1]: Starting Ceph crash.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:53:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1eb06dc30972295cb5f776ade9388060b669c72537d5fb4f3efb1840671852f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1eb06dc30972295cb5f776ade9388060b669c72537d5fb4f3efb1840671852f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1eb06dc30972295cb5f776ade9388060b669c72537d5fb4f3efb1840671852f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 podman[82995]: 2025-11-25 15:53:05.083350151 +0000 UTC m=+0.200049811 container init 96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c (image=quay.io/ceph/ceph:v18, name=priceless_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:05 np0005535469 podman[82995]: 2025-11-25 15:53:05.099618388 +0000 UTC m=+0.216318008 container start 96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c (image=quay.io/ceph/ceph:v18, name=priceless_euler, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:53:05 np0005535469 podman[82995]: 2025-11-25 15:53:05.103475421 +0000 UTC m=+0.220175051 container attach 96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c (image=quay.io/ceph/ceph:v18, name=priceless_euler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:05 np0005535469 podman[83065]: 2025-11-25 15:53:05.322149267 +0000 UTC m=+0.082679243 container create 5ca39303867613b73b3b5fb1bb14c7528c361df85e5d779b30da163706302d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:05 np0005535469 podman[83065]: 2025-11-25 15:53:05.266152997 +0000 UTC m=+0.026682953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c810fa7e70919c15883e2227f824433c8541b67e157964697e69cd62bcee8fcf/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c810fa7e70919c15883e2227f824433c8541b67e157964697e69cd62bcee8fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c810fa7e70919c15883e2227f824433c8541b67e157964697e69cd62bcee8fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c810fa7e70919c15883e2227f824433c8541b67e157964697e69cd62bcee8fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:05 np0005535469 podman[83065]: 2025-11-25 15:53:05.394772274 +0000 UTC m=+0.155302200 container init 5ca39303867613b73b3b5fb1bb14c7528c361df85e5d779b30da163706302d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 10:53:05 np0005535469 podman[83065]: 2025-11-25 15:53:05.402844521 +0000 UTC m=+0.163374457 container start 5ca39303867613b73b3b5fb1bb14c7528c361df85e5d779b30da163706302d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 10:53:05 np0005535469 bash[83065]: 5ca39303867613b73b3b5fb1bb14c7528c361df85e5d779b30da163706302d39
Nov 25 10:53:05 np0005535469 systemd[1]: Started Ceph crash.compute-0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 6c9c59e7-5e08-4685-a581-5863bf69bad5 (Updating crash deployment (+1 -> 1))
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 6c9c59e7-5e08-4685-a581-5863bf69bad5 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c4d665c3-46c6-4aaf-9429-25ac090d962e does not exist
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 367f7550-76b5-46eb-894d-c39ea17fe796 (Updating mgr deployment (+1 -> 2))
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mfjlcc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mfjlcc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mfjlcc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.mfjlcc on compute-0
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.mfjlcc on compute-0
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 25 10:53:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/10425561' entity='client.admin' 
Nov 25 10:53:05 np0005535469 systemd[1]: libpod-96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c.scope: Deactivated successfully.
Nov 25 10:53:05 np0005535469 podman[83182]: 2025-11-25 15:53:05.718598641 +0000 UTC m=+0.030623889 container died 96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c (image=quay.io/ceph/ceph:v18, name=priceless_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:53:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c1eb06dc30972295cb5f776ade9388060b669c72537d5fb4f3efb1840671852f-merged.mount: Deactivated successfully.
Nov 25 10:53:05 np0005535469 podman[83182]: 2025-11-25 15:53:05.766397241 +0000 UTC m=+0.078422489 container remove 96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c (image=quay.io/ceph/ceph:v18, name=priceless_euler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:05 np0005535469 systemd[1]: libpod-conmon-96e049db3ae7f679c96c430efddcbb4d25eb745074b1f0dcf70a423956ef1f2c.scope: Deactivated successfully.
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: 2025-11-25T15:53:05.839+0000 7f25defb8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: 2025-11-25T15:53:05.839+0000 7f25defb8640 -1 AuthRegistry(0x7f25d8066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: 2025-11-25T15:53:05.841+0000 7f25defb8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: 2025-11-25T15:53:05.841+0000 7f25defb8640 -1 AuthRegistry(0x7f25defb7000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: 2025-11-25T15:53:05.842+0000 7f25dcd2d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: 2025-11-25T15:53:05.842+0000 7f25defb8640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 10:53:05 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-crash-compute-0[83081]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 25 10:53:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:06 np0005535469 python3[83282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.138370758 +0000 UTC m=+0.052881570 container create b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 10:53:06 np0005535469 systemd[1]: Started libpod-conmon-b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0.scope.
Nov 25 10:53:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:06 np0005535469 podman[83315]: 2025-11-25 15:53:06.19582406 +0000 UTC m=+0.051281983 container create a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75 (image=quay.io/ceph/ceph:v18, name=gracious_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.204770783 +0000 UTC m=+0.119281645 container init b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.212558841 +0000 UTC m=+0.127069653 container start b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.215714154 +0000 UTC m=+0.130225016 container attach b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:53:06 np0005535469 boring_kepler[83328]: 167 167
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.122337418 +0000 UTC m=+0.036848250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:06 np0005535469 systemd[1]: libpod-b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0.scope: Deactivated successfully.
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.218140024 +0000 UTC m=+0.132650846 container died b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 25 10:53:06 np0005535469 systemd[1]: Started libpod-conmon-a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75.scope.
Nov 25 10:53:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d953fa2293cff7f0b7c3272d019505ca13af0bdc7bb9394fa441188cc6b7f11d-merged.mount: Deactivated successfully.
Nov 25 10:53:06 np0005535469 podman[83301]: 2025-11-25 15:53:06.25145531 +0000 UTC m=+0.165966132 container remove b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:53:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4cb7fec7617b29297d129a72b54ef39479400546f1612134f95381177e78e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4cb7fec7617b29297d129a72b54ef39479400546f1612134f95381177e78e3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa4cb7fec7617b29297d129a72b54ef39479400546f1612134f95381177e78e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:06 np0005535469 systemd[1]: libpod-conmon-b83af85f4962365781ad407e65a17efc04741c45fc23180d95fa69685e6e16b0.scope: Deactivated successfully.
Nov 25 10:53:06 np0005535469 podman[83315]: 2025-11-25 15:53:06.175342111 +0000 UTC m=+0.030800044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:06 np0005535469 podman[83315]: 2025-11-25 15:53:06.277817523 +0000 UTC m=+0.133275446 container init a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75 (image=quay.io/ceph/ceph:v18, name=gracious_williamson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:53:06 np0005535469 podman[83315]: 2025-11-25 15:53:06.286305571 +0000 UTC m=+0.141763464 container start a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75 (image=quay.io/ceph/ceph:v18, name=gracious_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:53:06 np0005535469 podman[83315]: 2025-11-25 15:53:06.28934499 +0000 UTC m=+0.144802913 container attach a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75 (image=quay.io/ceph/ceph:v18, name=gracious_williamson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:06 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:06 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:06 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mfjlcc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mfjlcc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: Deploying daemon mgr.compute-0.mfjlcc on compute-0
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/10425561' entity='client.admin' 
Nov 25 10:53:06 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:06 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:06 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 25 10:53:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/205161117' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 10:53:06 np0005535469 systemd[1]: Starting Ceph mgr.compute-0.mfjlcc for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:53:07 np0005535469 podman[83501]: 2025-11-25 15:53:07.15706673 +0000 UTC m=+0.058381932 container create 08c8059dc1d7f3e5ea3fc76e99d48319efc21a10667ec290b5a146337a571910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:53:07 np0005535469 podman[83501]: 2025-11-25 15:53:07.128258045 +0000 UTC m=+0.029573297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2ac14a89c1a8b6be1d7f5f6520d7014f3c391b9fca56bda29e03442717274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2ac14a89c1a8b6be1d7f5f6520d7014f3c391b9fca56bda29e03442717274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2ac14a89c1a8b6be1d7f5f6520d7014f3c391b9fca56bda29e03442717274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2ac14a89c1a8b6be1d7f5f6520d7014f3c391b9fca56bda29e03442717274/merged/var/lib/ceph/mgr/ceph-compute-0.mfjlcc supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:07 np0005535469 podman[83501]: 2025-11-25 15:53:07.247607212 +0000 UTC m=+0.148922434 container init 08c8059dc1d7f3e5ea3fc76e99d48319efc21a10667ec290b5a146337a571910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:07 np0005535469 podman[83501]: 2025-11-25 15:53:07.254611366 +0000 UTC m=+0.155926568 container start 08c8059dc1d7f3e5ea3fc76e99d48319efc21a10667ec290b5a146337a571910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 10:53:07 np0005535469 bash[83501]: 08c8059dc1d7f3e5ea3fc76e99d48319efc21a10667ec290b5a146337a571910
Nov 25 10:53:07 np0005535469 systemd[1]: Started Ceph mgr.compute-0.mfjlcc for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: pidfile_write: ignore empty --pid-file
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 367f7550-76b5-46eb-894d-c39ea17fe796 (Updating mgr deployment (+1 -> 2))
Nov 25 10:53:07 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 367f7550-76b5-46eb-894d-c39ea17fe796 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'alerts'
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'balancer'
Nov 25 10:53:07 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: 2025-11-25T15:53:07.701+0000 7f8363402140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/205161117' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/205161117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 10:53:07 np0005535469 gracious_williamson[83345]: set require_min_compat_client to mimic
Nov 25 10:53:07 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 25 10:53:07 np0005535469 systemd[1]: libpod-a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75.scope: Deactivated successfully.
Nov 25 10:53:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 10:53:07 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'cephadm'
Nov 25 10:53:07 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: 2025-11-25T15:53:07.949+0000 7f8363402140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 10:53:07 np0005535469 podman[83709]: 2025-11-25 15:53:07.964733787 +0000 UTC m=+0.063474527 container died a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75 (image=quay.io/ceph/ceph:v18, name=gracious_williamson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fa4cb7fec7617b29297d129a72b54ef39479400546f1612134f95381177e78e3-merged.mount: Deactivated successfully.
Nov 25 10:53:08 np0005535469 podman[83708]: 2025-11-25 15:53:08.112704343 +0000 UTC m=+0.210566329 container remove a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75 (image=quay.io/ceph/ceph:v18, name=gracious_williamson, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:08 np0005535469 systemd[1]: libpod-conmon-a4d01786f999a822f7a6f34b28b47dc2103097bcc499c8ce5ba0e2c65e29ae75.scope: Deactivated successfully.
Nov 25 10:53:08 np0005535469 podman[83779]: 2025-11-25 15:53:08.379797509 +0000 UTC m=+0.081644295 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:08 np0005535469 podman[83779]: 2025-11-25 15:53:08.487100402 +0000 UTC m=+0.188947168 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 10:53:08 np0005535469 python3[83857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:08 np0005535469 podman[83882]: 2025-11-25 15:53:08.770443615 +0000 UTC m=+0.042958175 container create fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f (image=quay.io/ceph/ceph:v18, name=lucid_fermi, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:08 np0005535469 systemd[1]: Started libpod-conmon-fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f.scope.
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 38c1111a-6507-487c-b7e0-869f2d874355 does not exist
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9098e9b8-dc31-41f4-8ddc-d3cd6c9d1515 does not exist
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 954c0a53-690f-459a-b8b5-35f7c375b1e2 does not exist
Nov 25 10:53:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6fbfbb830fa32e5096418d6d55757e7f1be3c025e522b116d99c1ae0cb7f98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6fbfbb830fa32e5096418d6d55757e7f1be3c025e522b116d99c1ae0cb7f98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6fbfbb830fa32e5096418d6d55757e7f1be3c025e522b116d99c1ae0cb7f98/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:08 np0005535469 podman[83882]: 2025-11-25 15:53:08.84842697 +0000 UTC m=+0.120941550 container init fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f (image=quay.io/ceph/ceph:v18, name=lucid_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:08 np0005535469 podman[83882]: 2025-11-25 15:53:08.752143333 +0000 UTC m=+0.024657893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:08 np0005535469 podman[83882]: 2025-11-25 15:53:08.855161262 +0000 UTC m=+0.127675822 container start fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f (image=quay.io/ceph/ceph:v18, name=lucid_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:08 np0005535469 podman[83882]: 2025-11-25 15:53:08.858932933 +0000 UTC m=+0.131447493 container attach fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f (image=quay.io/ceph/ceph:v18, name=lucid_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/205161117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 10:53:08 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 10:53:09 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:53:09 np0005535469 podman[84100]: 2025-11-25 15:53:09.430944461 +0000 UTC m=+0.042028690 container create 044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:09 np0005535469 systemd[1]: Started libpod-conmon-044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8.scope.
Nov 25 10:53:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:09 np0005535469 podman[84100]: 2025-11-25 15:53:09.409077124 +0000 UTC m=+0.020161353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:09 np0005535469 podman[84100]: 2025-11-25 15:53:09.693913787 +0000 UTC m=+0.304998006 container init 044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 10:53:09 np0005535469 podman[84100]: 2025-11-25 15:53:09.699753284 +0000 UTC m=+0.310837503 container start 044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:53:09 np0005535469 eloquent_dijkstra[84150]: 167 167
Nov 25 10:53:09 np0005535469 systemd[1]: libpod-044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8.scope: Deactivated successfully.
Nov 25 10:53:09 np0005535469 podman[84100]: 2025-11-25 15:53:09.796941236 +0000 UTC m=+0.408025485 container attach 044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 25 10:53:09 np0005535469 podman[84100]: 2025-11-25 15:53:09.797685635 +0000 UTC m=+0.408769854 container died 044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 10:53:09 np0005535469 ceph-mon[74985]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 10:53:09 np0005535469 ceph-mgr[75280]: [progress INFO root] Writing back 2 completed events
Nov 25 10:53:09 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:53:09 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:53:09 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'crash'
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:53:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-18d7a8c2543e0a73d2da7be7e01ba7666b06d8e23b13e7dbfa2fc71826801a17-merged.mount: Deactivated successfully.
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 podman[84100]: 2025-11-25 15:53:10.042132443 +0000 UTC m=+0.653216662 container remove 044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:53:10 np0005535469 systemd[1]: libpod-conmon-044335b02bd27ba3073a9c60c2adf42bf95289e8b9710d1ca5480c479d13d3c8.scope: Deactivated successfully.
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mavpeh (unknown last config time)...
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mavpeh (unknown last config time)...
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mavpeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mavpeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mavpeh on compute-0
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mavpeh on compute-0
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mgr[83520]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 10:53:10 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'dashboard'
Nov 25 10:53:10 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: 2025-11-25T15:53:10.291+0000 7f8363402140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Added host compute-0
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 25 10:53:10 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 lucid_fermi[83911]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 10:53:10 np0005535469 lucid_fermi[83911]: Scheduled mon update...
Nov 25 10:53:10 np0005535469 lucid_fermi[83911]: Scheduled mgr update...
Nov 25 10:53:10 np0005535469 lucid_fermi[83911]: Scheduled osd.default_drive_group update...
Nov 25 10:53:10 np0005535469 systemd[1]: libpod-fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f.scope: Deactivated successfully.
Nov 25 10:53:10 np0005535469 podman[83882]: 2025-11-25 15:53:10.424918238 +0000 UTC m=+1.697432798 container died fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f (image=quay.io/ceph/ceph:v18, name=lucid_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7b6fbfbb830fa32e5096418d6d55757e7f1be3c025e522b116d99c1ae0cb7f98-merged.mount: Deactivated successfully.
Nov 25 10:53:10 np0005535469 podman[83882]: 2025-11-25 15:53:10.498877645 +0000 UTC m=+1.771392225 container remove fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f (image=quay.io/ceph/ceph:v18, name=lucid_fermi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:10 np0005535469 systemd[1]: libpod-conmon-fe9bd8bd5b01faa03b715d53c127192f78c637efb251146df2db862f3bb0d01f.scope: Deactivated successfully.
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.704583043 +0000 UTC m=+0.048176656 container create e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:10 np0005535469 systemd[1]: Started libpod-conmon-e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758.scope.
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.677018652 +0000 UTC m=+0.020612305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:10 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.812817671 +0000 UTC m=+0.156411344 container init e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.8217095 +0000 UTC m=+0.165303103 container start e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 10:53:10 np0005535469 agitated_lewin[84439]: 167 167
Nov 25 10:53:10 np0005535469 systemd[1]: libpod-e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758.scope: Deactivated successfully.
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.827349111 +0000 UTC m=+0.170942734 container attach e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.828309657 +0000 UTC m=+0.171903250 container died e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 10:53:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-808ddb5b9fa4a1478a2cbfae926cf0f1032f42e7a6c891e9ea5105d86d100758-merged.mount: Deactivated successfully.
Nov 25 10:53:10 np0005535469 podman[84398]: 2025-11-25 15:53:10.86751872 +0000 UTC m=+0.211112303 container remove e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 10:53:10 np0005535469 systemd[1]: libpod-conmon-e6e5a5fe649d1eb29725c490c8de929b120db7e85be50324534cc8ffa3933758.scope: Deactivated successfully.
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:10 np0005535469 python3[84441]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:11.012232209 +0000 UTC m=+0.072679074 container create 229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097 (image=quay.io/ceph/ceph:v18, name=nostalgic_germain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: Reconfiguring mgr.compute-0.mavpeh (unknown last config time)...
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mavpeh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: Reconfiguring daemon mgr.compute-0.mavpeh on compute-0
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:10.961493426 +0000 UTC m=+0.021940321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:11 np0005535469 systemd[1]: Started libpod-conmon-229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097.scope.
Nov 25 10:53:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32a9c60b1879a34c26b3a2438d693a5f92e7295cc2fc19106109d65cfd38c41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32a9c60b1879a34c26b3a2438d693a5f92e7295cc2fc19106109d65cfd38c41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32a9c60b1879a34c26b3a2438d693a5f92e7295cc2fc19106109d65cfd38c41/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:11.095745403 +0000 UTC m=+0.156192258 container init 229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097 (image=quay.io/ceph/ceph:v18, name=nostalgic_germain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:11.105493504 +0000 UTC m=+0.165940359 container start 229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097 (image=quay.io/ceph/ceph:v18, name=nostalgic_germain, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:11.108777493 +0000 UTC m=+0.169224378 container attach 229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097 (image=quay.io/ceph/ceph:v18, name=nostalgic_germain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 10:53:11 np0005535469 podman[84670]: 2025-11-25 15:53:11.642628697 +0000 UTC m=+0.055670817 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 10:53:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229436841' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 10:53:11 np0005535469 nostalgic_germain[84525]: 
Nov 25 10:53:11 np0005535469 nostalgic_germain[84525]: {"fsid":"d82baeae-c742-50a4-b8f6-b5257c8a2c92","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":94,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T15:51:33.967034+0000","services":{}},"progress_events":{}}
Nov 25 10:53:11 np0005535469 podman[84670]: 2025-11-25 15:53:11.735091641 +0000 UTC m=+0.148133761 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:11 np0005535469 systemd[1]: libpod-229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097.scope: Deactivated successfully.
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:11.749618791 +0000 UTC m=+0.810065666 container died 229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097 (image=quay.io/ceph/ceph:v18, name=nostalgic_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:53:11 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'devicehealth'
Nov 25 10:53:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c32a9c60b1879a34c26b3a2438d693a5f92e7295cc2fc19106109d65cfd38c41-merged.mount: Deactivated successfully.
Nov 25 10:53:11 np0005535469 podman[84460]: 2025-11-25 15:53:11.816511119 +0000 UTC m=+0.876957974 container remove 229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097 (image=quay.io/ceph/ceph:v18, name=nostalgic_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:11 np0005535469 systemd[1]: libpod-conmon-229c31318794269e4fd30536767131804563e03f5e9671415e2e91c4bb8c3097.scope: Deactivated successfully.
Nov 25 10:53:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:12 np0005535469 ceph-mgr[83520]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 10:53:12 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 10:53:12 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: 2025-11-25T15:53:12.026+0000 7f8363402140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: Added host compute-0
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: Saving service mon spec with placement compute-0
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: Saving service mgr spec with placement compute-0
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: Saving service osd.default_drive_group spec with placement compute-0
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:12 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9ad1cb9b-811c-4524-ab07-f93c43402158 does not exist
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 10:53:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:12 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev ee37f1e2-0a0c-44a1-bdc7-ad6df7dd3d36 (Updating mgr deployment (-1 -> 1))
Nov 25 10:53:12 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.mfjlcc from compute-0 -- ports [8765]
Nov 25 10:53:12 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.mfjlcc from compute-0 -- ports [8765]
Nov 25 10:53:12 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 10:53:12 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 10:53:12 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]:  from numpy import show_config as show_numpy_config
Nov 25 10:53:12 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc[83516]: 2025-11-25T15:53:12.584+0000 7f8363402140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 10:53:12 np0005535469 ceph-mgr[83520]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 10:53:12 np0005535469 ceph-mgr[83520]: mgr[py] Loading python module 'influx'
Nov 25 10:53:12 np0005535469 systemd[1]: Stopping Ceph mgr.compute-0.mfjlcc for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:53:12 np0005535469 podman[84941]: 2025-11-25 15:53:12.81662186 +0000 UTC m=+0.077295548 container died 08c8059dc1d7f3e5ea3fc76e99d48319efc21a10667ec290b5a146337a571910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:53:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f5e2ac14a89c1a8b6be1d7f5f6520d7014f3c391b9fca56bda29e03442717274-merged.mount: Deactivated successfully.
Nov 25 10:53:12 np0005535469 podman[84941]: 2025-11-25 15:53:12.85568173 +0000 UTC m=+0.116355398 container remove 08c8059dc1d7f3e5ea3fc76e99d48319efc21a10667ec290b5a146337a571910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 10:53:12 np0005535469 bash[84941]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mfjlcc
Nov 25 10:53:12 np0005535469 systemd[1]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mgr.compute-0.mfjlcc.service: Main process exited, code=exited, status=143/n/a
Nov 25 10:53:13 np0005535469 systemd[1]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mgr.compute-0.mfjlcc.service: Failed with result 'exit-code'.
Nov 25 10:53:13 np0005535469 systemd[1]: Stopped Ceph mgr.compute-0.mfjlcc for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:53:13 np0005535469 systemd[1]: ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mgr.compute-0.mfjlcc.service: Consumed 6.368s CPU time.
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: Removing daemon mgr.compute-0.mfjlcc from compute-0 -- ports [8765]
Nov 25 10:53:13 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:13 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:13 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:13 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.mfjlcc
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.mfjlcc"} v 0) v1
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.mfjlcc"}]: dispatch
Nov 25 10:53:13 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.mfjlcc
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mfjlcc"}]': finished
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev ee37f1e2-0a0c-44a1-bdc7-ad6df7dd3d36 (Updating mgr deployment (-1 -> 1))
Nov 25 10:53:13 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event ee37f1e2-0a0c-44a1-bdc7-ad6df7dd3d36 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:13 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0ed3086b-6a94-4ef6-bc64-6d3e59cbec9d does not exist
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.mfjlcc"}]: dispatch
Nov 25 10:53:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mfjlcc"}]': finished
Nov 25 10:53:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.189741434 +0000 UTC m=+0.053036936 container create 182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:14 np0005535469 systemd[1]: Started libpod-conmon-182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa.scope.
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.164538217 +0000 UTC m=+0.027833809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.290904912 +0000 UTC m=+0.154200414 container init 182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_matsumoto, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.299713519 +0000 UTC m=+0.163009011 container start 182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:14 np0005535469 blissful_matsumoto[85192]: 167 167
Nov 25 10:53:14 np0005535469 systemd[1]: libpod-182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa.scope: Deactivated successfully.
Nov 25 10:53:14 np0005535469 conmon[85192]: conmon 182bf85a299614610fe7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa.scope/container/memory.events
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.309725898 +0000 UTC m=+0.173021400 container attach 182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.310482348 +0000 UTC m=+0.173777850 container died 182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 10:53:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c548ebc88cda74b110e87db4031dafce36a03df92cce624545287a9b49e14269-merged.mount: Deactivated successfully.
Nov 25 10:53:14 np0005535469 podman[85175]: 2025-11-25 15:53:14.342937861 +0000 UTC m=+0.206233363 container remove 182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_matsumoto, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 10:53:14 np0005535469 systemd[1]: libpod-conmon-182bf85a299614610fe74ba12cca6ea764c83f5ed5d4e587585d9db993bd7cfa.scope: Deactivated successfully.
Nov 25 10:53:14 np0005535469 podman[85215]: 2025-11-25 15:53:14.51301912 +0000 UTC m=+0.055807861 container create 464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:14 np0005535469 systemd[1]: Started libpod-conmon-464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223.scope.
Nov 25 10:53:14 np0005535469 podman[85215]: 2025-11-25 15:53:14.490039283 +0000 UTC m=+0.032828054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/344981ce001a711e12f5ff31bf9f0bdbfb1874674d40ac78223e6bc16d51345e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/344981ce001a711e12f5ff31bf9f0bdbfb1874674d40ac78223e6bc16d51345e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/344981ce001a711e12f5ff31bf9f0bdbfb1874674d40ac78223e6bc16d51345e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/344981ce001a711e12f5ff31bf9f0bdbfb1874674d40ac78223e6bc16d51345e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/344981ce001a711e12f5ff31bf9f0bdbfb1874674d40ac78223e6bc16d51345e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:14 np0005535469 podman[85215]: 2025-11-25 15:53:14.615481744 +0000 UTC m=+0.158270505 container init 464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:53:14 np0005535469 podman[85215]: 2025-11-25 15:53:14.627150177 +0000 UTC m=+0.169938948 container start 464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:14 np0005535469 podman[85215]: 2025-11-25 15:53:14.634000511 +0000 UTC m=+0.176789332 container attach 464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:15 np0005535469 ceph-mgr[75280]: [progress INFO root] Writing back 3 completed events
Nov 25 10:53:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 10:53:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:15 np0005535469 ceph-mon[74985]: Removing key for mgr.compute-0.mfjlcc
Nov 25 10:53:15 np0005535469 cranky_kapitsa[85231]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:53:15 np0005535469 cranky_kapitsa[85231]: --> relative data size: 1.0
Nov 25 10:53:15 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 10:53:15 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ac347129-f7f8-46b6-8e4f-7d01f9efee15
Nov 25 10:53:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15"} v 0) v1
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343389048' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15"}]: dispatch
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1343389048' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15"}]': finished
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:16 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1343389048' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15"}]: dispatch
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1343389048' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15"}]': finished
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 10:53:16 np0005535469 lvm[85293]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 10:53:16 np0005535469 lvm[85293]: VG ceph_vg0 finished
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 10:53:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424103459' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: stderr: got monmap epoch 1
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: --> Creating keyring file for osd.0
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 25 10:53:16 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ac347129-f7f8-46b6-8e4f-7d01f9efee15 --setuser ceph --setgroup ceph
Nov 25 10:53:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:18 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 10:53:18 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 10:53:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:19 np0005535469 ceph-mon[74985]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 10:53:19 np0005535469 ceph-mon[74985]: Cluster is now healthy
Nov 25 10:53:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:23 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:17.018+0000 7fcb3e6e5740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:23 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:17.018+0000 7fcb3e6e5740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:23 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:17.018+0000 7fcb3e6e5740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:23 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:17.018+0000 7fcb3e6e5740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 25 10:53:23 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 29af116d-8685-452a-b634-7ce2a4974adc
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "29af116d-8685-452a-b634-7ce2a4974adc"} v 0) v1
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3405753359' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29af116d-8685-452a-b634-7ce2a4974adc"}]: dispatch
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3405753359' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29af116d-8685-452a-b634-7ce2a4974adc"}]': finished
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:24 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:24 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 10:53:24 np0005535469 lvm[86234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 10:53:24 np0005535469 lvm[86234]: VG ceph_vg1 finished
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:24 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 25 10:53:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 10:53:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846572933' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 10:53:25 np0005535469 cranky_kapitsa[85231]: stderr: got monmap epoch 1
Nov 25 10:53:25 np0005535469 cranky_kapitsa[85231]: --> Creating keyring file for osd.1
Nov 25 10:53:25 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 25 10:53:25 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 25 10:53:25 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 29af116d-8685-452a-b634-7ce2a4974adc --setuser ceph --setgroup ceph
Nov 25 10:53:25 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3405753359' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29af116d-8685-452a-b634-7ce2a4974adc"}]: dispatch
Nov 25 10:53:25 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3405753359' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29af116d-8685-452a-b634-7ce2a4974adc"}]': finished
Nov 25 10:53:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:25.368+0000 7fbd617ad740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:25.368+0000 7fbd617ad740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:25.368+0000 7fbd617ad740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:25.369+0000 7fbd617ad740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 10:53:28 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9ec96548-6af1-446f-a950-cee8e59f7654
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9ec96548-6af1-446f-a950-cee8e59f7654"} v 0) v1
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4027122938' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ec96548-6af1-446f-a950-cee8e59f7654"}]: dispatch
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4027122938' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ec96548-6af1-446f-a950-cee8e59f7654"}]': finished
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:53:29 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:29 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:29 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:53:29 np0005535469 lvm[87171]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 10:53:29 np0005535469 lvm[87171]: VG ceph_vg2 finished
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030249997' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: stderr: got monmap epoch 1
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: --> Creating keyring file for osd.2
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 25 10:53:29 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9ec96548-6af1-446f-a950-cee8e59f7654 --setuser ceph --setgroup ceph
Nov 25 10:53:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4027122938' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ec96548-6af1-446f-a950-cee8e59f7654"}]: dispatch
Nov 25 10:53:29 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4027122938' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ec96548-6af1-446f-a950-cee8e59f7654"}]': finished
Nov 25 10:53:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:29.967+0000 7f85cf5bd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:29.967+0000 7f85cf5bd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:29.967+0000 7f85cf5bd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: stderr: 2025-11-25T15:53:29.968+0000 7f85cf5bd740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 10:53:32 np0005535469 cranky_kapitsa[85231]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 25 10:53:32 np0005535469 systemd[1]: libpod-464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223.scope: Deactivated successfully.
Nov 25 10:53:32 np0005535469 systemd[1]: libpod-464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223.scope: Consumed 5.817s CPU time.
Nov 25 10:53:32 np0005535469 podman[88076]: 2025-11-25 15:53:32.40519466 +0000 UTC m=+0.024565750 container died 464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-344981ce001a711e12f5ff31bf9f0bdbfb1874674d40ac78223e6bc16d51345e-merged.mount: Deactivated successfully.
Nov 25 10:53:32 np0005535469 podman[88076]: 2025-11-25 15:53:32.459767297 +0000 UTC m=+0.079138367 container remove 464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:32 np0005535469 systemd[1]: libpod-conmon-464eb2748c3df86d9da5b9f9ffe6ac3edb97e7fc4a0b511b18e1d6412727f223.scope: Deactivated successfully.
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.172554108 +0000 UTC m=+0.063450025 container create c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 10:53:33 np0005535469 systemd[1]: Started libpod-conmon-c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12.scope.
Nov 25 10:53:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.149083308 +0000 UTC m=+0.039979355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.243445233 +0000 UTC m=+0.134341170 container init c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.254020687 +0000 UTC m=+0.144916614 container start c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.258260931 +0000 UTC m=+0.149156938 container attach c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 10:53:33 np0005535469 affectionate_chaum[88246]: 167 167
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.259974866 +0000 UTC m=+0.150870803 container died c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chaum, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:53:33 np0005535469 systemd[1]: libpod-c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12.scope: Deactivated successfully.
Nov 25 10:53:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2bc317cf7c197b79ae441a6a57dabc379f92679015b8ec94cd281b9af8685f3f-merged.mount: Deactivated successfully.
Nov 25 10:53:33 np0005535469 podman[88230]: 2025-11-25 15:53:33.322685102 +0000 UTC m=+0.213581069 container remove c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 10:53:33 np0005535469 systemd[1]: libpod-conmon-c90bec602eb44f7747f98195f1aeacb2352eb0de3f4a0b443298e9aa8d4a4c12.scope: Deactivated successfully.
Nov 25 10:53:33 np0005535469 podman[88269]: 2025-11-25 15:53:33.508418272 +0000 UTC m=+0.050378955 container create 0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:33 np0005535469 systemd[1]: Started libpod-conmon-0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d.scope.
Nov 25 10:53:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d31bc4cf1e85d2b0bfa7866208dd5f5c232c70882b057246755d99713497d8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d31bc4cf1e85d2b0bfa7866208dd5f5c232c70882b057246755d99713497d8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d31bc4cf1e85d2b0bfa7866208dd5f5c232c70882b057246755d99713497d8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d31bc4cf1e85d2b0bfa7866208dd5f5c232c70882b057246755d99713497d8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:33 np0005535469 podman[88269]: 2025-11-25 15:53:33.578802413 +0000 UTC m=+0.120763116 container init 0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:33 np0005535469 podman[88269]: 2025-11-25 15:53:33.487893711 +0000 UTC m=+0.029854424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:33 np0005535469 podman[88269]: 2025-11-25 15:53:33.58612494 +0000 UTC m=+0.128085623 container start 0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:53:33 np0005535469 podman[88269]: 2025-11-25 15:53:33.589264274 +0000 UTC m=+0.131224947 container attach 0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:53:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]: {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:    "0": [
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:        {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "devices": [
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "/dev/loop3"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            ],
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_name": "ceph_lv0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_size": "21470642176",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "name": "ceph_lv0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "tags": {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cluster_name": "ceph",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.crush_device_class": "",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.encrypted": "0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osd_id": "0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.type": "block",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.vdo": "0"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            },
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "type": "block",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "vg_name": "ceph_vg0"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:        }
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:    ],
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:    "1": [
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:        {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "devices": [
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "/dev/loop4"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            ],
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_name": "ceph_lv1",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_size": "21470642176",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "name": "ceph_lv1",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "tags": {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cluster_name": "ceph",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.crush_device_class": "",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.encrypted": "0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osd_id": "1",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.type": "block",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.vdo": "0"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            },
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "type": "block",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "vg_name": "ceph_vg1"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:        }
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:    ],
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:    "2": [
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:        {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "devices": [
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "/dev/loop5"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            ],
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_name": "ceph_lv2",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_size": "21470642176",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "name": "ceph_lv2",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "tags": {
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.cluster_name": "ceph",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.crush_device_class": "",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.encrypted": "0",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osd_id": "2",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.type": "block",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:                "ceph.vdo": "0"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            },
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "type": "block",
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:            "vg_name": "ceph_vg2"
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:        }
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]:    ]
Nov 25 10:53:34 np0005535469 hardcore_rhodes[88286]: }
Nov 25 10:53:34 np0005535469 systemd[1]: libpod-0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d.scope: Deactivated successfully.
Nov 25 10:53:34 np0005535469 podman[88295]: 2025-11-25 15:53:34.420041797 +0000 UTC m=+0.023032881 container died 0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0d31bc4cf1e85d2b0bfa7866208dd5f5c232c70882b057246755d99713497d8f-merged.mount: Deactivated successfully.
Nov 25 10:53:34 np0005535469 podman[88295]: 2025-11-25 15:53:34.468686803 +0000 UTC m=+0.071677887 container remove 0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:53:34 np0005535469 systemd[1]: libpod-conmon-0f2ecf0cd67fa96b13139a8597cb709f2aebdb62834619f49a4892c2bc3d687d.scope: Deactivated successfully.
Nov 25 10:53:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 25 10:53:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 10:53:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:34 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 25 10:53:34 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 25 10:53:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.075631832 +0000 UTC m=+0.044317032 container create 28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:53:35 np0005535469 systemd[1]: Started libpod-conmon-28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1.scope.
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.054493673 +0000 UTC m=+0.023178843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.167793297 +0000 UTC m=+0.136478477 container init 28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.17496216 +0000 UTC m=+0.143647320 container start 28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 10:53:35 np0005535469 bold_shockley[88467]: 167 167
Nov 25 10:53:35 np0005535469 systemd[1]: libpod-28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1.scope: Deactivated successfully.
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.180430027 +0000 UTC m=+0.149115237 container attach 28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.181037523 +0000 UTC m=+0.149722693 container died 28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:53:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-db4c0e9426c6c06384c7839d8f87b51d2c06eb25c20d753bc1aac42c846f3e1c-merged.mount: Deactivated successfully.
Nov 25 10:53:35 np0005535469 podman[88451]: 2025-11-25 15:53:35.231689214 +0000 UTC m=+0.200374374 container remove 28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shockley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:53:35 np0005535469 systemd[1]: libpod-conmon-28f8e4160bed853d00d1d6613f1513d68c444de8b3259a36f4254be433865ab1.scope: Deactivated successfully.
Nov 25 10:53:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:35 np0005535469 podman[88499]: 2025-11-25 15:53:35.536756201 +0000 UTC m=+0.040159630 container create e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:53:35 np0005535469 systemd[1]: Started libpod-conmon-e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea.scope.
Nov 25 10:53:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c79f4de5dd905c24858634592a2b8ac69bf2ef3d544e99209e382aadd99499c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c79f4de5dd905c24858634592a2b8ac69bf2ef3d544e99209e382aadd99499c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c79f4de5dd905c24858634592a2b8ac69bf2ef3d544e99209e382aadd99499c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c79f4de5dd905c24858634592a2b8ac69bf2ef3d544e99209e382aadd99499c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c79f4de5dd905c24858634592a2b8ac69bf2ef3d544e99209e382aadd99499c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:35 np0005535469 podman[88499]: 2025-11-25 15:53:35.612185777 +0000 UTC m=+0.115589226 container init e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:53:35 np0005535469 podman[88499]: 2025-11-25 15:53:35.516459796 +0000 UTC m=+0.019863255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:35 np0005535469 podman[88499]: 2025-11-25 15:53:35.622220037 +0000 UTC m=+0.125623466 container start e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:35 np0005535469 podman[88499]: 2025-11-25 15:53:35.625927587 +0000 UTC m=+0.129331016 container attach e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:53:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:36 np0005535469 ceph-mon[74985]: Deploying daemon osd.0 on compute-0
Nov 25 10:53:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test[88515]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 10:53:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test[88515]:                            [--no-systemd] [--no-tmpfs]
Nov 25 10:53:36 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test[88515]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 10:53:36 np0005535469 systemd[1]: libpod-e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea.scope: Deactivated successfully.
Nov 25 10:53:36 np0005535469 podman[88499]: 2025-11-25 15:53:36.26472219 +0000 UTC m=+0.768125619 container died e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7c79f4de5dd905c24858634592a2b8ac69bf2ef3d544e99209e382aadd99499c-merged.mount: Deactivated successfully.
Nov 25 10:53:36 np0005535469 podman[88499]: 2025-11-25 15:53:36.37634927 +0000 UTC m=+0.879752699 container remove e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate-test, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:53:36 np0005535469 systemd[1]: libpod-conmon-e08ac5fc6a724e48ee33a61c766e647ee0ebcc194a79a840d659c1533d87e1ea.scope: Deactivated successfully.
Nov 25 10:53:36 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:36 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:36 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:36 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:37 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:37 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:37 np0005535469 systemd[1]: Starting Ceph osd.0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:53:37 np0005535469 podman[88674]: 2025-11-25 15:53:37.509838994 +0000 UTC m=+0.056251682 container create 31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:53:37 np0005535469 podman[88674]: 2025-11-25 15:53:37.478425901 +0000 UTC m=+0.024838589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedb01da91beed0adc6c0691116511e9e3ebfe11a00c0d5c795f05ccb8d0465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedb01da91beed0adc6c0691116511e9e3ebfe11a00c0d5c795f05ccb8d0465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedb01da91beed0adc6c0691116511e9e3ebfe11a00c0d5c795f05ccb8d0465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedb01da91beed0adc6c0691116511e9e3ebfe11a00c0d5c795f05ccb8d0465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bedb01da91beed0adc6c0691116511e9e3ebfe11a00c0d5c795f05ccb8d0465/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:37 np0005535469 podman[88674]: 2025-11-25 15:53:37.669766492 +0000 UTC m=+0.216179180 container init 31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:53:37 np0005535469 podman[88674]: 2025-11-25 15:53:37.677743306 +0000 UTC m=+0.224155984 container start 31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:53:37 np0005535469 podman[88674]: 2025-11-25 15:53:37.697370383 +0000 UTC m=+0.243783101 container attach 31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 10:53:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 10:53:38 np0005535469 bash[88674]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 10:53:38 np0005535469 bash[88674]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 10:53:38 np0005535469 bash[88674]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 10:53:38 np0005535469 bash[88674]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:38 np0005535469 bash[88674]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 10:53:38 np0005535469 bash[88674]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 10:53:38 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate[88690]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 10:53:38 np0005535469 bash[88674]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 10:53:38 np0005535469 systemd[1]: libpod-31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7.scope: Deactivated successfully.
Nov 25 10:53:38 np0005535469 podman[88674]: 2025-11-25 15:53:38.720296909 +0000 UTC m=+1.266709577 container died 31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:38 np0005535469 systemd[1]: libpod-31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7.scope: Consumed 1.043s CPU time.
Nov 25 10:53:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5bedb01da91beed0adc6c0691116511e9e3ebfe11a00c0d5c795f05ccb8d0465-merged.mount: Deactivated successfully.
Nov 25 10:53:38 np0005535469 podman[88674]: 2025-11-25 15:53:38.818870557 +0000 UTC m=+1.365283225 container remove 31b9c6b19658b2e65910852037b64858c7942adc7837ea7e3ab089356f9547b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 10:53:39 np0005535469 podman[88871]: 2025-11-25 15:53:39.033493363 +0000 UTC m=+0.060356842 container create f5562c4a2a65e9b4246c52fd20311793457306d1e64844a899742ba5563cf653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 10:53:39 np0005535469 podman[88871]: 2025-11-25 15:53:38.997030383 +0000 UTC m=+0.023893882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d26e6b7b198eeae0febfefd281476877d46764c8ef08868288611bac6c1dad9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d26e6b7b198eeae0febfefd281476877d46764c8ef08868288611bac6c1dad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d26e6b7b198eeae0febfefd281476877d46764c8ef08868288611bac6c1dad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d26e6b7b198eeae0febfefd281476877d46764c8ef08868288611bac6c1dad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d26e6b7b198eeae0febfefd281476877d46764c8ef08868288611bac6c1dad9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:39 np0005535469 podman[88871]: 2025-11-25 15:53:39.136898621 +0000 UTC m=+0.163762120 container init f5562c4a2a65e9b4246c52fd20311793457306d1e64844a899742ba5563cf653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:53:39 np0005535469 podman[88871]: 2025-11-25 15:53:39.142291246 +0000 UTC m=+0.169154735 container start f5562c4a2a65e9b4246c52fd20311793457306d1e64844a899742ba5563cf653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 10:53:39 np0005535469 bash[88871]: f5562c4a2a65e9b4246c52fd20311793457306d1e64844a899742ba5563cf653
Nov 25 10:53:39 np0005535469 systemd[1]: Started Ceph osd.0 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: pidfile_write: ignore empty --pid-file
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f64807800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f64807800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f64807800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f64807800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f6563f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f6563f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f6563f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f6563f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f6563f800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f64807800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: load: jerasure load: lrc 
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 10:53:39 np0005535469 podman[89053]: 2025-11-25 15:53:39.900337894 +0000 UTC m=+0.040306504 container create be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:53:39
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] No pools available
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:39 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 10:53:39 np0005535469 podman[89053]: 2025-11-25 15:53:39.879196986 +0000 UTC m=+0.019165616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:53:39 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:53:40 np0005535469 systemd[1]: Started libpod-conmon-be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22.scope.
Nov 25 10:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:53:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:40 np0005535469 podman[89053]: 2025-11-25 15:53:40.067142046 +0000 UTC m=+0.207110656 container init be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:40 np0005535469 podman[89053]: 2025-11-25 15:53:40.074663198 +0000 UTC m=+0.214631808 container start be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:53:40 np0005535469 compassionate_bartik[89073]: 167 167
Nov 25 10:53:40 np0005535469 systemd[1]: libpod-be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22.scope: Deactivated successfully.
Nov 25 10:53:40 np0005535469 conmon[89073]: conmon be9895aa654e46a63bb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22.scope/container/memory.events
Nov 25 10:53:40 np0005535469 podman[89053]: 2025-11-25 15:53:40.088475519 +0000 UTC m=+0.228444149 container attach be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 10:53:40 np0005535469 podman[89053]: 2025-11-25 15:53:40.089036764 +0000 UTC m=+0.229005374 container died be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dd6f9fdc1cd8909fff8cccf2672d9536dbc6b719c71e910a08c8d4286798c229-merged.mount: Deactivated successfully.
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c0c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs mount
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs mount shared_bdev_used = 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Git sha 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DB SUMMARY
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DB Session ID:  9FPOSTXVLHUR03TTC2ZW
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                     Options.env: 0x563f65691c70
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                Options.info_log: 0x563f6488e8a0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.write_buffer_manager: 0x563f6579a460
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.row_cache: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                              Options.wal_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.wal_compression: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_background_jobs: 4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Compression algorithms supported:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kZSTD supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kXpressCompression supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kZlibCompression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 podman[89053]: 2025-11-25 15:53:40.252200378 +0000 UTC m=+0.392168988 container remove be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 systemd[1]: libpod-conmon-be9895aa654e46a63bb131becbbb668eddcd84ac34b8b0072cb6ac842964ab22.scope: Deactivated successfully.
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f0083ec1-31c4-406c-aa3d-7686bf4b6293
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020264277, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020264482, "job": 1, "event": "recovery_finished"}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: freelist init
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: freelist _read_cfg
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs umount
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: Deploying daemon osd.1 on compute-0
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bdev(0x563f656c1400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs mount
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluefs mount shared_bdev_used = 4718592
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Git sha 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DB SUMMARY
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DB Session ID:  9FPOSTXVLHUR03TTC2ZX
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                     Options.env: 0x563f65842b60
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                Options.info_log: 0x563f6488e620
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.write_buffer_manager: 0x563f6579a460
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.row_cache: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                              Options.wal_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.wal_compression: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_background_jobs: 4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Compression algorithms supported:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kZSTD supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kXpressCompression supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kZlibCompression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f6487b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488ea20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f6488e380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563f6487b090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f0083ec1-31c4-406c-aa3d-7686bf4b6293
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020542505, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020561255, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086020, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f0083ec1-31c4-406c-aa3d-7686bf4b6293", "db_session_id": "9FPOSTXVLHUR03TTC2ZX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020571955, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086020, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f0083ec1-31c4-406c-aa3d-7686bf4b6293", "db_session_id": "9FPOSTXVLHUR03TTC2ZX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020590170, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086020, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f0083ec1-31c4-406c-aa3d-7686bf4b6293", "db_session_id": "9FPOSTXVLHUR03TTC2ZX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086020593277, "job": 1, "event": "recovery_finished"}
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 10:53:40 np0005535469 podman[89483]: 2025-11-25 15:53:40.633770111 +0000 UTC m=+0.063628561 container create 1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563f649e8000
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: DB pointer 0x563f65783a00
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 460.80 MB usag
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: _get_class not permitted to load lua
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: _get_class not permitted to load sdk
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: _get_class not permitted to load test_remote_reads
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 load_pgs
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 load_pgs opened 0 pgs
Nov 25 10:53:40 np0005535469 ceph-osd[88890]: osd.0 0 log_to_monitors true
Nov 25 10:53:40 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0[88886]: 2025-11-25T15:53:40.679+0000 7f8b9990d740 -1 osd.0 0 log_to_monitors true
Nov 25 10:53:40 np0005535469 systemd[1]: Started libpod-conmon-1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5.scope.
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 25 10:53:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 10:53:40 np0005535469 podman[89483]: 2025-11-25 15:53:40.598606546 +0000 UTC m=+0.028465016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f2d7507564ed489df483bb060a3b4f6df16f714ef1e30f446e15094335130/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f2d7507564ed489df483bb060a3b4f6df16f714ef1e30f446e15094335130/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f2d7507564ed489df483bb060a3b4f6df16f714ef1e30f446e15094335130/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f2d7507564ed489df483bb060a3b4f6df16f714ef1e30f446e15094335130/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f2d7507564ed489df483bb060a3b4f6df16f714ef1e30f446e15094335130/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:40 np0005535469 podman[89483]: 2025-11-25 15:53:40.727432937 +0000 UTC m=+0.157291487 container init 1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 10:53:40 np0005535469 podman[89483]: 2025-11-25 15:53:40.734416875 +0000 UTC m=+0.164275325 container start 1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:40 np0005535469 podman[89483]: 2025-11-25 15:53:40.753194569 +0000 UTC m=+0.183053049 container attach 1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:41 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test[89532]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 10:53:41 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test[89532]:                            [--no-systemd] [--no-tmpfs]
Nov 25 10:53:41 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test[89532]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 10:53:41 np0005535469 systemd[1]: libpod-1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5.scope: Deactivated successfully.
Nov 25 10:53:41 np0005535469 podman[89483]: 2025-11-25 15:53:41.373445025 +0000 UTC m=+0.803303485 container died 1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:53:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:53:41 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:41 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:41 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:53:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b19f2d7507564ed489df483bb060a3b4f6df16f714ef1e30f446e15094335130-merged.mount: Deactivated successfully.
Nov 25 10:53:41 np0005535469 podman[89483]: 2025-11-25 15:53:41.556857842 +0000 UTC m=+0.986716312 container remove 1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:41 np0005535469 systemd[1]: libpod-conmon-1589330b6e9fff8c1238ad6b4faba74648bbdeae445c2e92f16e83bec0525ca5.scope: Deactivated successfully.
Nov 25 10:53:41 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 10:53:41 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 10:53:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:42 np0005535469 python3[89588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:53:42 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:42 np0005535469 podman[89596]: 2025-11-25 15:53:42.192338797 +0000 UTC m=+0.069624232 container create e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981 (image=quay.io/ceph/ceph:v18, name=ecstatic_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:53:42 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:42 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:42 np0005535469 podman[89596]: 2025-11-25 15:53:42.150663977 +0000 UTC m=+0.027949432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:42 np0005535469 systemd[1]: Started libpod-conmon-e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981.scope.
Nov 25 10:53:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88ad8f6320d7965ae2a1c81e1756f851f928a96ab9e1964e566301bbc843ab3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88ad8f6320d7965ae2a1c81e1756f851f928a96ab9e1964e566301bbc843ab3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88ad8f6320d7965ae2a1c81e1756f851f928a96ab9e1964e566301bbc843ab3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:42 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:42 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:42 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0 done with init, starting boot process
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0 start_boot
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 10:53:42 np0005535469 ceph-osd[88890]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 25 10:53:42 np0005535469 podman[89596]: 2025-11-25 15:53:42.590353081 +0000 UTC m=+0.467638546 container init e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981 (image=quay.io/ceph/ceph:v18, name=ecstatic_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 25 10:53:42 np0005535469 podman[89596]: 2025-11-25 15:53:42.598339876 +0000 UTC m=+0.475625321 container start e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981 (image=quay.io/ceph/ceph:v18, name=ecstatic_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 10:53:42 np0005535469 systemd[1]: Starting Ceph osd.1 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:42 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:53:42 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:42 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:53:42 np0005535469 podman[89596]: 2025-11-25 15:53:42.831054058 +0000 UTC m=+0.708339513 container attach e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981 (image=quay.io/ceph/ceph:v18, name=ecstatic_nash, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:42 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:42 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 10:53:42 np0005535469 ceph-mon[74985]: from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 10:53:43 np0005535469 podman[89757]: 2025-11-25 15:53:42.987594824 +0000 UTC m=+0.023039369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 10:53:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887766432' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 10:53:43 np0005535469 ecstatic_nash[89649]: 
Nov 25 10:53:43 np0005535469 ecstatic_nash[89649]: {"fsid":"d82baeae-c742-50a4-b8f6-b5257c8a2c92","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764086009,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T15:53:41.930601+0000","services":{}},"progress_events":{}}
Nov 25 10:53:43 np0005535469 systemd[1]: libpod-e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981.scope: Deactivated successfully.
Nov 25 10:53:43 np0005535469 conmon[89649]: conmon e84fb594a565695772db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981.scope/container/memory.events
Nov 25 10:53:43 np0005535469 podman[89757]: 2025-11-25 15:53:43.232115834 +0000 UTC m=+0.267560359 container create 27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:43 np0005535469 podman[89596]: 2025-11-25 15:53:43.526795271 +0000 UTC m=+1.404080736 container died e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981 (image=quay.io/ceph/ceph:v18, name=ecstatic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bf3cb3900473f6150a3790e0630982c7c5b262b370cfc8dd29e5969c6bbcd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bf3cb3900473f6150a3790e0630982c7c5b262b370cfc8dd29e5969c6bbcd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bf3cb3900473f6150a3790e0630982c7c5b262b370cfc8dd29e5969c6bbcd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bf3cb3900473f6150a3790e0630982c7c5b262b370cfc8dd29e5969c6bbcd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bf3cb3900473f6150a3790e0630982c7c5b262b370cfc8dd29e5969c6bbcd0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:43 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:43 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b88ad8f6320d7965ae2a1c81e1756f851f928a96ab9e1964e566301bbc843ab3-merged.mount: Deactivated successfully.
Nov 25 10:53:44 np0005535469 ceph-mon[74985]: from='osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 10:53:44 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:44 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:44 np0005535469 podman[89772]: 2025-11-25 15:53:44.978138757 +0000 UTC m=+1.766346250 container remove e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981 (image=quay.io/ceph/ceph:v18, name=ecstatic_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:53:44 np0005535469 systemd[1]: libpod-conmon-e84fb594a565695772db5bfc9d6549973f5bbf9b326756841adb873ac577d981.scope: Deactivated successfully.
Nov 25 10:53:45 np0005535469 podman[89757]: 2025-11-25 15:53:45.073582752 +0000 UTC m=+2.109027287 container init 27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 10:53:45 np0005535469 podman[89757]: 2025-11-25 15:53:45.081429723 +0000 UTC m=+2.116874258 container start 27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:45 np0005535469 podman[89757]: 2025-11-25 15:53:45.366832281 +0000 UTC m=+2.402276806 container attach 27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:45 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:45 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 10:53:46 np0005535469 bash[89757]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 10:53:46 np0005535469 bash[89757]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 10:53:46 np0005535469 bash[89757]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 10:53:46 np0005535469 bash[89757]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:46 np0005535469 bash[89757]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 10:53:46 np0005535469 bash[89757]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 10:53:46 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate[89784]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 10:53:46 np0005535469 bash[89757]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 10:53:46 np0005535469 systemd[1]: libpod-27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315.scope: Deactivated successfully.
Nov 25 10:53:46 np0005535469 podman[89757]: 2025-11-25 15:53:46.091274026 +0000 UTC m=+3.126718551 container died 27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 10:53:46 np0005535469 systemd[1]: libpod-27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315.scope: Consumed 1.021s CPU time.
Nov 25 10:53:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-47bf3cb3900473f6150a3790e0630982c7c5b262b370cfc8dd29e5969c6bbcd0-merged.mount: Deactivated successfully.
Nov 25 10:53:46 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:46 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:46 np0005535469 podman[89757]: 2025-11-25 15:53:46.947498843 +0000 UTC m=+3.982943368 container remove 27bce5d739f1e96dbd8d5d01f0c0ffc840d9861916c12792b66ebff7bad40315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 10:53:47 np0005535469 podman[89971]: 2025-11-25 15:53:47.145226505 +0000 UTC m=+0.027223593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:47 np0005535469 podman[89971]: 2025-11-25 15:53:47.274879419 +0000 UTC m=+0.156876487 container create 6e8437ef0c777c7e484f35f0916fa270636414f0bf5d76af32cc6aaed2babd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:53:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3d0e888ee173427265fe499bf737f302ab2ded6a2f200fa561caae854ffd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3d0e888ee173427265fe499bf737f302ab2ded6a2f200fa561caae854ffd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3d0e888ee173427265fe499bf737f302ab2ded6a2f200fa561caae854ffd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3d0e888ee173427265fe499bf737f302ab2ded6a2f200fa561caae854ffd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf3d0e888ee173427265fe499bf737f302ab2ded6a2f200fa561caae854ffd1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:47 np0005535469 podman[89971]: 2025-11-25 15:53:47.443602991 +0000 UTC m=+0.325600079 container init 6e8437ef0c777c7e484f35f0916fa270636414f0bf5d76af32cc6aaed2babd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 25 10:53:47 np0005535469 podman[89971]: 2025-11-25 15:53:47.448960065 +0000 UTC m=+0.330957133 container start 6e8437ef0c777c7e484f35f0916fa270636414f0bf5d76af32cc6aaed2babd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: pidfile_write: ignore empty --pid-file
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618da6dd800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618da6dd800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618da6dd800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618da6dd800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618db515800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618db515800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618db515800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618db515800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618db515800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 10:53:47 np0005535469 bash[89971]: 6e8437ef0c777c7e484f35f0916fa270636414f0bf5d76af32cc6aaed2babd37
Nov 25 10:53:47 np0005535469 systemd[1]: Started Ceph osd.1 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: bdev(0x5618da6dd800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 10:53:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:53:47 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:47 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:47 np0005535469 ceph-osd[89991]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: load: jerasure load: lrc 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:53:48 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 25 10:53:48 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db596c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs mount
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs mount shared_bdev_used = 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Git sha 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: DB SUMMARY
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: DB Session ID:  096I81I41VVBJIW9Y47M
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                     Options.env: 0x5618db567c70
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                Options.info_log: 0x5618da7648a0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.write_buffer_manager: 0x5618db670460
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.row_cache: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                              Options.wal_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.wal_compression: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_background_jobs: 4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Compression algorithms supported:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kZSTD supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kXpressCompression supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kZlibCompression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618da7511f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618da7511f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da7642c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da751090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618da751090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618da751090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:635]    (skipping printing options)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:635]    (skipping printing options)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f267584e-790a-4b60-9cf4-6d921e25b9b8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086028568805, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086028569047, "job": 1, "event": "recovery_finished"}
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: freelist init
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: freelist _read_cfg
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs umount
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bdev(0x5618db597400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs mount
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluefs mount shared_bdev_used = 4718592
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Git sha 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: DB SUMMARY
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: DB Session ID:  096I81I41VVBJIW9Y47N
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                     Options.env: 0x5618db718b60
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                Options.info_log: 0x5618da764600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.write_buffer_manager: 0x5618db670460
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.row_cache: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                              Options.wal_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.wal_compression: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_background_jobs: 4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Compression algorithms supported:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kZSTD supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kXpressCompression supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kZlibCompression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618da7511f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618da7511f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da7511f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da751090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da751090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:           Options.merge_operator: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5618da764380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618da751090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.compression: LZ4
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.num_levels: 7
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f267584e-790a-4b60-9cf4-6d921e25b9b8
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086028838597, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 10:53:48 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086028994218, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086028, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f267584e-790a-4b60-9cf4-6d921e25b9b8", "db_session_id": "096I81I41VVBJIW9Y47N", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:53:49 np0005535469 podman[90532]: 2025-11-25 15:53:48.965950465 +0000 UTC m=+0.023379128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:53:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 10:53:49 np0005535469 podman[90532]: 2025-11-25 15:53:49.15292219 +0000 UTC m=+0.210350843 container create fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:53:49 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086029187473, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086028, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f267584e-790a-4b60-9cf4-6d921e25b9b8", "db_session_id": "096I81I41VVBJIW9Y47N", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:53:49 np0005535469 systemd[1]: Started libpod-conmon-fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae.scope.
Nov 25 10:53:49 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086029387353, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086029, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f267584e-790a-4b60-9cf4-6d921e25b9b8", "db_session_id": "096I81I41VVBJIW9Y47N", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:53:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:49 np0005535469 ceph-osd[89991]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086029487300, "job": 1, "event": "recovery_finished"}
Nov 25 10:53:49 np0005535469 ceph-osd[89991]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 10:53:49 np0005535469 podman[90532]: 2025-11-25 15:53:49.53593746 +0000 UTC m=+0.593366133 container init fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:53:49 np0005535469 podman[90532]: 2025-11-25 15:53:49.546971996 +0000 UTC m=+0.604400649 container start fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:53:49 np0005535469 systemd[1]: libpod-fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae.scope: Deactivated successfully.
Nov 25 10:53:49 np0005535469 goofy_mendel[90549]: 167 167
Nov 25 10:53:49 np0005535469 conmon[90549]: conmon fa214081b324dff3b1f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae.scope/container/memory.events
Nov 25 10:53:49 np0005535469 podman[90532]: 2025-11-25 15:53:49.603176436 +0000 UTC m=+0.660605079 container attach fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:49 np0005535469 podman[90532]: 2025-11-25 15:53:49.603986069 +0000 UTC m=+0.661414712 container died fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:53:49 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:49 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d0019ff4dfccde70f6e23ad078a0f2fca84c914583f335a0d80d6e7b627184db-merged.mount: Deactivated successfully.
Nov 25 10:53:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5618da8be000
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: rocksdb: DB pointer 0x5618db659a00
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.2 total, 1.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.2 total, 1.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.2 total, 1.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.2 total, 1.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 460.80 MB usag
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: _get_class not permitted to load lua
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: _get_class not permitted to load sdk
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: _get_class not permitted to load test_remote_reads
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 load_pgs
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 load_pgs opened 0 pgs
Nov 25 10:53:50 np0005535469 ceph-osd[89991]: osd.1 0 log_to_monitors true
Nov 25 10:53:50 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1[89987]: 2025-11-25T15:53:50.074+0000 7ff11fba5740 -1 osd.1 0 log_to_monitors true
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: Deploying daemon osd.2 on compute-0
Nov 25 10:53:50 np0005535469 podman[90532]: 2025-11-25 15:53:50.316354459 +0000 UTC m=+1.373783102 container remove fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:50 np0005535469 systemd[1]: libpod-conmon-fa214081b324dff3b1f22234d10a54ef3ade12320f34577ced409224bc6c3bae.scope: Deactivated successfully.
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:50 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:50 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:53:50 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:53:50 np0005535469 podman[90614]: 2025-11-25 15:53:50.700253354 +0000 UTC m=+0.020480041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:50 np0005535469 podman[90614]: 2025-11-25 15:53:50.830427211 +0000 UTC m=+0.150653878 container create 1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 10:53:50 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:50 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:51 np0005535469 systemd[1]: Started libpod-conmon-1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28.scope.
Nov 25 10:53:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 10:53:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 10:53:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f5e00b2c606bb75046c9a58d73e330707ea880a85573c0cac0f3d0cb72475b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f5e00b2c606bb75046c9a58d73e330707ea880a85573c0cac0f3d0cb72475b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f5e00b2c606bb75046c9a58d73e330707ea880a85573c0cac0f3d0cb72475b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f5e00b2c606bb75046c9a58d73e330707ea880a85573c0cac0f3d0cb72475b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f5e00b2c606bb75046c9a58d73e330707ea880a85573c0cac0f3d0cb72475b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:51 np0005535469 podman[90614]: 2025-11-25 15:53:51.294999194 +0000 UTC m=+0.615225881 container init 1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 10:53:51 np0005535469 podman[90614]: 2025-11-25 15:53:51.304389717 +0000 UTC m=+0.624616374 container start 1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:53:51 np0005535469 podman[90614]: 2025-11-25 15:53:51.456903374 +0000 UTC m=+0.777130041 container attach 1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:51 np0005535469 ceph-mon[74985]: from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 10:53:51 np0005535469 ceph-mon[74985]: from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 10:53:51 np0005535469 ceph-mon[74985]: from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 10:53:51 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:52 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test[90630]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 10:53:52 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test[90630]:                            [--no-systemd] [--no-tmpfs]
Nov 25 10:53:52 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test[90630]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 10:53:52 np0005535469 systemd[1]: libpod-1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28.scope: Deactivated successfully.
Nov 25 10:53:52 np0005535469 podman[90614]: 2025-11-25 15:53:52.454124028 +0000 UTC m=+1.774350705 container died 1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0 done with init, starting boot process
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0 start_boot
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 10:53:52 np0005535469 ceph-osd[89991]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:52 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b0f5e00b2c606bb75046c9a58d73e330707ea880a85573c0cac0f3d0cb72475b-merged.mount: Deactivated successfully.
Nov 25 10:53:53 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:53 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:53 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:53 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:54 np0005535469 podman[90614]: 2025-11-25 15:53:54.708722837 +0000 UTC m=+4.028949514 container remove 1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:53:54 np0005535469 systemd[1]: libpod-conmon-1883b57bbc212b978ef1f02d3dc90969afaa794c39d3a41d81db8b679e1c0c28.scope: Deactivated successfully.
Nov 25 10:53:54 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:54 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:54 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:54 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:54 np0005535469 ceph-mon[74985]: from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 10:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:53:55 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:55 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:55 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:55 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:56 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:56 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:57 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:57 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:57 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:57 np0005535469 systemd[1]: Reloading.
Nov 25 10:53:57 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:53:57 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:53:58 np0005535469 systemd[1]: Starting Ceph osd.2 for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:53:58 np0005535469 podman[90789]: 2025-11-25 15:53:58.355630425 +0000 UTC m=+0.024870889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:53:58 np0005535469 podman[90789]: 2025-11-25 15:53:58.644957629 +0000 UTC m=+0.314198103 container create 31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:53:58 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:53:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:53:58 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:53:58 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:53:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:53:58 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:53:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:53:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906cdc27c441942761a465f5caf545e7c6bb273ac057f7bfa40c26cd8265af68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906cdc27c441942761a465f5caf545e7c6bb273ac057f7bfa40c26cd8265af68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906cdc27c441942761a465f5caf545e7c6bb273ac057f7bfa40c26cd8265af68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906cdc27c441942761a465f5caf545e7c6bb273ac057f7bfa40c26cd8265af68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906cdc27c441942761a465f5caf545e7c6bb273ac057f7bfa40c26cd8265af68/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:53:59 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:53:59 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:53:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:00 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:00 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:00 np0005535469 podman[90789]: 2025-11-25 15:54:00.319922173 +0000 UTC m=+1.989162627 container init 31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 10:54:00 np0005535469 podman[90789]: 2025-11-25 15:54:00.327399504 +0000 UTC m=+1.996639978 container start 31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:00 np0005535469 podman[90789]: 2025-11-25 15:54:00.609419621 +0000 UTC m=+2.278660095 container attach 31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 10:54:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:00 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:00 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 10:54:01 np0005535469 bash[90789]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 10:54:01 np0005535469 bash[90789]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 10:54:01 np0005535469 bash[90789]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 10:54:01 np0005535469 bash[90789]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:01 np0005535469 bash[90789]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 10:54:01 np0005535469 bash[90789]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 10:54:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate[90805]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 10:54:01 np0005535469 bash[90789]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 10:54:01 np0005535469 systemd[1]: libpod-31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118.scope: Deactivated successfully.
Nov 25 10:54:01 np0005535469 podman[90789]: 2025-11-25 15:54:01.345804546 +0000 UTC m=+3.015045000 container died 31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:54:01 np0005535469 systemd[1]: libpod-31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118.scope: Consumed 1.022s CPU time.
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:02 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:02 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:02 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:02 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-906cdc27c441942761a465f5caf545e7c6bb273ac057f7bfa40c26cd8265af68-merged.mount: Deactivated successfully.
Nov 25 10:54:03 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:03 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:03 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:03 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:04 np0005535469 podman[90789]: 2025-11-25 15:54:04.128272778 +0000 UTC m=+5.797513222 container remove 31f4f107c3160daaf22a7f00147429e70e8007c2b4870ba1d75bd47591279118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2-activate, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:04 np0005535469 podman[90975]: 2025-11-25 15:54:04.29360303 +0000 UTC m=+0.021691783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:04 np0005535469 podman[90975]: 2025-11-25 15:54:04.405787884 +0000 UTC m=+0.133876587 container create fe06467ae9661a3f0c1044ab0463e2ce7637a337a64da0f583e9ff1d3a1d0a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acb3b8c6cc5b51cbd7f5b0ff7633dc990c09f01a30f098c20cd89473922c44b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acb3b8c6cc5b51cbd7f5b0ff7633dc990c09f01a30f098c20cd89473922c44b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acb3b8c6cc5b51cbd7f5b0ff7633dc990c09f01a30f098c20cd89473922c44b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acb3b8c6cc5b51cbd7f5b0ff7633dc990c09f01a30f098c20cd89473922c44b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acb3b8c6cc5b51cbd7f5b0ff7633dc990c09f01a30f098c20cd89473922c44b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:04 np0005535469 podman[90975]: 2025-11-25 15:54:04.645603558 +0000 UTC m=+0.373692281 container init fe06467ae9661a3f0c1044ab0463e2ce7637a337a64da0f583e9ff1d3a1d0a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:04 np0005535469 podman[90975]: 2025-11-25 15:54:04.651075775 +0000 UTC m=+0.379164478 container start fe06467ae9661a3f0c1044ab0463e2ce7637a337a64da0f583e9ff1d3a1d0a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: pidfile_write: ignore empty --pid-file
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750a53f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750a53f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750a53f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750a53f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750b377800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750b377800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750b377800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750b377800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750b377800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 10:54:04 np0005535469 bash[90975]: fe06467ae9661a3f0c1044ab0463e2ce7637a337a64da0f583e9ff1d3a1d0a28
Nov 25 10:54:04 np0005535469 systemd[1]: Started Ceph osd.2 for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:54:04 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:04 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:04 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:04 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:04 np0005535469 ceph-osd[90994]: bdev(0x55750a53f800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: load: jerasure load: lrc 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a708c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluefs mount
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluefs mount shared_bdev_used = 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Git sha 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: DB SUMMARY
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: DB Session ID:  2XD89B6L7AFBPWM5PQV3
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                                     Options.env: 0x55750b3c9d50
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                                Options.info_log: 0x55750a5cad00
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.write_buffer_manager: 0x55750b4da460
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.row_cache: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                              Options.wal_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.wal_compression: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_background_jobs: 4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Compression algorithms supported:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kZSTD supported: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kXpressCompression supported: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kBZip2Compression supported: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kLZ4Compression supported: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kZlibCompression supported: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: #011kSnappyCompression supported: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 podman[91157]: 2025-11-25 15:54:05.796894291 +0000 UTC m=+0.103821580 container create 6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb320)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb320)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb320)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d6af63e2-1d4d-4ede-bfc4-0b564dbc211a
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086045802411, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086045804004, "job": 1, "event": "recovery_finished"}
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: freelist init
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: freelist _read_cfg
Nov 25 10:54:05 np0005535469 podman[91157]: 2025-11-25 15:54:05.71532549 +0000 UTC m=+0.022252789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bluefs umount
Nov 25 10:54:05 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 10:54:05 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:05 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:05 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:05 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:05 np0005535469 systemd[1]: Started libpod-conmon-6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e.scope.
Nov 25 10:54:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bdev(0x55750a709400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bluefs mount
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bluefs mount shared_bdev_used = 4718592
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: RocksDB version: 7.9.2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Git sha 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: DB SUMMARY
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: DB Session ID:  2XD89B6L7AFBPWM5PQV2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: CURRENT file:  CURRENT
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                         Options.error_if_exists: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.create_if_missing: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                                     Options.env: 0x55750b58a460
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                                Options.info_log: 0x55750a5caa40
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                              Options.statistics: (nil)
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.use_fsync: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                              Options.db_log_dir: 
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.write_buffer_manager: 0x55750b4da460
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.unordered_write: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.row_cache: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                              Options.wal_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.two_write_queues: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.wal_compression: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.atomic_flush: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_background_jobs: 4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_background_compactions: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_subcompactions: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.max_open_files: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Compression algorithms supported:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kZSTD supported: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kXpressCompression supported: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kZlibCompression supported: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cae80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb440)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55750a5b2430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb440)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:           Options.merge_operator: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55750a5cb440)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55750a5b2430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.compression: LZ4
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.num_levels: 7
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.bloom_locality: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                               Options.ttl: 2592000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                       Options.enable_blob_files: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                           Options.min_blob_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d6af63e2-1d4d-4ede-bfc4-0b564dbc211a
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086046058577, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 10:54:06 np0005535469 podman[91157]: 2025-11-25 15:54:06.092992047 +0000 UTC m=+0.399919366 container init 6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:54:06 np0005535469 podman[91157]: 2025-11-25 15:54:06.099883562 +0000 UTC m=+0.406810851 container start 6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:54:06 np0005535469 jolly_khayyam[91368]: 167 167
Nov 25 10:54:06 np0005535469 systemd[1]: libpod-6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e.scope: Deactivated successfully.
Nov 25 10:54:06 np0005535469 podman[91157]: 2025-11-25 15:54:06.128274045 +0000 UTC m=+0.435201334 container attach 6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 10:54:06 np0005535469 podman[91157]: 2025-11-25 15:54:06.129595701 +0000 UTC m=+0.436522990 container died 6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086046156147, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086046, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d6af63e2-1d4d-4ede-bfc4-0b564dbc211a", "db_session_id": "2XD89B6L7AFBPWM5PQV2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086046171284, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086046, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d6af63e2-1d4d-4ede-bfc4-0b564dbc211a", "db_session_id": "2XD89B6L7AFBPWM5PQV2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086046289184, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086046, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d6af63e2-1d4d-4ede-bfc4-0b564dbc211a", "db_session_id": "2XD89B6L7AFBPWM5PQV2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:54:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4151d1633e5669d1278aaa07e15c39f83ff18cc9e1db2beef44f9d28af2dff47-merged.mount: Deactivated successfully.
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086046435899, "job": 1, "event": "recovery_finished"}
Nov 25 10:54:06 np0005535469 ceph-osd[90994]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 10:54:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:06 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:06 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:06 np0005535469 podman[91157]: 2025-11-25 15:54:06.959979662 +0000 UTC m=+1.266906951 container remove 6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 10:54:06 np0005535469 systemd[1]: libpod-conmon-6af0be30d100cb8915c024ab9907dde2813e861669557f94f08fca5a57c3e86e.scope: Deactivated successfully.
Nov 25 10:54:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:07 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:07 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:07 np0005535469 podman[91579]: 2025-11-25 15:54:07.114673028 +0000 UTC m=+0.023768209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:07 np0005535469 podman[91579]: 2025-11-25 15:54:07.339603882 +0000 UTC m=+0.248699043 container create d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_davinci, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:54:07 np0005535469 systemd[1]: Started libpod-conmon-d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776.scope.
Nov 25 10:54:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f93539b8cc7d60e659efa7eb392ee604958c05747a605a0a07fe9583bd58997/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f93539b8cc7d60e659efa7eb392ee604958c05747a605a0a07fe9583bd58997/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f93539b8cc7d60e659efa7eb392ee604958c05747a605a0a07fe9583bd58997/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f93539b8cc7d60e659efa7eb392ee604958c05747a605a0a07fe9583bd58997/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55750b596000
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: rocksdb: DB pointer 0x55750b4cba00
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.5 total, 1.5 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.5 total, 1.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.5 total, 1.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.5 total, 1.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 460.80 MB usag
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: _get_class not permitted to load lua
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: _get_class not permitted to load sdk
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: _get_class not permitted to load test_remote_reads
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 load_pgs
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 load_pgs opened 0 pgs
Nov 25 10:54:07 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2[90990]: 2025-11-25T15:54:07.592+0000 7f35fa0e0740 -1 osd.2 0 log_to_monitors true
Nov 25 10:54:07 np0005535469 ceph-osd[90994]: osd.2 0 log_to_monitors true
Nov 25 10:54:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 25 10:54:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 10:54:07 np0005535469 podman[91579]: 2025-11-25 15:54:07.617009536 +0000 UTC m=+0.526104707 container init d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 10:54:07 np0005535469 podman[91579]: 2025-11-25 15:54:07.625964826 +0000 UTC m=+0.535059977 container start d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:07 np0005535469 podman[91579]: 2025-11-25 15:54:07.792347196 +0000 UTC m=+0.701442387 container attach d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_davinci, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:07 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:07 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:08 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:08 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:08 np0005535469 ceph-mon[74985]: from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 10:54:08 np0005535469 magical_davinci[91595]: {
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "osd_id": 1,
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "type": "bluestore"
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:    },
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "osd_id": 2,
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "type": "bluestore"
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:    },
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "osd_id": 0,
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:        "type": "bluestore"
Nov 25 10:54:08 np0005535469 magical_davinci[91595]:    }
Nov 25 10:54:08 np0005535469 magical_davinci[91595]: }
Nov 25 10:54:08 np0005535469 systemd[1]: libpod-d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776.scope: Deactivated successfully.
Nov 25 10:54:08 np0005535469 podman[91579]: 2025-11-25 15:54:08.547232819 +0000 UTC m=+1.456327980 container died d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:54:08 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 10:54:08 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 10:54:08 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:08 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e11 e11: 3 total, 0 up, 3 in
Nov 25 10:54:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 0 up, 3 in
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:54:09 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:54:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0f93539b8cc7d60e659efa7eb392ee604958c05747a605a0a07fe9583bd58997-merged.mount: Deactivated successfully.
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e11 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:54:10 np0005535469 podman[91579]: 2025-11-25 15:54:10.689764336 +0000 UTC m=+3.598859517 container remove d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:54:10 np0005535469 systemd[1]: libpod-conmon-d6cabdfa5c46bdb2f754cbf9f0d2b52e16afd1fb64bf306da8f76f18a7ea2776.scope: Deactivated successfully.
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e12 e12: 3 total, 0 up, 3 in
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0 done with init, starting boot process
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0 start_boot
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 10:54:10 np0005535469 ceph-osd[90994]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 0 up, 3 in
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:10 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: from='osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:12 np0005535469 podman[91896]: 2025-11-25 15:54:12.075231671 +0000 UTC m=+0.232051176 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:12 np0005535469 podman[91896]: 2025-11-25 15:54:12.16935159 +0000 UTC m=+0.326171085 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:12 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:12 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:12 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 1.981 iops: 507.107 elapsed_sec: 5.916
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [WRN] : OSD bench result of 507.107228 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 0 waiting for initial osdmap
Nov 25 10:54:13 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0[88886]: 2025-11-25T15:54:13.436+0000 7f8b9588d640 -1 osd.0 0 waiting for initial osdmap
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 check_osdmap_features require_osd_release unknown -> reef
Nov 25 10:54:13 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-0[88886]: 2025-11-25T15:54:13.606+0000 7f8b90eb5640 -1 osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 set_numa_affinity not setting numa affinity
Nov 25 10:54:13 np0005535469 ceph-osd[88890]: osd.0 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1296196428; not ready for session (expect reconnect)
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:13 np0005535469 ceph-mon[74985]: OSD bench result of 507.107228 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 10:54:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428] boot
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:14 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:14 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:14 np0005535469 ceph-osd[88890]: osd.0 13 state: booting -> active
Nov 25 10:54:14 np0005535469 podman[92289]: 2025-11-25 15:54:14.566816725 +0000 UTC m=+0.066066950 container create 8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 10:54:14 np0005535469 systemd[1]: Started libpod-conmon-8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467.scope.
Nov 25 10:54:14 np0005535469 podman[92289]: 2025-11-25 15:54:14.520429101 +0000 UTC m=+0.019679346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:14 np0005535469 podman[92289]: 2025-11-25 15:54:14.695377257 +0000 UTC m=+0.194627502 container init 8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:54:14 np0005535469 podman[92289]: 2025-11-25 15:54:14.708721486 +0000 UTC m=+0.207971711 container start 8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 10:54:14 np0005535469 reverent_jang[92305]: 167 167
Nov 25 10:54:14 np0005535469 systemd[1]: libpod-8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467.scope: Deactivated successfully.
Nov 25 10:54:14 np0005535469 podman[92289]: 2025-11-25 15:54:14.777866345 +0000 UTC m=+0.277116570 container attach 8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 10:54:14 np0005535469 podman[92289]: 2025-11-25 15:54:14.778738568 +0000 UTC m=+0.277988803 container died 8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:54:14 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:14 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:14 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:14 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f0b630762f094e0eec707c126c6ac84bd03a8ef0b685e6e4da5aa94dccbd8083-merged.mount: Deactivated successfully.
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: osd.0 [v2:192.168.122.100:6802/1296196428,v1:192.168.122.100:6803/1296196428] boot
Nov 25 10:54:15 np0005535469 podman[92289]: 2025-11-25 15:54:15.106329727 +0000 UTC m=+0.605579952 container remove 8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 10:54:15 np0005535469 systemd[1]: libpod-conmon-8142585fbba4c87ff63558b712c364ee7f82477be7e016b31c73602e6504e467.scope: Deactivated successfully.
Nov 25 10:54:15 np0005535469 python3[92348]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:15 np0005535469 podman[92355]: 2025-11-25 15:54:15.251392131 +0000 UTC m=+0.023163156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e13 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:54:15 np0005535469 podman[92355]: 2025-11-25 15:54:15.472118965 +0000 UTC m=+0.243889970 container create a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mccarthy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:15 np0005535469 systemd[1]: Started libpod-conmon-a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56.scope.
Nov 25 10:54:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a115f6ef3931ab02458cb60d0e3c1a23a90275b4afe0c383c8e30a37859fb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a115f6ef3931ab02458cb60d0e3c1a23a90275b4afe0c383c8e30a37859fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a115f6ef3931ab02458cb60d0e3c1a23a90275b4afe0c383c8e30a37859fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a115f6ef3931ab02458cb60d0e3c1a23a90275b4afe0c383c8e30a37859fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:15 np0005535469 podman[92370]: 2025-11-25 15:54:15.55834659 +0000 UTC m=+0.256464419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:15 np0005535469 podman[92370]: 2025-11-25 15:54:15.764373769 +0000 UTC m=+0.462491568 container create 15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b (image=quay.io/ceph/ceph:v18, name=reverent_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:15 np0005535469 systemd[1]: Started libpod-conmon-15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b.scope.
Nov 25 10:54:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082a2c0059bac8d28b0fc7f5bbcf3694d7b1ead10b586e766fbeb80c35a596bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082a2c0059bac8d28b0fc7f5bbcf3694d7b1ead10b586e766fbeb80c35a596bf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082a2c0059bac8d28b0fc7f5bbcf3694d7b1ead10b586e766fbeb80c35a596bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:16 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] creating mgr pool
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 10:54:16 np0005535469 podman[92355]: 2025-11-25 15:54:16.404240665 +0000 UTC m=+1.176011700 container init a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:16 np0005535469 podman[92355]: 2025-11-25 15:54:16.416977399 +0000 UTC m=+1.188748414 container start a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:16 np0005535469 podman[92355]: 2025-11-25 15:54:16.504047787 +0000 UTC m=+1.275818892 container attach a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e14 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 10:54:16 np0005535469 podman[92370]: 2025-11-25 15:54:16.780094128 +0000 UTC m=+1.478211937 container init 15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b (image=quay.io/ceph/ceph:v18, name=reverent_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:16 np0005535469 podman[92370]: 2025-11-25 15:54:16.785480389 +0000 UTC m=+1.483598188 container start 15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b (image=quay.io/ceph/ceph:v18, name=reverent_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 10:54:16 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:16 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:16 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:16 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 10:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e15 e15: 3 total, 1 up, 3 in
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e15 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 10:54:17 np0005535469 podman[92370]: 2025-11-25 15:54:17.174286098 +0000 UTC m=+1.872403897 container attach 15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b (image=quay.io/ceph/ceph:v18, name=reverent_kowalevski, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 1 up, 3 in
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:17 np0005535469 ceph-osd[88890]: osd.0 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 10:54:17 np0005535469 ceph-osd[88890]: osd.0 15 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 10:54:17 np0005535469 ceph-osd[88890]: osd.0 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214531004' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 10:54:17 np0005535469 reverent_kowalevski[92392]: 
Nov 25 10:54:17 np0005535469 reverent_kowalevski[92392]: {"fsid":"d82baeae-c742-50a4-b8f6-b5257c8a2c92","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":160,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":1,"osd_up_since":1764086054,"num_in_osds":3,"osd_in_since":1764086009,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":447016960,"bytes_avail":21023625216,"bytes_total":21470642176},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T15:53:41.930601+0000","services":{}},"progress_events":{}}
Nov 25 10:54:17 np0005535469 systemd[1]: libpod-15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b.scope: Deactivated successfully.
Nov 25 10:54:17 np0005535469 podman[92370]: 2025-11-25 15:54:17.616218568 +0000 UTC m=+2.314336387 container died 15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b (image=quay.io/ceph/ceph:v18, name=reverent_kowalevski, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]: [
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:    {
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "available": false,
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "ceph_device": false,
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "lsm_data": {},
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "lvs": [],
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "path": "/dev/sr0",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "rejected_reasons": [
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "Insufficient space (<5GB)",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "Has a FileSystem"
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        ],
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        "sys_api": {
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "actuators": null,
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "device_nodes": "sr0",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "devname": "sr0",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "human_readable_size": "482.00 KB",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "id_bus": "ata",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "model": "QEMU DVD-ROM",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "nr_requests": "2",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "parent": "/dev/sr0",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "partitions": {},
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "path": "/dev/sr0",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "removable": "1",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "rev": "2.5+",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "ro": "0",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "rotational": "1",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "sas_address": "",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "sas_device_handle": "",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "scheduler_mode": "mq-deadline",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "sectors": 0,
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "sectorsize": "2048",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "size": 493568.0,
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "support_discard": "2048",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "type": "disk",
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:            "vendor": "QEMU"
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:        }
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]:    }
Nov 25 10:54:17 np0005535469 tender_mccarthy[92385]: ]
Nov 25 10:54:17 np0005535469 systemd[1]: libpod-a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56.scope: Deactivated successfully.
Nov 25 10:54:17 np0005535469 systemd[1]: libpod-a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56.scope: Consumed 1.295s CPU time.
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 10:54:17 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 25 10:54:18 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:18 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:18 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:18 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 10:54:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e16 e16: 3 total, 1 up, 3 in
Nov 25 10:54:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-082a2c0059bac8d28b0fc7f5bbcf3694d7b1ead10b586e766fbeb80c35a596bf-merged.mount: Deactivated successfully.
Nov 25 10:54:19 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:19 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 10:54:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 10:54:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 1 up, 3 in
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:20 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:20 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 10:54:20 np0005535469 podman[92370]: 2025-11-25 15:54:20.648538735 +0000 UTC m=+5.346656564 container remove 15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b (image=quay.io/ceph/ceph:v18, name=reverent_kowalevski, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:20 np0005535469 systemd[1]: libpod-conmon-15504c15d08214d40198b286b43b7a37eedee5fd3d838648551417923eea3c4b.scope: Deactivated successfully.
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 10:54:20 np0005535469 podman[92355]: 2025-11-25 15:54:20.744605047 +0000 UTC m=+5.516376052 container died a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mccarthy, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:20 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:20 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:20 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:20 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-14a115f6ef3931ab02458cb60d0e3c1a23a90275b4afe0c383c8e30a37859fb3-merged.mount: Deactivated successfully.
Nov 25 10:54:21 np0005535469 python3[94296]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:21 np0005535469 podman[94256]: 2025-11-25 15:54:21.782436903 +0000 UTC m=+4.057259636 container remove a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mccarthy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:54:21 np0005535469 systemd[1]: libpod-conmon-a6a17aa6f9d07d47cd1fc7ce8fa2a30aa2d6c4a0ea571d1932db3a6e14a6ec56.scope: Deactivated successfully.
Nov 25 10:54:21 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:21 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 10:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:21 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:21 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:21 np0005535469 podman[94297]: 2025-11-25 15:54:21.876032172 +0000 UTC m=+0.659543883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: Cluster is now healthy
Nov 25 10:54:22 np0005535469 podman[94297]: 2025-11-25 15:54:22.299747235 +0000 UTC m=+1.083258916 container create e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7 (image=quay.io/ceph/ceph:v18, name=strange_turing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:22 np0005535469 systemd[1]: Started libpod-conmon-e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7.scope.
Nov 25 10:54:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db16ab9ff07fb79cabbf8f3407b25d9b8ea3ccb1fc1e920aa57c470efe95f0a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db16ab9ff07fb79cabbf8f3407b25d9b8ea3ccb1fc1e920aa57c470efe95f0a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:22 np0005535469 podman[94297]: 2025-11-25 15:54:22.507314074 +0000 UTC m=+1.290825805 container init e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7 (image=quay.io/ceph/ceph:v18, name=strange_turing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:22 np0005535469 podman[94297]: 2025-11-25 15:54:22.517026559 +0000 UTC m=+1.300538260 container start e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7 (image=quay.io/ceph/ceph:v18, name=strange_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:54:22 np0005535469 podman[94297]: 2025-11-25 15:54:22.580441577 +0000 UTC m=+1.363953278 container attach e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7 (image=quay.io/ceph/ceph:v18, name=strange_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f0f13c59-10f2-4f3e-b1ef-abc73d4d27d8 does not exist
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3ff101bc-e523-4d16-8097-6d545257b5a5 does not exist
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 88485f36-5706-4c0f-a2a2-4f5508f868cc does not exist
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 1.551 iops: 397.132 elapsed_sec: 7.554
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_mclock_max_capacity_iops_hdd}] v 0) v1
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 0 waiting for initial osdmap
Nov 25 10:54:22 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1[89987]: 2025-11-25T15:54:22.813+0000 7ff11c33c640 -1 osd.1 0 waiting for initial osdmap
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' 
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 396056.12 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 10:54:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:22 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 check_osdmap_features require_osd_release unknown -> reef
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 set_numa_affinity not setting numa affinity
Nov 25 10:54:22 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-1[89987]: 2025-11-25T15:54:22.898+0000 7ff11714d640 -1 osd.1 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 10:54:22 np0005535469 ceph-osd[89991]: osd.1 16 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3307089777' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600]' entity='osd.1' 
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3307089777' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.344052622 +0000 UTC m=+0.086660478 container create 2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.286802384 +0000 UTC m=+0.029410280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:23 np0005535469 systemd[1]: Started libpod-conmon-2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778.scope.
Nov 25 10:54:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.486785035 +0000 UTC m=+0.229392921 container init 2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.493365457 +0000 UTC m=+0.235973313 container start 2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 10:54:23 np0005535469 jovial_shannon[94493]: 167 167
Nov 25 10:54:23 np0005535469 systemd[1]: libpod-2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778.scope: Deactivated successfully.
Nov 25 10:54:23 np0005535469 conmon[94493]: conmon 2214a5898b2e2fd055e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778.scope/container/memory.events
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.531392352 +0000 UTC m=+0.274000238 container attach 2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.532057849 +0000 UTC m=+0.274665705 container died 2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:54:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7295d35b0f311a3f55b2eceaad93934d23700fcb11325cff3014b4c375907cf2-merged.mount: Deactivated successfully.
Nov 25 10:54:23 np0005535469 ceph-osd[89991]: osd.1 16 tick checking mon for new map
Nov 25 10:54:23 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 25 10:54:23 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/585131600; not ready for session (expect reconnect)
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3307089777' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 25 10:54:23 np0005535469 strange_turing[94312]: pool 'vms' created
Nov 25 10:54:23 np0005535469 systemd[1]: libpod-e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7.scope: Deactivated successfully.
Nov 25 10:54:23 np0005535469 podman[94478]: 2025-11-25 15:54:23.927098112 +0000 UTC m=+0.669705998 container remove 2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600] boot
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:23 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:23 np0005535469 ceph-osd[89991]: osd.1 17 state: booting -> active
Nov 25 10:54:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:23 np0005535469 podman[94297]: 2025-11-25 15:54:23.938280975 +0000 UTC m=+2.721792656 container died e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7 (image=quay.io/ceph/ceph:v18, name=strange_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:54:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 10:54:23 np0005535469 systemd[1]: libpod-conmon-2214a5898b2e2fd055e836a2db621147e824a49a9f1da93b8f936e12320f4778.scope: Deactivated successfully.
Nov 25 10:54:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-db16ab9ff07fb79cabbf8f3407b25d9b8ea3ccb1fc1e920aa57c470efe95f0a8-merged.mount: Deactivated successfully.
Nov 25 10:54:24 np0005535469 podman[94297]: 2025-11-25 15:54:24.093219107 +0000 UTC m=+2.876730788 container remove e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7 (image=quay.io/ceph/ceph:v18, name=strange_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 10:54:24 np0005535469 systemd[1]: libpod-conmon-e06737a61093bab937fdaca8f25f88a1047523b136a61c3fb97423862b5d47b7.scope: Deactivated successfully.
Nov 25 10:54:24 np0005535469 podman[94533]: 2025-11-25 15:54:24.133975124 +0000 UTC m=+0.068116944 container create 2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:24 np0005535469 systemd[1]: Started libpod-conmon-2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421.scope.
Nov 25 10:54:24 np0005535469 podman[94533]: 2025-11-25 15:54:24.108609 +0000 UTC m=+0.042750850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b7b2893e68875f67abc476f135c02d5c6ddbc6ae83ebbe1a091b03dee18e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b7b2893e68875f67abc476f135c02d5c6ddbc6ae83ebbe1a091b03dee18e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b7b2893e68875f67abc476f135c02d5c6ddbc6ae83ebbe1a091b03dee18e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b7b2893e68875f67abc476f135c02d5c6ddbc6ae83ebbe1a091b03dee18e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b7b2893e68875f67abc476f135c02d5c6ddbc6ae83ebbe1a091b03dee18e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 podman[94533]: 2025-11-25 15:54:24.233303091 +0000 UTC m=+0.167444921 container init 2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:54:24 np0005535469 podman[94533]: 2025-11-25 15:54:24.242736699 +0000 UTC m=+0.176878519 container start 2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:24 np0005535469 podman[94533]: 2025-11-25 15:54:24.246022805 +0000 UTC m=+0.180164615 container attach 2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:54:24 np0005535469 python3[94579]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3307089777' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: osd.1 [v2:192.168.122.100:6806/585131600,v1:192.168.122.100:6807/585131600] boot
Nov 25 10:54:24 np0005535469 podman[94581]: 2025-11-25 15:54:24.43281766 +0000 UTC m=+0.024125731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:24 np0005535469 podman[94581]: 2025-11-25 15:54:24.52950953 +0000 UTC m=+0.120817581 container create 8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169 (image=quay.io/ceph/ceph:v18, name=trusting_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:54:24 np0005535469 systemd[1]: Started libpod-conmon-8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169.scope.
Nov 25 10:54:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f7b25e9ba01572dff8f49539596408b22204576806276931f16c20b9e33ebd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f7b25e9ba01572dff8f49539596408b22204576806276931f16c20b9e33ebd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:24 np0005535469 podman[94581]: 2025-11-25 15:54:24.633922331 +0000 UTC m=+0.225230402 container init 8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169 (image=quay.io/ceph/ceph:v18, name=trusting_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:24 np0005535469 podman[94581]: 2025-11-25 15:54:24.639043245 +0000 UTC m=+0.230351296 container start 8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169 (image=quay.io/ceph/ceph:v18, name=trusting_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 10:54:24 np0005535469 podman[94581]: 2025-11-25 15:54:24.644929809 +0000 UTC m=+0.236237880 container attach 8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169 (image=quay.io/ceph/ceph:v18, name=trusting_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.261 iops: 4930.848 elapsed_sec: 0.608
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: log_channel(cluster) log [WRN] : OSD bench result of 4930.847607 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 0 waiting for initial osdmap
Nov 25 10:54:24 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2[90990]: 2025-11-25T15:54:24.717+0000 7f35f6877640 -1 osd.2 0 waiting for initial osdmap
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 check_osdmap_features require_osd_release unknown -> reef
Nov 25 10:54:24 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-osd-2[90990]: 2025-11-25T15:54:24.750+0000 7f35f1688640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 set_numa_affinity not setting numa affinity
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 25 10:54:24 np0005535469 ceph-mgr[75280]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3843923788; not ready for session (expect reconnect)
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:24 np0005535469 ceph-mgr[75280]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788] boot
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 10:54:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 18 state: booting -> active
Nov 25 10:54:24 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:24 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:24 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] creating main.db for devicehealth
Nov 25 10:54:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/236119449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:25 np0005535469 upbeat_bardeen[94552]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:54:25 np0005535469 upbeat_bardeen[94552]: --> relative data size: 1.0
Nov 25 10:54:25 np0005535469 upbeat_bardeen[94552]: --> All data devices are unavailable
Nov 25 10:54:25 np0005535469 systemd[1]: libpod-2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421.scope: Deactivated successfully.
Nov 25 10:54:25 np0005535469 podman[94533]: 2025-11-25 15:54:25.307451329 +0000 UTC m=+1.241593209 container died 2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bardeen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-974b7b2893e68875f67abc476f135c02d5c6ddbc6ae83ebbe1a091b03dee18e2-merged.mount: Deactivated successfully.
Nov 25 10:54:25 np0005535469 podman[94533]: 2025-11-25 15:54:25.402491555 +0000 UTC m=+1.336633375 container remove 2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bardeen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:54:25 np0005535469 systemd[1]: libpod-conmon-2c4129841b5715fd4cbc3db94ef6bda1cc351f59d51fb67bd3ef504f9420b421.scope: Deactivated successfully.
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: osd.2 [v2:192.168.122.100:6810/3843923788,v1:192.168.122.100:6811/3843923788] boot
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/236119449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/236119449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 25 10:54:25 np0005535469 trusting_sanderson[94597]: pool 'volumes' created
Nov 25 10:54:25 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 25 10:54:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:25 np0005535469 systemd[1]: libpod-8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169.scope: Deactivated successfully.
Nov 25 10:54:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v63: 3 pgs: 1 unknown, 2 creating+peering; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 25 10:54:25 np0005535469 podman[94817]: 2025-11-25 15:54:25.984139579 +0000 UTC m=+0.043845337 container create 26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:54:25 np0005535469 podman[94824]: 2025-11-25 15:54:25.998977617 +0000 UTC m=+0.042263967 container died 8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169 (image=quay.io/ceph/ceph:v18, name=trusting_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 10:54:26 np0005535469 systemd[1]: Started libpod-conmon-26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08.scope.
Nov 25 10:54:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e0f7b25e9ba01572dff8f49539596408b22204576806276931f16c20b9e33ebd-merged.mount: Deactivated successfully.
Nov 25 10:54:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:26 np0005535469 podman[94817]: 2025-11-25 15:54:25.962480532 +0000 UTC m=+0.022186270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:26 np0005535469 podman[94824]: 2025-11-25 15:54:26.291628402 +0000 UTC m=+0.334914752 container remove 8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169 (image=quay.io/ceph/ceph:v18, name=trusting_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:54:26 np0005535469 systemd[1]: libpod-conmon-8d5cc3c7488307b9abcdd57418347e35e7ef4f043d8a9286e8922c7fbbf4d169.scope: Deactivated successfully.
Nov 25 10:54:26 np0005535469 podman[94817]: 2025-11-25 15:54:26.377870108 +0000 UTC m=+0.437575926 container init 26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 10:54:26 np0005535469 podman[94817]: 2025-11-25 15:54:26.385361033 +0000 UTC m=+0.445066781 container start 26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:54:26 np0005535469 podman[94817]: 2025-11-25 15:54:26.390007425 +0000 UTC m=+0.449713163 container attach 26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:54:26 np0005535469 fervent_shtern[94844]: 167 167
Nov 25 10:54:26 np0005535469 systemd[1]: libpod-26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08.scope: Deactivated successfully.
Nov 25 10:54:26 np0005535469 podman[94817]: 2025-11-25 15:54:26.392000548 +0000 UTC m=+0.451706286 container died 26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:26 np0005535469 ceph-mon[74985]: OSD bench result of 4930.847607 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 10:54:26 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/236119449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0cead005eb1d83ba8f6ac9b37256e113d312350b8d9862bd0729cad49809c91b-merged.mount: Deactivated successfully.
Nov 25 10:54:26 np0005535469 podman[94817]: 2025-11-25 15:54:26.443120055 +0000 UTC m=+0.502825793 container remove 26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:54:26 np0005535469 systemd[1]: libpod-conmon-26d9f8391b305c6b82fecf39f253f9ac5906dd42547f47b8d5e4599e53406e08.scope: Deactivated successfully.
Nov 25 10:54:26 np0005535469 python3[94888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:26 np0005535469 podman[94894]: 2025-11-25 15:54:26.588915538 +0000 UTC m=+0.046351604 container create 1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:26 np0005535469 systemd[1]: Started libpod-conmon-1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a.scope.
Nov 25 10:54:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f214ea62f0485f337cf0476b290b336274f9dfeadfe3dfe5727176c20ba6dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f214ea62f0485f337cf0476b290b336274f9dfeadfe3dfe5727176c20ba6dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f214ea62f0485f337cf0476b290b336274f9dfeadfe3dfe5727176c20ba6dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f214ea62f0485f337cf0476b290b336274f9dfeadfe3dfe5727176c20ba6dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:26 np0005535469 podman[94894]: 2025-11-25 15:54:26.563631377 +0000 UTC m=+0.021067473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:26 np0005535469 podman[94909]: 2025-11-25 15:54:26.676295494 +0000 UTC m=+0.076287417 container create 0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca (image=quay.io/ceph/ceph:v18, name=bold_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:54:26 np0005535469 podman[94909]: 2025-11-25 15:54:26.627859667 +0000 UTC m=+0.027851610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:26 np0005535469 systemd[1]: Started libpod-conmon-0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca.scope.
Nov 25 10:54:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df7cf272d6ca390cdcf07af145ab85059ca72b4b0babb7287aef987f0f5aac2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df7cf272d6ca390cdcf07af145ab85059ca72b4b0babb7287aef987f0f5aac2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:26 np0005535469 podman[94894]: 2025-11-25 15:54:26.890694912 +0000 UTC m=+0.348130998 container init 1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:26 np0005535469 podman[94894]: 2025-11-25 15:54:26.901961207 +0000 UTC m=+0.359397273 container start 1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 25 10:54:26 np0005535469 podman[94909]: 2025-11-25 15:54:26.943164764 +0000 UTC m=+0.343156767 container init 0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca (image=quay.io/ceph/ceph:v18, name=bold_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:26 np0005535469 podman[94909]: 2025-11-25 15:54:26.948616546 +0000 UTC m=+0.348608479 container start 0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:54:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 25 10:54:26 np0005535469 podman[94909]: 2025-11-25 15:54:26.957316065 +0000 UTC m=+0.357307988 container attach 0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:26 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 25 10:54:26 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mavpeh(active, since 107s)
Nov 25 10:54:26 np0005535469 podman[94894]: 2025-11-25 15:54:26.984842724 +0000 UTC m=+0.442278810 container attach 1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 25 10:54:26 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/665717184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:27 np0005535469 clever_raman[94922]: {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:    "0": [
Nov 25 10:54:27 np0005535469 clever_raman[94922]:        {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "devices": [
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "/dev/loop3"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            ],
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_name": "ceph_lv0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_size": "21470642176",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "name": "ceph_lv0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "tags": {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cluster_name": "ceph",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.crush_device_class": "",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.encrypted": "0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osd_id": "0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.type": "block",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.vdo": "0"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            },
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "type": "block",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "vg_name": "ceph_vg0"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:        }
Nov 25 10:54:27 np0005535469 clever_raman[94922]:    ],
Nov 25 10:54:27 np0005535469 clever_raman[94922]:    "1": [
Nov 25 10:54:27 np0005535469 clever_raman[94922]:        {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "devices": [
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "/dev/loop4"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            ],
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_name": "ceph_lv1",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_size": "21470642176",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "name": "ceph_lv1",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "tags": {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cluster_name": "ceph",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.crush_device_class": "",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.encrypted": "0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osd_id": "1",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.type": "block",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.vdo": "0"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            },
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "type": "block",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "vg_name": "ceph_vg1"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:        }
Nov 25 10:54:27 np0005535469 clever_raman[94922]:    ],
Nov 25 10:54:27 np0005535469 clever_raman[94922]:    "2": [
Nov 25 10:54:27 np0005535469 clever_raman[94922]:        {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "devices": [
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "/dev/loop5"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            ],
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_name": "ceph_lv2",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_size": "21470642176",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "name": "ceph_lv2",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "tags": {
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.cluster_name": "ceph",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.crush_device_class": "",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.encrypted": "0",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osd_id": "2",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.type": "block",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:                "ceph.vdo": "0"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            },
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "type": "block",
Nov 25 10:54:27 np0005535469 clever_raman[94922]:            "vg_name": "ceph_vg2"
Nov 25 10:54:27 np0005535469 clever_raman[94922]:        }
Nov 25 10:54:27 np0005535469 clever_raman[94922]:    ]
Nov 25 10:54:27 np0005535469 clever_raman[94922]: }
Nov 25 10:54:27 np0005535469 systemd[1]: libpod-1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a.scope: Deactivated successfully.
Nov 25 10:54:27 np0005535469 podman[94894]: 2025-11-25 15:54:27.66172769 +0000 UTC m=+1.119163806 container died 1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 10:54:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e1f214ea62f0485f337cf0476b290b336274f9dfeadfe3dfe5727176c20ba6dd-merged.mount: Deactivated successfully.
Nov 25 10:54:27 np0005535469 podman[94894]: 2025-11-25 15:54:27.720148158 +0000 UTC m=+1.177584224 container remove 1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:27 np0005535469 systemd[1]: libpod-conmon-1545bcf572fdf20f716751f8e17f5f5bcf2039a1e41f09211d5a844ae3f0727a.scope: Deactivated successfully.
Nov 25 10:54:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 1 unknown, 2 creating+peering; 0 B data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/665717184' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 25 10:54:27 np0005535469 bold_perlman[94930]: pool 'backups' created
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/665717184' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:27 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 25 10:54:27 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:27 np0005535469 systemd[1]: libpod-0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca.scope: Deactivated successfully.
Nov 25 10:54:28 np0005535469 podman[95076]: 2025-11-25 15:54:28.026382318 +0000 UTC m=+0.022898899 container died 0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca (image=quay.io/ceph/ceph:v18, name=bold_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 10:54:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5df7cf272d6ca390cdcf07af145ab85059ca72b4b0babb7287aef987f0f5aac2-merged.mount: Deactivated successfully.
Nov 25 10:54:28 np0005535469 podman[95076]: 2025-11-25 15:54:28.115079769 +0000 UTC m=+0.111596340 container remove 0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:28 np0005535469 systemd[1]: libpod-conmon-0b4d06bf703a72924ef17d74741d1d2355be59a0a9ddfbd54d23b7c40292a6ca.scope: Deactivated successfully.
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.304024221 +0000 UTC m=+0.078568646 container create a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:28 np0005535469 systemd[1]: Started libpod-conmon-a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2.scope.
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.245355216 +0000 UTC m=+0.019899661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.375214844 +0000 UTC m=+0.149759269 container init a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.380979644 +0000 UTC m=+0.155524069 container start a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 10:54:28 np0005535469 infallible_agnesi[95173]: 167 167
Nov 25 10:54:28 np0005535469 python3[95170]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:28 np0005535469 systemd[1]: libpod-a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2.scope: Deactivated successfully.
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.391479298 +0000 UTC m=+0.166023723 container attach a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.391721585 +0000 UTC m=+0.166266010 container died a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ec475d276d1a29660eb5b06af5ef9867cef93f07f2e65dd061c65b6ee34dd698-merged.mount: Deactivated successfully.
Nov 25 10:54:28 np0005535469 podman[95149]: 2025-11-25 15:54:28.468572855 +0000 UTC m=+0.243117280 container remove a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:28 np0005535469 podman[95179]: 2025-11-25 15:54:28.475250439 +0000 UTC m=+0.077498027 container create c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39 (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 10:54:28 np0005535469 systemd[1]: libpod-conmon-a10e17d14eea09fcb3355b4ebfbaaeb9dcc29470cacfda3a957187f9114742b2.scope: Deactivated successfully.
Nov 25 10:54:28 np0005535469 systemd[1]: Started libpod-conmon-c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39.scope.
Nov 25 10:54:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd50220ecad0d33bbfa421a46efca47b5e16a5b98c06fe13f2ea82baa5c413a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd50220ecad0d33bbfa421a46efca47b5e16a5b98c06fe13f2ea82baa5c413a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:28 np0005535469 podman[95179]: 2025-11-25 15:54:28.533016141 +0000 UTC m=+0.135263739 container init c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39 (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 10:54:28 np0005535469 podman[95179]: 2025-11-25 15:54:28.435663664 +0000 UTC m=+0.037911282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:28 np0005535469 podman[95179]: 2025-11-25 15:54:28.542848509 +0000 UTC m=+0.145096097 container start c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39 (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:28 np0005535469 podman[95179]: 2025-11-25 15:54:28.545713924 +0000 UTC m=+0.147961512 container attach c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39 (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:28 np0005535469 podman[95220]: 2025-11-25 15:54:28.631569249 +0000 UTC m=+0.058537283 container create 7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:28 np0005535469 podman[95220]: 2025-11-25 15:54:28.597545049 +0000 UTC m=+0.024513113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:28 np0005535469 systemd[1]: Started libpod-conmon-7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55.scope.
Nov 25 10:54:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3864bed40b45f7cc3378cfbf00fd55ea760ad35aca997f129f5fa7e2ad686cf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3864bed40b45f7cc3378cfbf00fd55ea760ad35aca997f129f5fa7e2ad686cf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3864bed40b45f7cc3378cfbf00fd55ea760ad35aca997f129f5fa7e2ad686cf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3864bed40b45f7cc3378cfbf00fd55ea760ad35aca997f129f5fa7e2ad686cf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:28 np0005535469 podman[95220]: 2025-11-25 15:54:28.817576995 +0000 UTC m=+0.244545029 container init 7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 10:54:28 np0005535469 podman[95220]: 2025-11-25 15:54:28.825128182 +0000 UTC m=+0.252096216 container start 7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 10:54:28 np0005535469 podman[95220]: 2025-11-25 15:54:28.833420288 +0000 UTC m=+0.260388332 container attach 7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 10:54:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/640286504' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/665717184' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:29 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]: {
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "osd_id": 1,
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "type": "bluestore"
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:    },
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "osd_id": 2,
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "type": "bluestore"
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:    },
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "osd_id": 0,
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:        "type": "bluestore"
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]:    }
Nov 25 10:54:29 np0005535469 amazing_khayyam[95236]: }
Nov 25 10:54:29 np0005535469 systemd[1]: libpod-7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55.scope: Deactivated successfully.
Nov 25 10:54:29 np0005535469 podman[95220]: 2025-11-25 15:54:29.738561934 +0000 UTC m=+1.165529988 container died 7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:54:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3864bed40b45f7cc3378cfbf00fd55ea760ad35aca997f129f5fa7e2ad686cf8-merged.mount: Deactivated successfully.
Nov 25 10:54:29 np0005535469 podman[95220]: 2025-11-25 15:54:29.898298493 +0000 UTC m=+1.325266537 container remove 7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 10:54:29 np0005535469 systemd[1]: libpod-conmon-7b61aef998ccc72a303d65df2853a114ce9a3ba5b916185c5ad858dbd10fca55.scope: Deactivated successfully.
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 3 active+clean, 1 creating+peering; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/640286504' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 25 10:54:30 np0005535469 nifty_dijkstra[95209]: pool 'images' created
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 25 10:54:30 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:30 np0005535469 systemd[1]: libpod-c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39.scope: Deactivated successfully.
Nov 25 10:54:30 np0005535469 podman[95179]: 2025-11-25 15:54:30.084718359 +0000 UTC m=+1.686965947 container died c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39 (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:54:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bcd50220ecad0d33bbfa421a46efca47b5e16a5b98c06fe13f2ea82baa5c413a-merged.mount: Deactivated successfully.
Nov 25 10:54:30 np0005535469 podman[95179]: 2025-11-25 15:54:30.138974928 +0000 UTC m=+1.741222526 container remove c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39 (image=quay.io/ceph/ceph:v18, name=nifty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:30 np0005535469 systemd[1]: libpod-conmon-c2a0a0a3b3d4c67ba2e2635568cd24e9fac60d1a49c5b8948410c3f5fa27ab39.scope: Deactivated successfully.
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/640286504' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/640286504' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:30 np0005535469 python3[95476]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:30 np0005535469 podman[95495]: 2025-11-25 15:54:30.501003569 +0000 UTC m=+0.073987217 container create c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89 (image=quay.io/ceph/ceph:v18, name=flamboyant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 10:54:30 np0005535469 systemd[1]: Started libpod-conmon-c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89.scope.
Nov 25 10:54:30 np0005535469 podman[95495]: 2025-11-25 15:54:30.467522173 +0000 UTC m=+0.040505841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04359b459b26bab5e38dab053c175c75d8ca9197badcd977719b952e1728fa3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04359b459b26bab5e38dab053c175c75d8ca9197badcd977719b952e1728fa3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:30 np0005535469 podman[95495]: 2025-11-25 15:54:30.58747048 +0000 UTC m=+0.160454158 container init c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89 (image=quay.io/ceph/ceph:v18, name=flamboyant_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:54:30 np0005535469 podman[95495]: 2025-11-25 15:54:30.594768141 +0000 UTC m=+0.167751789 container start c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89 (image=quay.io/ceph/ceph:v18, name=flamboyant_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:30 np0005535469 podman[95495]: 2025-11-25 15:54:30.598461327 +0000 UTC m=+0.171444975 container attach c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89 (image=quay.io/ceph/ceph:v18, name=flamboyant_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:30 np0005535469 podman[95580]: 2025-11-25 15:54:30.873428969 +0000 UTC m=+0.071092080 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:54:30 np0005535469 podman[95580]: 2025-11-25 15:54:30.972998294 +0000 UTC m=+0.170661395 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1414265810' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1414265810' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5150d390-6c6c-4cd5-8e69-5091b3d079d8 does not exist
Nov 25 10:54:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 728ec9e4-aff0-4c29-b4b2-888b4e535576 does not exist
Nov 25 10:54:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b265e2f6-0c67-4b23-85d3-8d8d3b854a76 does not exist
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:54:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:54:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:31 np0005535469 podman[95867]: 2025-11-25 15:54:31.987111661 +0000 UTC m=+0.036266750 container create a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_fermi, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 10:54:32 np0005535469 systemd[1]: Started libpod-conmon-a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae.scope.
Nov 25 10:54:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:32 np0005535469 podman[95867]: 2025-11-25 15:54:32.064191558 +0000 UTC m=+0.113346657 container init a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:54:32 np0005535469 podman[95867]: 2025-11-25 15:54:31.969005168 +0000 UTC m=+0.018160277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:32 np0005535469 podman[95867]: 2025-11-25 15:54:32.072577207 +0000 UTC m=+0.121732296 container start a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:32 np0005535469 optimistic_fermi[95885]: 167 167
Nov 25 10:54:32 np0005535469 podman[95867]: 2025-11-25 15:54:32.075951135 +0000 UTC m=+0.125106244 container attach a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_fermi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:54:32 np0005535469 systemd[1]: libpod-a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae.scope: Deactivated successfully.
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 25 10:54:32 np0005535469 podman[95867]: 2025-11-25 15:54:32.077874275 +0000 UTC m=+0.127029364 container died a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1414265810' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 10:54:32 np0005535469 flamboyant_turing[95527]: pool 'cephfs.cephfs.meta' created
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 25 10:54:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-84a596bd3b34d78cfb1c602789dcfb8d1deb9afb2011507250186e8a2f156b72-merged.mount: Deactivated successfully.
Nov 25 10:54:32 np0005535469 systemd[1]: libpod-c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89.scope: Deactivated successfully.
Nov 25 10:54:32 np0005535469 podman[95495]: 2025-11-25 15:54:32.115018067 +0000 UTC m=+1.688001725 container died c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89 (image=quay.io/ceph/ceph:v18, name=flamboyant_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:32 np0005535469 podman[95867]: 2025-11-25 15:54:32.129627349 +0000 UTC m=+0.178782438 container remove a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-04359b459b26bab5e38dab053c175c75d8ca9197badcd977719b952e1728fa3b-merged.mount: Deactivated successfully.
Nov 25 10:54:32 np0005535469 systemd[1]: libpod-conmon-a51433b84daf73c43c3d1320ce35e0b57242312885b8408f3bf6342b2a7d5dae.scope: Deactivated successfully.
Nov 25 10:54:32 np0005535469 podman[95495]: 2025-11-25 15:54:32.176624379 +0000 UTC m=+1.749608027 container remove c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89 (image=quay.io/ceph/ceph:v18, name=flamboyant_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:54:32 np0005535469 systemd[1]: libpod-conmon-c828f0f9eba002c77ee81ba7393637c29978a99f51e95954278040d40f28bf89.scope: Deactivated successfully.
Nov 25 10:54:32 np0005535469 podman[95921]: 2025-11-25 15:54:32.277673232 +0000 UTC m=+0.037517383 container create c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:32 np0005535469 systemd[1]: Started libpod-conmon-c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927.scope.
Nov 25 10:54:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b4e2762e630ae1fb98a7a185bba9fbef4d4bb4eb269baf4f342fa2e5368d03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b4e2762e630ae1fb98a7a185bba9fbef4d4bb4eb269baf4f342fa2e5368d03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b4e2762e630ae1fb98a7a185bba9fbef4d4bb4eb269baf4f342fa2e5368d03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b4e2762e630ae1fb98a7a185bba9fbef4d4bb4eb269baf4f342fa2e5368d03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b4e2762e630ae1fb98a7a185bba9fbef4d4bb4eb269baf4f342fa2e5368d03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 podman[95921]: 2025-11-25 15:54:32.261993361 +0000 UTC m=+0.021837532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:32 np0005535469 podman[95921]: 2025-11-25 15:54:32.375886431 +0000 UTC m=+0.135730602 container init c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:32 np0005535469 podman[95921]: 2025-11-25 15:54:32.382410001 +0000 UTC m=+0.142254152 container start c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_solomon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:54:32 np0005535469 podman[95921]: 2025-11-25 15:54:32.439886005 +0000 UTC m=+0.199730156 container attach c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_solomon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:32 np0005535469 python3[95965]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:54:32 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1414265810' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:32 np0005535469 podman[95969]: 2025-11-25 15:54:32.529824137 +0000 UTC m=+0.066109870 container create 9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63 (image=quay.io/ceph/ceph:v18, name=flamboyant_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:54:32 np0005535469 systemd[1]: Started libpod-conmon-9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63.scope.
Nov 25 10:54:32 np0005535469 podman[95969]: 2025-11-25 15:54:32.483087904 +0000 UTC m=+0.019373667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f904d65bdf9809c5a86720acfe6e2833fb2c76edc57fbf6ab84ea83c414e366c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f904d65bdf9809c5a86720acfe6e2833fb2c76edc57fbf6ab84ea83c414e366c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:32 np0005535469 podman[95969]: 2025-11-25 15:54:32.610797886 +0000 UTC m=+0.147083649 container init 9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63 (image=quay.io/ceph/ceph:v18, name=flamboyant_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 10:54:32 np0005535469 podman[95969]: 2025-11-25 15:54:32.616884354 +0000 UTC m=+0.153170107 container start 9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63 (image=quay.io/ceph/ceph:v18, name=flamboyant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 10:54:32 np0005535469 podman[95969]: 2025-11-25 15:54:32.620573871 +0000 UTC m=+0.156859604 container attach 9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63 (image=quay.io/ceph/ceph:v18, name=flamboyant_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:32 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1893288992' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:33 np0005535469 confident_solomon[95963]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:54:33 np0005535469 confident_solomon[95963]: --> relative data size: 1.0
Nov 25 10:54:33 np0005535469 confident_solomon[95963]: --> All data devices are unavailable
Nov 25 10:54:33 np0005535469 systemd[1]: libpod-c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927.scope: Deactivated successfully.
Nov 25 10:54:33 np0005535469 systemd[1]: libpod-c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927.scope: Consumed 1.016s CPU time.
Nov 25 10:54:33 np0005535469 podman[95921]: 2025-11-25 15:54:33.442364366 +0000 UTC m=+1.202208537 container died c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_solomon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 10:54:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c4b4e2762e630ae1fb98a7a185bba9fbef4d4bb4eb269baf4f342fa2e5368d03-merged.mount: Deactivated successfully.
Nov 25 10:54:33 np0005535469 podman[95921]: 2025-11-25 15:54:33.497260413 +0000 UTC m=+1.257104564 container remove c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:33 np0005535469 systemd[1]: libpod-conmon-c3a9fe522fc291589d350cc07aad040c5eecc420af1b8ab49515d806f8642927.scope: Deactivated successfully.
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1893288992' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 10:54:33 np0005535469 flamboyant_wozniak[95985]: pool 'cephfs.cephfs.data' created
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 25 10:54:33 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:33 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1893288992' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 10:54:33 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:33 np0005535469 systemd[1]: libpod-9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63.scope: Deactivated successfully.
Nov 25 10:54:33 np0005535469 podman[95969]: 2025-11-25 15:54:33.575423547 +0000 UTC m=+1.111709280 container died 9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63 (image=quay.io/ceph/ceph:v18, name=flamboyant_wozniak, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 10:54:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f904d65bdf9809c5a86720acfe6e2833fb2c76edc57fbf6ab84ea83c414e366c-merged.mount: Deactivated successfully.
Nov 25 10:54:33 np0005535469 podman[95969]: 2025-11-25 15:54:33.724728482 +0000 UTC m=+1.261014215 container remove 9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63 (image=quay.io/ceph/ceph:v18, name=flamboyant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:33 np0005535469 systemd[1]: libpod-conmon-9cb4118f573c09627f16b8ff80fb3124300bc5e91838a9c413f08a9b4e3e4f63.scope: Deactivated successfully.
Nov 25 10:54:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:34 np0005535469 python3[96188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:34 np0005535469 podman[96223]: 2025-11-25 15:54:34.064422939 +0000 UTC m=+0.041512387 container create e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee (image=quay.io/ceph/ceph:v18, name=kind_kapitsa, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:34 np0005535469 systemd[1]: Started libpod-conmon-e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee.scope.
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.102617647 +0000 UTC m=+0.055705568 container create 6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brown, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:54:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36a85330c2fecc9353b9d5c8f12644be744728c306cb156bd45d895422983a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36a85330c2fecc9353b9d5c8f12644be744728c306cb156bd45d895422983a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:34 np0005535469 systemd[1]: Started libpod-conmon-6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b.scope.
Nov 25 10:54:34 np0005535469 podman[96223]: 2025-11-25 15:54:34.141872674 +0000 UTC m=+0.118962132 container init e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee (image=quay.io/ceph/ceph:v18, name=kind_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:54:34 np0005535469 podman[96223]: 2025-11-25 15:54:34.046017597 +0000 UTC m=+0.023107065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:34 np0005535469 podman[96223]: 2025-11-25 15:54:34.1485666 +0000 UTC m=+0.125656038 container start e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee (image=quay.io/ceph/ceph:v18, name=kind_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:34 np0005535469 podman[96223]: 2025-11-25 15:54:34.154261199 +0000 UTC m=+0.131350637 container attach e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee (image=quay.io/ceph/ceph:v18, name=kind_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.162231316 +0000 UTC m=+0.115319247 container init 6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.167488234 +0000 UTC m=+0.120576155 container start 6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brown, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 25 10:54:34 np0005535469 bold_brown[96261]: 167 167
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.082795499 +0000 UTC m=+0.035883440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:34 np0005535469 systemd[1]: libpod-6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b.scope: Deactivated successfully.
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.182577979 +0000 UTC m=+0.135665910 container attach 6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.183977295 +0000 UTC m=+0.137065216 container died 6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7cee1f51f4406ea0e5abf578d208548bd572ed89eb65f9ead36c06196291ee22-merged.mount: Deactivated successfully.
Nov 25 10:54:34 np0005535469 podman[96237]: 2025-11-25 15:54:34.233778518 +0000 UTC m=+0.186866449 container remove 6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brown, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:34 np0005535469 systemd[1]: libpod-conmon-6c3065d240dd2c4a2a20c04fd4d16388e1f5ca239f970d6b5f5265e9d215cb1b.scope: Deactivated successfully.
Nov 25 10:54:34 np0005535469 podman[96287]: 2025-11-25 15:54:34.43724113 +0000 UTC m=+0.089328707 container create 41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bassi, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:54:34 np0005535469 podman[96287]: 2025-11-25 15:54:34.369178669 +0000 UTC m=+0.021266276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:34 np0005535469 systemd[1]: Started libpod-conmon-41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe.scope.
Nov 25 10:54:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad630b22f865528e57128a111522de8ad8a67da3b2a971ac5127bd2eb17bbbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad630b22f865528e57128a111522de8ad8a67da3b2a971ac5127bd2eb17bbbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad630b22f865528e57128a111522de8ad8a67da3b2a971ac5127bd2eb17bbbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad630b22f865528e57128a111522de8ad8a67da3b2a971ac5127bd2eb17bbbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:34 np0005535469 podman[96287]: 2025-11-25 15:54:34.546751485 +0000 UTC m=+0.198839082 container init 41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 10:54:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 25 10:54:34 np0005535469 podman[96287]: 2025-11-25 15:54:34.558372709 +0000 UTC m=+0.210460326 container start 41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 10:54:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 10:54:34 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 25 10:54:34 np0005535469 podman[96287]: 2025-11-25 15:54:34.585347225 +0000 UTC m=+0.237434912 container attach 41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bassi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 10:54:34 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1893288992' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 10:54:34 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 25 10:54:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2842559614' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]: {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:    "0": [
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:        {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "devices": [
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "/dev/loop3"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            ],
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_name": "ceph_lv0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_size": "21470642176",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "name": "ceph_lv0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "tags": {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cluster_name": "ceph",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.crush_device_class": "",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.encrypted": "0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osd_id": "0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.type": "block",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.vdo": "0"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            },
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "type": "block",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "vg_name": "ceph_vg0"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:        }
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:    ],
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:    "1": [
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:        {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "devices": [
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "/dev/loop4"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            ],
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_name": "ceph_lv1",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_size": "21470642176",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "name": "ceph_lv1",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "tags": {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cluster_name": "ceph",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.crush_device_class": "",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.encrypted": "0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osd_id": "1",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.type": "block",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.vdo": "0"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            },
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "type": "block",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "vg_name": "ceph_vg1"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:        }
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:    ],
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:    "2": [
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:        {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "devices": [
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "/dev/loop5"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            ],
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_name": "ceph_lv2",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_size": "21470642176",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "name": "ceph_lv2",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "tags": {
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.cluster_name": "ceph",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.crush_device_class": "",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.encrypted": "0",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osd_id": "2",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.type": "block",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:                "ceph.vdo": "0"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            },
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "type": "block",
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:            "vg_name": "ceph_vg2"
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:        }
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]:    ]
Nov 25 10:54:35 np0005535469 suspicious_bassi[96324]: }
Nov 25 10:54:35 np0005535469 systemd[1]: libpod-41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe.scope: Deactivated successfully.
Nov 25 10:54:35 np0005535469 podman[96287]: 2025-11-25 15:54:35.349398659 +0000 UTC m=+1.001486276 container died 41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9ad630b22f865528e57128a111522de8ad8a67da3b2a971ac5127bd2eb17bbbf-merged.mount: Deactivated successfully.
Nov 25 10:54:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 25 10:54:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2842559614' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 10:54:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 10:54:35 np0005535469 kind_kapitsa[96254]: enabled application 'rbd' on pool 'vms'
Nov 25 10:54:35 np0005535469 systemd[1]: libpod-e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee.scope: Deactivated successfully.
Nov 25 10:54:35 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 25 10:54:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:36 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:36 np0005535469 podman[96287]: 2025-11-25 15:54:36.08647872 +0000 UTC m=+1.738566347 container remove 41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 10:54:36 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2842559614' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 10:54:36 np0005535469 podman[96223]: 2025-11-25 15:54:36.108705451 +0000 UTC m=+2.085794889 container died e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee (image=quay.io/ceph/ceph:v18, name=kind_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 10:54:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e36a85330c2fecc9353b9d5c8f12644be744728c306cb156bd45d895422983a9-merged.mount: Deactivated successfully.
Nov 25 10:54:36 np0005535469 systemd[1]: libpod-conmon-41060bd15d560222faf897d59eba4ee53b769ec122139a03b5c2b2fd73f79dfe.scope: Deactivated successfully.
Nov 25 10:54:36 np0005535469 podman[96223]: 2025-11-25 15:54:36.670669231 +0000 UTC m=+2.647758689 container remove e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee (image=quay.io/ceph/ceph:v18, name=kind_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 10:54:36 np0005535469 systemd[1]: libpod-conmon-e69a8b64385ad2a726f4837a96dcbb4d8de3c7f65e618d47fd9eb4ba9e091bee.scope: Deactivated successfully.
Nov 25 10:54:36 np0005535469 podman[96525]: 2025-11-25 15:54:36.877759348 +0000 UTC m=+0.027870150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:36 np0005535469 podman[96525]: 2025-11-25 15:54:36.984549951 +0000 UTC m=+0.134660723 container create f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 10:54:37 np0005535469 python3[96524]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:37 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2842559614' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 10:54:37 np0005535469 ceph-mon[74985]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:37 np0005535469 systemd[1]: Started libpod-conmon-f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4.scope.
Nov 25 10:54:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:37 np0005535469 podman[96540]: 2025-11-25 15:54:37.114528591 +0000 UTC m=+0.081872953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:37 np0005535469 podman[96540]: 2025-11-25 15:54:37.22304847 +0000 UTC m=+0.190392842 container create ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f (image=quay.io/ceph/ceph:v18, name=elegant_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 10:54:37 np0005535469 systemd[1]: Started libpod-conmon-ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f.scope.
Nov 25 10:54:37 np0005535469 podman[96525]: 2025-11-25 15:54:37.329962736 +0000 UTC m=+0.480073548 container init f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:37 np0005535469 podman[96525]: 2025-11-25 15:54:37.338187371 +0000 UTC m=+0.488298143 container start f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:37 np0005535469 pedantic_blackwell[96555]: 167 167
Nov 25 10:54:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:37 np0005535469 systemd[1]: libpod-f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4.scope: Deactivated successfully.
Nov 25 10:54:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134e4220265e8ea8a9b0b67a58c23795b9aeb9491983b75eee0131a60ea355c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134e4220265e8ea8a9b0b67a58c23795b9aeb9491983b75eee0131a60ea355c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:37 np0005535469 podman[96525]: 2025-11-25 15:54:37.368664508 +0000 UTC m=+0.518775280 container attach f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:54:37 np0005535469 podman[96525]: 2025-11-25 15:54:37.369153511 +0000 UTC m=+0.519264283 container died f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 10:54:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d6ebb8cdba0a87e95c8ca9891f93692d21014956d52ecc0945c3a9e4bed3c694-merged.mount: Deactivated successfully.
Nov 25 10:54:37 np0005535469 podman[96525]: 2025-11-25 15:54:37.818381432 +0000 UTC m=+0.968492214 container remove f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:37 np0005535469 systemd[1]: libpod-conmon-f9256f6306d45eee1f03845ddd208410525b4f5631a693a7d6939abf6c22d7b4.scope: Deactivated successfully.
Nov 25 10:54:37 np0005535469 podman[96540]: 2025-11-25 15:54:37.936583644 +0000 UTC m=+0.903928006 container init ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f (image=quay.io/ceph/ceph:v18, name=elegant_goldberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:37 np0005535469 podman[96540]: 2025-11-25 15:54:37.945421805 +0000 UTC m=+0.912766157 container start ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f (image=quay.io/ceph/ceph:v18, name=elegant_goldberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 25 10:54:38 np0005535469 podman[96540]: 2025-11-25 15:54:38.008823334 +0000 UTC m=+0.976167776 container attach ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f (image=quay.io/ceph/ceph:v18, name=elegant_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 10:54:38 np0005535469 podman[96587]: 2025-11-25 15:54:38.111511409 +0000 UTC m=+0.140486385 container create 974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:54:38 np0005535469 podman[96587]: 2025-11-25 15:54:38.027868901 +0000 UTC m=+0.056843927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:38 np0005535469 systemd[1]: Started libpod-conmon-974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e.scope.
Nov 25 10:54:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e230bc261250eec04d694f4dabd31c0e6ebc5694c6afe7d598dca147f59d8f66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e230bc261250eec04d694f4dabd31c0e6ebc5694c6afe7d598dca147f59d8f66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e230bc261250eec04d694f4dabd31c0e6ebc5694c6afe7d598dca147f59d8f66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e230bc261250eec04d694f4dabd31c0e6ebc5694c6afe7d598dca147f59d8f66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:38 np0005535469 podman[96587]: 2025-11-25 15:54:38.389911242 +0000 UTC m=+0.418886208 container init 974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:54:38 np0005535469 podman[96587]: 2025-11-25 15:54:38.400249752 +0000 UTC m=+0.429224698 container start 974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 10:54:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 25 10:54:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4149317910' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 10:54:38 np0005535469 podman[96587]: 2025-11-25 15:54:38.516568375 +0000 UTC m=+0.545543361 container attach 974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4149317910' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]: {
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "osd_id": 1,
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "type": "bluestore"
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:    },
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "osd_id": 2,
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "type": "bluestore"
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:    },
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "osd_id": 0,
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:        "type": "bluestore"
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]:    }
Nov 25 10:54:39 np0005535469 suspicious_heyrovsky[96604]: }
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4149317910' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 10:54:39 np0005535469 elegant_goldberg[96561]: enabled application 'rbd' on pool 'volumes'
Nov 25 10:54:39 np0005535469 systemd[1]: libpod-974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e.scope: Deactivated successfully.
Nov 25 10:54:39 np0005535469 podman[96587]: 2025-11-25 15:54:39.459203541 +0000 UTC m=+1.488178487 container died 974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 10:54:39 np0005535469 systemd[1]: libpod-974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e.scope: Consumed 1.067s CPU time.
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 25 10:54:39 np0005535469 systemd[1]: libpod-ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f.scope: Deactivated successfully.
Nov 25 10:54:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e230bc261250eec04d694f4dabd31c0e6ebc5694c6afe7d598dca147f59d8f66-merged.mount: Deactivated successfully.
Nov 25 10:54:39 np0005535469 podman[96587]: 2025-11-25 15:54:39.797778308 +0000 UTC m=+1.826753254 container remove 974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 10:54:39 np0005535469 podman[96540]: 2025-11-25 15:54:39.803508527 +0000 UTC m=+2.770852889 container died ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f (image=quay.io/ceph/ceph:v18, name=elegant_goldberg, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 10:54:39 np0005535469 systemd[1]: libpod-conmon-974ff348b83428cd5e62354f6ee9c9695c8d3397684ec9bb8a0f6564b5ac566e.scope: Deactivated successfully.
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-134e4220265e8ea8a9b0b67a58c23795b9aeb9491983b75eee0131a60ea355c0-merged.mount: Deactivated successfully.
Nov 25 10:54:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:54:39
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr']
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:39 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:40 np0005535469 podman[96659]: 2025-11-25 15:54:40.122714737 +0000 UTC m=+0.634436576 container remove ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f (image=quay.io/ceph/ceph:v18, name=elegant_goldberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:54:40 np0005535469 systemd[1]: libpod-conmon-ffb08739a48ee692840a840ad7738a1b3e5d378841c1fd36b03a842d0c53699f.scope: Deactivated successfully.
Nov 25 10:54:40 np0005535469 python3[96761]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4149317910' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:40 np0005535469 podman[96762]: 2025-11-25 15:54:40.475150686 +0000 UTC m=+0.049641880 container create 385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854 (image=quay.io/ceph/ceph:v18, name=gallant_chaplygin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 10:54:40 np0005535469 systemd[1]: Started libpod-conmon-385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854.scope.
Nov 25 10:54:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f7a8c4ee842a36f724ebf8d2111e7c0c80856cf1aa228cdb5caadb9ed5839b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f7a8c4ee842a36f724ebf8d2111e7c0c80856cf1aa228cdb5caadb9ed5839b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:40 np0005535469 podman[96762]: 2025-11-25 15:54:40.45273913 +0000 UTC m=+0.027230424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:40 np0005535469 podman[96762]: 2025-11-25 15:54:40.548231238 +0000 UTC m=+0.122722462 container init 385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854 (image=quay.io/ceph/ceph:v18, name=gallant_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:40 np0005535469 podman[96762]: 2025-11-25 15:54:40.553591968 +0000 UTC m=+0.128083162 container start 385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854 (image=quay.io/ceph/ceph:v18, name=gallant_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 10:54:40 np0005535469 podman[96762]: 2025-11-25 15:54:40.619466811 +0000 UTC m=+0.193958055 container attach 385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854 (image=quay.io/ceph/ceph:v18, name=gallant_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 25 10:54:40 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 3242fb50-5514-478c-9946-c1e2d9d6e668 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:54:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1778424123' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1778424123' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 10:54:41 np0005535469 gallant_chaplygin[96777]: enabled application 'rbd' on pool 'backups'
Nov 25 10:54:41 np0005535469 systemd[1]: libpod-385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854.scope: Deactivated successfully.
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 25 10:54:41 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 67ddfbe3-321c-4d18-bf87-78e11cb7455f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1778424123' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 10:54:41 np0005535469 podman[96802]: 2025-11-25 15:54:41.933925464 +0000 UTC m=+0.021368890 container died 385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854 (image=quay.io/ceph/ceph:v18, name=gallant_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 10:54:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-12f7a8c4ee842a36f724ebf8d2111e7c0c80856cf1aa228cdb5caadb9ed5839b-merged.mount: Deactivated successfully.
Nov 25 10:54:42 np0005535469 podman[96802]: 2025-11-25 15:54:42.879865277 +0000 UTC m=+0.967308703 container remove 385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854 (image=quay.io/ceph/ceph:v18, name=gallant_chaplygin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 25 10:54:42 np0005535469 systemd[1]: libpod-conmon-385c6ce62de496b28e7c35959402bd96b161365cf39f775ab22765ffbbd03854.scope: Deactivated successfully.
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 25 10:54:43 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 4f548923-af01-401b-b6d7-38920b3fb05b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:43 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32 pruub=15.877822876s) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active pruub 68.915748596s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:43 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32 pruub=15.877822876s) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown pruub 68.915748596s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1778424123' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:43 np0005535469 python3[96843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:43 np0005535469 podman[96844]: 2025-11-25 15:54:43.243088168 +0000 UTC m=+0.051742594 container create 672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:43 np0005535469 podman[96844]: 2025-11-25 15:54:43.211456131 +0000 UTC m=+0.020110537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:43 np0005535469 systemd[1]: Started libpod-conmon-672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b.scope.
Nov 25 10:54:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51171da0ddb3df31e1252ea1707e62ffeb9a50cb859cc497a914d134e8838415/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51171da0ddb3df31e1252ea1707e62ffeb9a50cb859cc497a914d134e8838415/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:43 np0005535469 podman[96844]: 2025-11-25 15:54:43.456432279 +0000 UTC m=+0.265086685 container init 672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 10:54:43 np0005535469 podman[96844]: 2025-11-25 15:54:43.464541041 +0000 UTC m=+0.273195427 container start 672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 10:54:43 np0005535469 podman[96844]: 2025-11-25 15:54:43.480984751 +0000 UTC m=+0.289639137 container attach 672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:54:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2450948697' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2450948697' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 10:54:44 np0005535469 affectionate_satoshi[96859]: enabled application 'rbd' on pool 'images'
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 25 10:54:44 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 0e69f76d-96e6-4f9d-9c73-af75a2b40a50 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 32 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=32 pruub=13.879760742s) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active pruub 50.352001190s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=32 pruub=13.879760742s) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown pruub 50.352001190s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1e( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.c( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1a( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.1( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.10( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.12( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.14( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 33 pg[2.e( empty local-lis/les=18/19 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.4( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.b( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.2( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.19( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 systemd[1]: libpod-672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b.scope: Deactivated successfully.
Nov 25 10:54:44 np0005535469 podman[96844]: 2025-11-25 15:54:44.066493147 +0000 UTC m=+0.875147533 container died 672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=32/33 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.b( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.4( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.2( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.19( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.1a( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-51171da0ddb3df31e1252ea1707e62ffeb9a50cb859cc497a914d134e8838415-merged.mount: Deactivated successfully.
Nov 25 10:54:44 np0005535469 podman[96844]: 2025-11-25 15:54:44.115410416 +0000 UTC m=+0.924064812 container remove 672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:44 np0005535469 systemd[1]: libpod-conmon-672cdddf5bfd97a6f2cee9df3a62032d808d7edffda56c2e3df65600ac90203b.scope: Deactivated successfully.
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2450948697' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2450948697' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 10:54:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:44 np0005535469 python3[96924]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:44 np0005535469 podman[96925]: 2025-11-25 15:54:44.443313133 +0000 UTC m=+0.050954534 container create 4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9 (image=quay.io/ceph/ceph:v18, name=beautiful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:54:44 np0005535469 systemd[1]: Started libpod-conmon-4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9.scope.
Nov 25 10:54:44 np0005535469 podman[96925]: 2025-11-25 15:54:44.41259268 +0000 UTC m=+0.020234071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3022a7665ddcfe3140f09726ea69481f23e0c64ea86892f9d90a4af4efa9156/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3022a7665ddcfe3140f09726ea69481f23e0c64ea86892f9d90a4af4efa9156/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:44 np0005535469 podman[96925]: 2025-11-25 15:54:44.551145324 +0000 UTC m=+0.158786705 container init 4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9 (image=quay.io/ceph/ceph:v18, name=beautiful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:44 np0005535469 podman[96925]: 2025-11-25 15:54:44.555666042 +0000 UTC m=+0.163307413 container start 4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9 (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 10:54:44 np0005535469 podman[96925]: 2025-11-25 15:54:44.564584995 +0000 UTC m=+0.172226366 container attach 4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9 (image=quay.io/ceph/ceph:v18, name=beautiful_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:44 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=8.572353363s) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 72.493965149s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:44 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=8.572353363s) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown pruub 72.493965149s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 25 10:54:45 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev fccd8038-48de-4980-897c-bbcc0d36b552 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=32/34 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=33/34 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 34 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [0] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/753874178' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 10:54:45 np0005535469 ceph-mgr[75280]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Nov 25 10:54:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v88: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/753874178' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 10:54:46 np0005535469 beautiful_payne[96941]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev c4ea04b4-6ca5-413f-8de5-edf5fcba4225 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=11.461922646s) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active pruub 76.888999939s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 3242fb50-5514-478c-9946-c1e2d9d6e668 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 3242fb50-5514-478c-9946-c1e2d9d6e668 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 67ddfbe3-321c-4d18-bf87-78e11cb7455f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 67ddfbe3-321c-4d18-bf87-78e11cb7455f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 25 10:54:46 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=11.461922646s) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown pruub 76.888999939s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 4f548923-af01-401b-b6d7-38920b3fb05b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 4f548923-af01-401b-b6d7-38920b3fb05b (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 0e69f76d-96e6-4f9d-9c73-af75a2b40a50 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 0e69f76d-96e6-4f9d-9c73-af75a2b40a50 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev fccd8038-48de-4980-897c-bbcc0d36b552 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event fccd8038-48de-4980-897c-bbcc0d36b552 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev c4ea04b4-6ca5-413f-8de5-edf5fcba4225 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 10:54:46 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event c4ea04b4-6ca5-413f-8de5-edf5fcba4225 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 25 10:54:46 np0005535469 systemd[1]: libpod-4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9.scope: Deactivated successfully.
Nov 25 10:54:46 np0005535469 podman[96925]: 2025-11-25 15:54:46.128884963 +0000 UTC m=+1.736526404 container died 4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9 (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/753874178' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/753874178' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e3022a7665ddcfe3140f09726ea69481f23e0c64ea86892f9d90a4af4efa9156-merged.mount: Deactivated successfully.
Nov 25 10:54:46 np0005535469 podman[96925]: 2025-11-25 15:54:46.181290274 +0000 UTC m=+1.788931645 container remove 4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9 (image=quay.io/ceph/ceph:v18, name=beautiful_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:46 np0005535469 systemd[1]: libpod-conmon-4ef3877b01974aba59f20bf9fcee3b986d295013833bb3c0de81aa1a6e5592c9.scope: Deactivated successfully.
Nov 25 10:54:46 np0005535469 python3[97003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:46 np0005535469 podman[97004]: 2025-11-25 15:54:46.491429957 +0000 UTC m=+0.044156916 container create 526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312 (image=quay.io/ceph/ceph:v18, name=beautiful_cartwright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:46 np0005535469 systemd[76549]: Starting Mark boot as successful...
Nov 25 10:54:46 np0005535469 systemd[1]: Started libpod-conmon-526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312.scope.
Nov 25 10:54:46 np0005535469 systemd[76549]: Finished Mark boot as successful.
Nov 25 10:54:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5158de08718e8bd975566ed31949424d75a4bb5c26dcfeafbb26d5076e35f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5158de08718e8bd975566ed31949424d75a4bb5c26dcfeafbb26d5076e35f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:46 np0005535469 podman[97004]: 2025-11-25 15:54:46.554573168 +0000 UTC m=+0.107300197 container init 526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312 (image=quay.io/ceph/ceph:v18, name=beautiful_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:46 np0005535469 podman[97004]: 2025-11-25 15:54:46.469303958 +0000 UTC m=+0.022030947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:46 np0005535469 podman[97004]: 2025-11-25 15:54:46.566214383 +0000 UTC m=+0.118941362 container start 526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312 (image=quay.io/ceph/ceph:v18, name=beautiful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:46 np0005535469 podman[97004]: 2025-11-25 15:54:46.569939831 +0000 UTC m=+0.122666770 container attach 526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312 (image=quay.io/ceph/ceph:v18, name=beautiful_cartwright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 10:54:46 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 25 10:54:46 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 25 10:54:46 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Nov 25 10:54:46 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2304917746' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2304917746' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 10:54:47 np0005535469 beautiful_cartwright[97020]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.14( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.16( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.11( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.10( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.13( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.18( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.9( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=35/36 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [0] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:47 np0005535469 systemd[1]: libpod-526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312.scope: Deactivated successfully.
Nov 25 10:54:47 np0005535469 podman[97004]: 2025-11-25 15:54:47.12730074 +0000 UTC m=+0.680027719 container died 526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312 (image=quay.io/ceph/ceph:v18, name=beautiful_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 10:54:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4c5158de08718e8bd975566ed31949424d75a4bb5c26dcfeafbb26d5076e35f5-merged.mount: Deactivated successfully.
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2304917746' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/2304917746' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 10:54:47 np0005535469 podman[97004]: 2025-11-25 15:54:47.188458139 +0000 UTC m=+0.741185078 container remove 526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312 (image=quay.io/ceph/ceph:v18, name=beautiful_cartwright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 10:54:47 np0005535469 systemd[1]: libpod-conmon-526e1d34ffda0bac873a7aa3edf8c1660d7329a5c693fb9631518107d2d60312.scope: Deactivated successfully.
Nov 25 10:54:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v91: 162 pgs: 1 peering, 93 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 25 10:54:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=14.973811150s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 55.578937531s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=14.973811150s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown pruub 55.578937531s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.10( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.11( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.14( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.15( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.12( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.13( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.18( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.19( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.17( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.16( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.4( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.5( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.2( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.3( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.8( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.9( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.6( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.7( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 37 pg[5.1( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 python3[97132]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:54:48 np0005535469 python3[97203]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764086087.9900825-36831-105048721604911/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:54:48 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.898608208s) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 68.584541321s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:48 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.898608208s) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown pruub 68.584541321s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 25 10:54:48 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 25 10:54:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 25 10:54:49 np0005535469 python3[97305]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:54:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 10:54:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:49 np0005535469 ceph-mon[74985]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 25 10:54:49 np0005535469 ceph-mon[74985]: Cluster is now healthy
Nov 25 10:54:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=35/38 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:49 np0005535469 python3[97380]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764086088.8779826-36845-161117706461078/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=01d90be7ad3da509efc6efa48d0a92e8ebba21ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:54:49 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 10:54:49 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 10:54:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:50 np0005535469 python3[97430]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.104285289 +0000 UTC m=+0.067992479 container create c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb (image=quay.io/ceph/ceph:v18, name=wonderful_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.063054331 +0000 UTC m=+0.026761531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:50 np0005535469 systemd[1]: Started libpod-conmon-c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb.scope.
Nov 25 10:54:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f28dbfb68a7e9d20a57c290d01bfd219d91a05a9c34a3d4e79854eab60ef3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f28dbfb68a7e9d20a57c290d01bfd219d91a05a9c34a3d4e79854eab60ef3f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f28dbfb68a7e9d20a57c290d01bfd219d91a05a9c34a3d4e79854eab60ef3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.230710305 +0000 UTC m=+0.194417515 container init c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb (image=quay.io/ceph/ceph:v18, name=wonderful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.238433898 +0000 UTC m=+0.202141088 container start c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb (image=quay.io/ceph/ceph:v18, name=wonderful_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.2411894 +0000 UTC m=+0.204896590 container attach c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb (image=quay.io/ceph/ceph:v18, name=wonderful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:54:50 np0005535469 ceph-mgr[75280]: [progress INFO root] Writing back 9 completed events
Nov 25 10:54:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 10:54:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 10:54:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/944287139' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 10:54:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/944287139' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 10:54:50 np0005535469 wonderful_kare[97446]: 
Nov 25 10:54:50 np0005535469 wonderful_kare[97446]: [global]
Nov 25 10:54:50 np0005535469 wonderful_kare[97446]: #011fsid = d82baeae-c742-50a4-b8f6-b5257c8a2c92
Nov 25 10:54:50 np0005535469 wonderful_kare[97446]: #011mon_host = 192.168.122.100
Nov 25 10:54:50 np0005535469 systemd[1]: libpod-c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb.scope: Deactivated successfully.
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.820578645 +0000 UTC m=+0.784285845 container died c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb (image=quay.io/ceph/ceph:v18, name=wonderful_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c0f28dbfb68a7e9d20a57c290d01bfd219d91a05a9c34a3d4e79854eab60ef3f-merged.mount: Deactivated successfully.
Nov 25 10:54:50 np0005535469 podman[97431]: 2025-11-25 15:54:50.940296557 +0000 UTC m=+0.904003747 container remove c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb (image=quay.io/ceph/ceph:v18, name=wonderful_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:50 np0005535469 systemd[1]: libpod-conmon-c40295b0e2b311ed12f992c75d88dbd568ce8d536571dcf6153f01f761bc0ebb.scope: Deactivated successfully.
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 25 10:54:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 25 10:54:51 np0005535469 python3[97608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:51 np0005535469 podman[97621]: 2025-11-25 15:54:51.343911055 +0000 UTC m=+0.091975017 container create 00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d (image=quay.io/ceph/ceph:v18, name=laughing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/944287139' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/944287139' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 10:54:51 np0005535469 podman[97621]: 2025-11-25 15:54:51.278424351 +0000 UTC m=+0.026488333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:51 np0005535469 systemd[1]: Started libpod-conmon-00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d.scope.
Nov 25 10:54:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8115d21b146cd71cfc45c47893caf4b2f919462f45f4e136f91948e8a89689/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8115d21b146cd71cfc45c47893caf4b2f919462f45f4e136f91948e8a89689/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8115d21b146cd71cfc45c47893caf4b2f919462f45f4e136f91948e8a89689/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:51 np0005535469 podman[97621]: 2025-11-25 15:54:51.521248483 +0000 UTC m=+0.269312465 container init 00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d (image=quay.io/ceph/ceph:v18, name=laughing_tharp, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 25 10:54:51 np0005535469 podman[97621]: 2025-11-25 15:54:51.528818241 +0000 UTC m=+0.276882223 container start 00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d (image=quay.io/ceph/ceph:v18, name=laughing_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:51 np0005535469 podman[97621]: 2025-11-25 15:54:51.593574825 +0000 UTC m=+0.341638787 container attach 00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d (image=quay.io/ceph/ceph:v18, name=laughing_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 10:54:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:54:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:51 np0005535469 podman[97700]: 2025-11-25 15:54:51.961994292 +0000 UTC m=+0.171851256 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:54:52 np0005535469 podman[97700]: 2025-11-25 15:54:52.062212343 +0000 UTC m=+0.272069267 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3749776931' entity='client.admin' 
Nov 25 10:54:52 np0005535469 laughing_tharp[97664]: set ssl_option
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 10:54:52 np0005535469 systemd[1]: libpod-00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d.scope: Deactivated successfully.
Nov 25 10:54:52 np0005535469 podman[97621]: 2025-11-25 15:54:52.452558004 +0000 UTC m=+1.200621966 container died 00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d (image=quay.io/ceph/ceph:v18, name=laughing_tharp, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:54:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.481195450s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522884369s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690629959s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.732349396s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690623283s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.732357025s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.481073380s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522815704s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.481093407s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522884369s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690562248s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.732349396s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.481009483s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522815704s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690546989s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.732357025s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480768204s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522850037s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480717659s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522808075s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480689049s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522785187s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480698586s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522808075s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480742455s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522850037s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480658531s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522785187s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690158844s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.732418060s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690142632s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.732418060s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480595589s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522914886s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480568886s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522914886s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696331978s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.738719940s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690067291s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.732452393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696314812s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.738719940s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.690035820s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.732452393s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480346680s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522842407s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696158409s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.738666534s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480325699s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522842407s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696139336s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.738666534s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696104050s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.738708496s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480099678s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522739410s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696078300s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.738708496s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.480075836s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522739410s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695982933s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.738662720s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695966721s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.738662720s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479691505s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522655487s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696037292s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739082336s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479669571s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522731781s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696012497s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739082336s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479647636s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522731781s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696021080s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739280701s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.696002007s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739280701s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479249001s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522575378s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479142189s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522483826s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479121208s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522483826s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479220390s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522575378s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695891380s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739307404s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695872307s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739307404s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695824623s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739326477s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.478820801s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522350311s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474684715s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.518222809s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.478803635s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522350311s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474666595s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.518222809s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695800781s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739326477s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695674896s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739318848s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474525452s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.518177032s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695658684s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739318848s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474508286s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.518177032s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695605278s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739330292s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474406242s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.518173218s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695568085s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739356995s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695579529s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739330292s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474391937s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.518173218s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695551872s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739356995s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.475190163s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.519077301s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695483208s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739387512s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.475171089s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.519077301s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695466995s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739387512s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474058151s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.518013000s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474026680s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.518013000s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474051476s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.518085480s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474093437s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.518146515s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474032402s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.518085480s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.474077225s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.518146515s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695407867s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739551544s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.478257179s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.522449493s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.473715782s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.517913818s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.478239059s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522449493s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695351601s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739551544s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.473695755s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.517913818s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.473523140s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 53.517894745s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695191383s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739582062s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.473505974s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.517894745s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695171356s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739582062s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695114136s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739540100s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695061684s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 57.739559174s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695042610s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739559174s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=35/38 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39 pruub=12.695058823s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 57.739540100s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=32/34 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39 pruub=8.479662895s) [1] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.522655487s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-db8115d21b146cd71cfc45c47893caf4b2f919462f45f4e136f91948e8a89689-merged.mount: Deactivated successfully.
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.19( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.18( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.1d( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.1c( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.f( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.2( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.1f( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.b( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.8( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.16( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.13( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[2.11( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.9( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.382018089s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.437705994s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.6( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.381989479s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.437705994s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.7( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.382061005s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.437911987s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.382042885s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.437911987s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392601967s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.448585510s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392580032s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.448585510s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.3( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.d( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.15( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.381500244s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.437644958s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.381757736s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.437911987s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.381479263s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.437644958s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.381703377s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.437911987s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.381366730s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.437675476s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.381346703s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.437675476s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392148018s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.448616028s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.380613327s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.437141418s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379952431s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436523438s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.380578995s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.437141418s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379931450s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436523438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392025948s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.448684692s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379795074s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436523438s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391969681s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.448684692s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379654884s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436431885s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379760742s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436523438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379633904s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436431885s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391757011s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.448684692s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379466057s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436431885s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391726494s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.448684692s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379446030s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436431885s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391955376s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.448616028s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379403114s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436500549s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.379385948s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436500549s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391856194s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.449005127s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391833305s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.449005127s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391685486s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.448966980s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391721725s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.449028015s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391666412s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.448966980s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391701698s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.449028015s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.378934860s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436363220s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391388893s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.448806763s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.378884315s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436332703s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.378911972s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436363220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.378860474s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436332703s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391345978s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.448806763s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391420364s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.449058533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391399384s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.449058533s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.378643990s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436325073s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.378624916s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436325073s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391824722s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.449562073s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391793251s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.449562073s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377804756s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.435600281s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377777100s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.435600281s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377697945s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.435577393s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377676964s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.435577393s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377615929s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.435569763s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377525330s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.435523987s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377590179s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.435569763s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392232895s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.450241089s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377502441s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.435523987s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.17( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392213821s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.450241089s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391439438s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.449569702s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[2.1b( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391420364s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.449569702s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377358437s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.435516357s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392274857s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.450439453s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377318382s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.435516357s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.392217636s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.450439453s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.375875473s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.434165955s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.375852585s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.434165955s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377868652s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.436370850s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.371156693s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 80.429687500s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.377846718s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.436370850s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.371135712s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.429687500s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391805649s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.450386047s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391663551s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.450317383s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391773224s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.450386047s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391767502s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.450447083s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391645432s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.450317383s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391749382s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.450447083s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391616821s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 82.450424194s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=10.391596794s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.450424194s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.585149765s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251396179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.335497856s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001800537s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.18( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.15( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.14( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.13( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.11( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.13( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.584359169s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251358032s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.584330559s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251358032s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.584332466s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251396179s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334537506s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001701355s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334666252s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001800537s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334408760s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001609802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334490776s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001701355s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334384918s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001609802s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.584095001s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251358032s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334508896s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001792908s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.584060669s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251358032s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.334486961s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001792908s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333964348s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001419067s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333939552s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001419067s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.584036827s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251541138s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583978653s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251541138s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333875656s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001502991s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333491325s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001197815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583836555s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251548767s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333462715s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001197815s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583811760s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251548767s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333377838s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001144409s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333329201s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001144409s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583646774s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251579285s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333328247s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001266479s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333599091s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001502991s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.333305359s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001266479s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583621025s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251579285s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583518982s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251586914s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583513260s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251640320s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583488464s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251586914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583416939s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251594543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583389282s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251594543s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583502769s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251747131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583479881s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251747131s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332735062s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001083374s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332620621s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001007080s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332728386s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001129150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332592010s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001007080s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332699776s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001129150s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332694054s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001083374s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583234787s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251754761s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582991600s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251739502s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332182884s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.000946045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582963943s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251739502s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332158089s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.000946045s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582916260s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251754761s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582881927s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251754761s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582918167s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251754761s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332051277s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.000968933s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.332027435s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.000968933s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582755089s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251777649s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582732201s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251777649s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.331988335s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 78.001091003s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.326522827s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 77.995712280s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.326495171s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.995712280s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.583483696s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251640320s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582443237s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251792908s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.326266289s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 77.995658875s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.326245308s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.995658875s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582379341s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251792908s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582262039s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251861572s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582172394s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251785278s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582179070s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251808167s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582221031s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251861572s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582136154s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251785278s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.582136154s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251808167s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.325932503s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 77.995613098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.325875282s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.995613098s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.325913429s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 77.995712280s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.325891495s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.995712280s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.581923485s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251846313s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.581904411s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 75.251846313s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.581871033s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251846313s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=12.581885338s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.251846313s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.11( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.16( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.324819565s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 77.995582581s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.324741364s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.995582581s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.f( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.324670792s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active pruub 77.995605469s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.331761360s) [2] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.001091003s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=32/33 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39 pruub=15.324606895s) [0] r=-1 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.995605469s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.1( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.17( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.14( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.f( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.15( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.e( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.17( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.11( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.12( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.1a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.f( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.c( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.12( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.10( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.8( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.d( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.3( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.1( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.6( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.9( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.a( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.e( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.5( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.7( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.1b( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.1b( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.2( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[3.1f( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.1d( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[4.1c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[6.1f( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.c( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.1( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.4( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.6( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.5( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.b( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 39 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.4( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.7( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[4.8( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.1e( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.1c( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 39 pg[6.1d( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 39 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 10:54:52 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 10:54:52 np0005535469 podman[97621]: 2025-11-25 15:54:52.906319743 +0000 UTC m=+1.654383705 container remove 00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d (image=quay.io/ceph/ceph:v18, name=laughing_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:54:52 np0005535469 systemd[1]: libpod-conmon-00be49e9d7e0cf5b9df7287af55049ef23255304316ecd3af7b0538e7d252a5d.scope: Deactivated successfully.
Nov 25 10:54:53 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 10:54:53 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 10:54:53 np0005535469 python3[97856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:54:53 np0005535469 podman[97886]: 2025-11-25 15:54:53.254921702 +0000 UTC m=+0.029699418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 25 10:54:53 np0005535469 podman[97886]: 2025-11-25 15:54:53.514391909 +0000 UTC m=+0.289169585 container create 51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a (image=quay.io/ceph/ceph:v18, name=relaxed_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 10:54:53 np0005535469 systemd[1]: Started libpod-conmon-51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a.scope.
Nov 25 10:54:53 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 10:54:53 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 10:54:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235fd32728ae61481975cd17d8932e82664f4ea755258d402e8c6b40ea08a0f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235fd32728ae61481975cd17d8932e82664f4ea755258d402e8c6b40ea08a0f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235fd32728ae61481975cd17d8932e82664f4ea755258d402e8c6b40ea08a0f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 10:54:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3749776931' entity='client.admin' 
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:54:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:54 np0005535469 podman[97886]: 2025-11-25 15:54:54.091115044 +0000 UTC m=+0.865892740 container init 51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a (image=quay.io/ceph/ceph:v18, name=relaxed_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:54:54 np0005535469 podman[97886]: 2025-11-25 15:54:54.103002276 +0000 UTC m=+0.877779952 container start 51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a (image=quay.io/ceph/ceph:v18, name=relaxed_ganguly, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 25 10:54:54 np0005535469 podman[97886]: 2025-11-25 15:54:54.535628872 +0000 UTC m=+1.310406558 container attach 51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a (image=quay.io/ceph/ceph:v18, name=relaxed_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6ad0202c-a708-4f7f-af18-e5584ec6be4e does not exist
Nov 25 10:54:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0584a0b8-8d42-45a7-b1b6-2aeb6c7937e1 does not exist
Nov 25 10:54:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c12750f1-edb1-40b5-84f6-a00f25b45fd8 does not exist
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=32/17 lis/c=32/32 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 25 10:54:54 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=39/40 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=39/40 n=0 ec=32/19 lis/c=32/32 les/c/f=33/33/0 sis=39) [2] r=0 lpr=39 pi=[32,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:54:54 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 25 10:54:55 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 10:54:55 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 10:54:55 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event fe806bde-7b8b-4971-acbb-27d802b6c121 (Global Recovery Event) in 10 seconds
Nov 25 10:54:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:55 np0005535469 relaxed_ganguly[97901]: Scheduled rgw.rgw update...
Nov 25 10:54:55 np0005535469 systemd[1]: libpod-51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a.scope: Deactivated successfully.
Nov 25 10:54:55 np0005535469 podman[97886]: 2025-11-25 15:54:55.371920188 +0000 UTC m=+2.146697874 container died 51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a (image=quay.io/ceph/ceph:v18, name=relaxed_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:55 np0005535469 podman[98066]: 2025-11-25 15:54:55.353458284 +0000 UTC m=+0.040501069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:55 np0005535469 podman[98066]: 2025-11-25 15:54:55.675978701 +0000 UTC m=+0.363021476 container create 7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_booth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:55 np0005535469 systemd[1]: Started libpod-conmon-7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd.scope.
Nov 25 10:54:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-235fd32728ae61481975cd17d8932e82664f4ea755258d402e8c6b40ea08a0f6-merged.mount: Deactivated successfully.
Nov 25 10:54:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:54:56 np0005535469 podman[97886]: 2025-11-25 15:54:56.265917743 +0000 UTC m=+3.040695459 container remove 51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a (image=quay.io/ceph/ceph:v18, name=relaxed_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:54:56 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 10:54:56 np0005535469 podman[98066]: 2025-11-25 15:54:56.455388528 +0000 UTC m=+1.142431323 container init 7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_booth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 10:54:56 np0005535469 podman[98066]: 2025-11-25 15:54:56.467387503 +0000 UTC m=+1.154430258 container start 7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 10:54:56 np0005535469 stupefied_booth[98097]: 167 167
Nov 25 10:54:56 np0005535469 systemd[1]: libpod-7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd.scope: Deactivated successfully.
Nov 25 10:54:56 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:54:56 np0005535469 ceph-mon[74985]: Saving service rgw.rgw spec with placement compute-0
Nov 25 10:54:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:56 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 10:54:56 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 10:54:56 np0005535469 podman[98066]: 2025-11-25 15:54:56.868587637 +0000 UTC m=+1.555630402 container attach 7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_booth, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:54:56 np0005535469 podman[98066]: 2025-11-25 15:54:56.870066985 +0000 UTC m=+1.557109760 container died 7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_booth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 10:54:57 np0005535469 python3[98189]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:54:57 np0005535469 python3[98260]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764086096.948988-36886-86613690917724/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:54:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 25 10:54:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 2 active+clean+scrubbing, 191 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:54:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 25 10:54:58 np0005535469 python3[98311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:54:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6ee42172f4e7dd71e7b13b80aa25c671737256058c5255991708ba5f49458040-merged.mount: Deactivated successfully.
Nov 25 10:54:58 np0005535469 podman[98066]: 2025-11-25 15:54:58.475434767 +0000 UTC m=+3.162477522 container remove 7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:54:58 np0005535469 podman[98312]: 2025-11-25 15:54:58.49770084 +0000 UTC m=+0.352225064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:54:58 np0005535469 podman[98312]: 2025-11-25 15:54:58.660315803 +0000 UTC m=+0.514839947 container create 371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56 (image=quay.io/ceph/ceph:v18, name=cranky_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:58 np0005535469 podman[98332]: 2025-11-25 15:54:58.671336182 +0000 UTC m=+0.057478034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:54:58 np0005535469 systemd[1]: libpod-conmon-7ab8d33b4aea66c320dc3e99a058ec908e65bee6b63ea8037e9f374d44300ffd.scope: Deactivated successfully.
Nov 25 10:54:58 np0005535469 systemd[1]: Started libpod-conmon-371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56.scope.
Nov 25 10:54:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af85f3e84e1734a440437a1efcb0d8d34584f8eb778b872e4be47f28596ce86f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af85f3e84e1734a440437a1efcb0d8d34584f8eb778b872e4be47f28596ce86f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af85f3e84e1734a440437a1efcb0d8d34584f8eb778b872e4be47f28596ce86f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 podman[98332]: 2025-11-25 15:54:58.842326355 +0000 UTC m=+0.228468207 container create d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:54:58 np0005535469 systemd[1]: Started libpod-conmon-d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858.scope.
Nov 25 10:54:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea3be2f0b9e4887bd17cd391498cc293853b7d82166b5943a817c7be783657/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea3be2f0b9e4887bd17cd391498cc293853b7d82166b5943a817c7be783657/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea3be2f0b9e4887bd17cd391498cc293853b7d82166b5943a817c7be783657/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea3be2f0b9e4887bd17cd391498cc293853b7d82166b5943a817c7be783657/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea3be2f0b9e4887bd17cd391498cc293853b7d82166b5943a817c7be783657/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:54:59 np0005535469 podman[98312]: 2025-11-25 15:54:59.065857312 +0000 UTC m=+0.920381476 container init 371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56 (image=quay.io/ceph/ceph:v18, name=cranky_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:59 np0005535469 podman[98312]: 2025-11-25 15:54:59.074358624 +0000 UTC m=+0.928882768 container start 371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56 (image=quay.io/ceph/ceph:v18, name=cranky_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:54:59 np0005535469 podman[98312]: 2025-11-25 15:54:59.122429662 +0000 UTC m=+0.976953826 container attach 371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56 (image=quay.io/ceph/ceph:v18, name=cranky_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:54:59 np0005535469 systemd[1]: libpod-conmon-51e8fdbf35842f85abee83c1790c5caa9194b73502d980cc43c576f4ef5c696a.scope: Deactivated successfully.
Nov 25 10:54:59 np0005535469 podman[98332]: 2025-11-25 15:54:59.174857003 +0000 UTC m=+0.560998825 container init d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:54:59 np0005535469 podman[98332]: 2025-11-25 15:54:59.181497207 +0000 UTC m=+0.567639009 container start d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:54:59 np0005535469 podman[98332]: 2025-11-25 15:54:59.189089405 +0000 UTC m=+0.575231217 container attach d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 10:54:59 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:54:59 np0005535469 ceph-mgr[75280]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 25 10:54:59 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0[74981]: 2025-11-25T15:54:59.655+0000 7ff20c348640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e2 new map
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T15:54:59.655953+0000#012modified#0112025-11-25T15:54:59.655997+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 25 10:54:59 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 10:54:59 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 10:54:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:54:59 np0005535469 ceph-mgr[75280]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 10:54:59 np0005535469 systemd[1]: libpod-371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56.scope: Deactivated successfully.
Nov 25 10:54:59 np0005535469 podman[98312]: 2025-11-25 15:54:59.765269057 +0000 UTC m=+1.619793201 container died 371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56 (image=quay.io/ceph/ceph:v18, name=cranky_bell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:59 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Nov 25 10:54:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 2 active+clean+scrubbing, 191 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-af85f3e84e1734a440437a1efcb0d8d34584f8eb778b872e4be47f28596ce86f-merged.mount: Deactivated successfully.
Nov 25 10:55:00 np0005535469 clever_nobel[98354]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:55:00 np0005535469 clever_nobel[98354]: --> relative data size: 1.0
Nov 25 10:55:00 np0005535469 clever_nobel[98354]: --> All data devices are unavailable
Nov 25 10:55:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Nov 25 10:55:00 np0005535469 systemd[1]: libpod-d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858.scope: Deactivated successfully.
Nov 25 10:55:00 np0005535469 ceph-mgr[75280]: [progress INFO root] Writing back 10 completed events
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:00 np0005535469 podman[98312]: 2025-11-25 15:55:00.68448072 +0000 UTC m=+2.539004905 container remove 371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56 (image=quay.io/ceph/ceph:v18, name=cranky_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 10:55:00 np0005535469 systemd[1]: libpod-conmon-371034bc875a844e8b989ae6494d50d112ebd52acbf56b00328f4a4923693a56.scope: Deactivated successfully.
Nov 25 10:55:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:00 np0005535469 podman[98332]: 2025-11-25 15:55:00.768631172 +0000 UTC m=+2.154772974 container died d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:55:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 10:55:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 10:55:01 np0005535469 python3[98454]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8bea3be2f0b9e4887bd17cd391498cc293853b7d82166b5943a817c7be783657-merged.mount: Deactivated successfully.
Nov 25 10:55:01 np0005535469 podman[98455]: 2025-11-25 15:55:01.107685581 +0000 UTC m=+0.081780020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 1 active+clean+scrubbing+deep, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:03 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 10:55:03 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 10:55:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 1 active+clean+scrubbing+deep, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 10:55:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 10:55:05 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 25 10:55:05 np0005535469 ceph-mon[74985]: Saving service mds.cephfs spec with placement compute-0
Nov 25 10:55:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:05 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 25 10:55:05 np0005535469 podman[98416]: 2025-11-25 15:55:05.549433795 +0000 UTC m=+5.287556829 container remove d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:55:05 np0005535469 systemd[1]: libpod-conmon-d8d2055d015464ef85069aa0b228f643dc0a38da8a09b3c293b15a4f3be60858.scope: Deactivated successfully.
Nov 25 10:55:05 np0005535469 podman[98455]: 2025-11-25 15:55:05.805937154 +0000 UTC m=+4.780031513 container create b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1 (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:05 np0005535469 systemd[1]: Started libpod-conmon-b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1.scope.
Nov 25 10:55:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ea489ea748432ed513e49213f47bc9969e544d81a0bee2fa8312ef6ebd097/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ea489ea748432ed513e49213f47bc9969e544d81a0bee2fa8312ef6ebd097/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ea489ea748432ed513e49213f47bc9969e544d81a0bee2fa8312ef6ebd097/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:06 np0005535469 podman[98455]: 2025-11-25 15:55:06.0535036 +0000 UTC m=+5.027598049 container init b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1 (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:06 np0005535469 podman[98455]: 2025-11-25 15:55:06.060193935 +0000 UTC m=+5.034288284 container start b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1 (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:06 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 25 10:55:06 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 25 10:55:06 np0005535469 podman[98455]: 2025-11-25 15:55:06.103584459 +0000 UTC m=+5.077678918 container attach b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1 (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:55:06 np0005535469 podman[98614]: 2025-11-25 15:55:06.27332267 +0000 UTC m=+0.043264352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:06 np0005535469 podman[98614]: 2025-11-25 15:55:06.397237381 +0000 UTC m=+0.167179023 container create 6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:06 np0005535469 systemd[1]: Started libpod-conmon-6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f.scope.
Nov 25 10:55:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:06 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 10:55:06 np0005535469 ceph-mgr[75280]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 10:55:06 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 10:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 10:55:06 np0005535469 podman[98614]: 2025-11-25 15:55:06.644752276 +0000 UTC m=+0.414693958 container init 6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:06 np0005535469 podman[98614]: 2025-11-25 15:55:06.65024557 +0000 UTC m=+0.420187212 container start 6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:55:06 np0005535469 determined_mclean[98649]: 167 167
Nov 25 10:55:06 np0005535469 systemd[1]: libpod-6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f.scope: Deactivated successfully.
Nov 25 10:55:06 np0005535469 podman[98614]: 2025-11-25 15:55:06.688297705 +0000 UTC m=+0.458239357 container attach 6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:55:06 np0005535469 podman[98614]: 2025-11-25 15:55:06.688628963 +0000 UTC m=+0.458570605 container died 6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 10:55:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:06 np0005535469 condescending_ritchie[98571]: Scheduled mds.cephfs update...
Nov 25 10:55:06 np0005535469 systemd[1]: libpod-b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1.scope: Deactivated successfully.
Nov 25 10:55:06 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 25 10:55:06 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 25 10:55:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-11a0015055e0e9c982504963847d5754d5babf90532886b4b1c70cde45a04a82-merged.mount: Deactivated successfully.
Nov 25 10:55:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:08 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 10:55:08 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 10:55:08 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Nov 25 10:55:09 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Nov 25 10:55:09 np0005535469 ceph-mon[74985]: Saving service mds.cephfs spec with placement compute-0
Nov 25 10:55:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:55:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 10:55:10 np0005535469 podman[98614]: 2025-11-25 15:55:10.234447534 +0000 UTC m=+4.004389206 container remove 6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:55:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 10:55:10 np0005535469 podman[98455]: 2025-11-25 15:55:10.251464428 +0000 UTC m=+9.225558807 container died b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1 (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 10:55:10 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 10:55:10 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 10:55:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 25 10:55:11 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 25 10:55:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-579ea489ea748432ed513e49213f47bc9969e544d81a0bee2fa8312ef6ebd097-merged.mount: Deactivated successfully.
Nov 25 10:55:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:12 np0005535469 podman[98455]: 2025-11-25 15:55:12.823429385 +0000 UTC m=+11.797523734 container remove b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1 (image=quay.io/ceph/ceph:v18, name=condescending_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:12 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 10:55:12 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 10:55:12 np0005535469 systemd[1]: libpod-conmon-6f323ac07e5b751f9c4ea9a282850d49842a72fe5aaf0f5c72d5111e25a13e8f.scope: Deactivated successfully.
Nov 25 10:55:13 np0005535469 podman[98688]: 2025-11-25 15:55:12.957338388 +0000 UTC m=+2.575263334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:13 np0005535469 podman[98688]: 2025-11-25 15:55:13.238933754 +0000 UTC m=+2.856858630 container create e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:55:13 np0005535469 python3[98782]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 10:55:13 np0005535469 systemd[1]: Started libpod-conmon-e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152.scope.
Nov 25 10:55:13 np0005535469 systemd[1]: libpod-conmon-b05cf0ef2f7f0a991b34f7c620a1c9fd526b7b27e347097b2fc8adcb5e2b0bd1.scope: Deactivated successfully.
Nov 25 10:55:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:13 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 25 10:55:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697c78bf081ac328c5d58f32cc6732b4bbd63fa6d7b8f300db9378f0c2b0cb11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 10:55:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697c78bf081ac328c5d58f32cc6732b4bbd63fa6d7b8f300db9378f0c2b0cb11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697c78bf081ac328c5d58f32cc6732b4bbd63fa6d7b8f300db9378f0c2b0cb11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697c78bf081ac328c5d58f32cc6732b4bbd63fa6d7b8f300db9378f0c2b0cb11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:13 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 25 10:55:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 10:55:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:13 np0005535469 python3[98856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764086113.2606883-36916-263766151841685/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=bf49f99c554355929b34667e0fb24fbf9e247d1b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:55:14 np0005535469 python3[98910]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:14 np0005535469 podman[98688]: 2025-11-25 15:55:14.713219368 +0000 UTC m=+4.331144264 container init e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:14 np0005535469 podman[98688]: 2025-11-25 15:55:14.726258768 +0000 UTC m=+4.344183634 container start e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 10:55:14 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 10:55:14 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]: {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:    "0": [
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:        {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "devices": [
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "/dev/loop3"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            ],
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_name": "ceph_lv0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_size": "21470642176",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "name": "ceph_lv0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "tags": {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.crush_device_class": "",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.encrypted": "0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osd_id": "0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.type": "block",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.vdo": "0"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            },
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "type": "block",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "vg_name": "ceph_vg0"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:        }
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:    ],
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:    "1": [
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:        {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "devices": [
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "/dev/loop4"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            ],
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_name": "ceph_lv1",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_size": "21470642176",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "name": "ceph_lv1",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "tags": {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.crush_device_class": "",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.encrypted": "0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osd_id": "1",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.type": "block",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.vdo": "0"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            },
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "type": "block",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "vg_name": "ceph_vg1"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:        }
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:    ],
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:    "2": [
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:        {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "devices": [
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "/dev/loop5"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            ],
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_name": "ceph_lv2",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_size": "21470642176",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "name": "ceph_lv2",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "tags": {
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.crush_device_class": "",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.encrypted": "0",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osd_id": "2",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.type": "block",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:                "ceph.vdo": "0"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            },
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "type": "block",
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:            "vg_name": "ceph_vg2"
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:        }
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]:    ]
Nov 25 10:55:15 np0005535469 frosty_hypatia[98858]: }
Nov 25 10:55:15 np0005535469 systemd[1]: libpod-e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152.scope: Deactivated successfully.
Nov 25 10:55:15 np0005535469 podman[98688]: 2025-11-25 15:55:15.777461396 +0000 UTC m=+5.395386292 container attach e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 10:55:15 np0005535469 podman[98688]: 2025-11-25 15:55:15.778232186 +0000 UTC m=+5.396157062 container died e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:15 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 25 10:55:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 25 10:55:16 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 25 10:55:16 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 25 10:55:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 10:55:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 10:55:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:17 np0005535469 podman[98911]: 2025-11-25 15:55:16.907222717 +0000 UTC m=+2.376132184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-697c78bf081ac328c5d58f32cc6732b4bbd63fa6d7b8f300db9378f0c2b0cb11-merged.mount: Deactivated successfully.
Nov 25 10:55:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 25 10:55:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 25 10:55:17 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 10:55:17 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 10:55:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:19 np0005535469 podman[98688]: 2025-11-25 15:55:19.2195435 +0000 UTC m=+8.837468376 container remove e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:55:19 np0005535469 podman[98911]: 2025-11-25 15:55:19.770084773 +0000 UTC m=+5.238994250 container create a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89 (image=quay.io/ceph/ceph:v18, name=nifty_perlman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 10:55:19 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Nov 25 10:55:19 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Nov 25 10:55:19 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 10:55:19 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 10:55:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:20 np0005535469 systemd[1]: Started libpod-conmon-a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89.scope.
Nov 25 10:55:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b36202e34a94691db95ec54134fd194f532de795bc8a3d934c15bdd2970801/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b36202e34a94691db95ec54134fd194f532de795bc8a3d934c15bdd2970801/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:20 np0005535469 systemd[1]: libpod-conmon-e5d4e42e28830f90bcc1fce576b7caf68620a065033450428db4cd3df982d152.scope: Deactivated successfully.
Nov 25 10:55:20 np0005535469 podman[98911]: 2025-11-25 15:55:20.751172153 +0000 UTC m=+6.220081600 container init a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89 (image=quay.io/ceph/ceph:v18, name=nifty_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 10:55:20 np0005535469 podman[98911]: 2025-11-25 15:55:20.757777703 +0000 UTC m=+6.226687140 container start a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89 (image=quay.io/ceph/ceph:v18, name=nifty_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 10:55:21 np0005535469 podman[98911]: 2025-11-25 15:55:21.375793458 +0000 UTC m=+6.844702895 container attach a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89 (image=quay.io/ceph/ceph:v18, name=nifty_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:21 np0005535469 podman[99103]: 2025-11-25 15:55:21.515775107 +0000 UTC m=+0.020363003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:21 np0005535469 podman[99103]: 2025-11-25 15:55:21.807487006 +0000 UTC m=+0.312074892 container create eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:21 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 10:55:21 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 10:55:21 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 10:55:21 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 10:55:21 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 10:55:21 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 10:55:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 25 10:55:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/29462530' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 10:55:21 np0005535469 systemd[1]: Started libpod-conmon-eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9.scope.
Nov 25 10:55:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/29462530' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 10:55:22 np0005535469 podman[99103]: 2025-11-25 15:55:22.252820433 +0000 UTC m=+0.757408349 container init eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:22 np0005535469 podman[99103]: 2025-11-25 15:55:22.262959539 +0000 UTC m=+0.767547455 container start eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 10:55:22 np0005535469 determined_mestorf[99121]: 167 167
Nov 25 10:55:22 np0005535469 systemd[1]: libpod-eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9.scope: Deactivated successfully.
Nov 25 10:55:22 np0005535469 conmon[99121]: conmon eac8b4b471383531c699 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9.scope/container/memory.events
Nov 25 10:55:22 np0005535469 systemd[1]: libpod-a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89.scope: Deactivated successfully.
Nov 25 10:55:22 np0005535469 podman[99103]: 2025-11-25 15:55:22.304516927 +0000 UTC m=+0.809104943 container attach eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mestorf, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:55:22 np0005535469 podman[99103]: 2025-11-25 15:55:22.305222586 +0000 UTC m=+0.809810502 container died eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:55:22 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 10:55:22 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/29462530' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 10:55:22 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/29462530' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 10:55:22 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 10:55:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2d4f9435e9cdda2777063e1472a1f2013616367c60eb003dc794a546e7739132-merged.mount: Deactivated successfully.
Nov 25 10:55:23 np0005535469 podman[99103]: 2025-11-25 15:55:23.432992677 +0000 UTC m=+1.937580553 container remove eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mestorf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:55:23 np0005535469 systemd[1]: libpod-conmon-eac8b4b471383531c6995408934d6dc690326a46bdd17a26390bb54cf5dc6fb9.scope: Deactivated successfully.
Nov 25 10:55:23 np0005535469 podman[98911]: 2025-11-25 15:55:23.50494758 +0000 UTC m=+8.973857057 container died a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89 (image=quay.io/ceph/ceph:v18, name=nifty_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:55:23 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Nov 25 10:55:23 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Nov 25 10:55:23 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 25 10:55:23 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 25 10:55:23 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 10:55:23 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 10:55:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-39b36202e34a94691db95ec54134fd194f532de795bc8a3d934c15bdd2970801-merged.mount: Deactivated successfully.
Nov 25 10:55:24 np0005535469 podman[98911]: 2025-11-25 15:55:24.394107126 +0000 UTC m=+9.863016563 container remove a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89 (image=quay.io/ceph/ceph:v18, name=nifty_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 10:55:24 np0005535469 podman[99156]: 2025-11-25 15:55:24.411383025 +0000 UTC m=+0.761160512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:24 np0005535469 podman[99156]: 2025-11-25 15:55:24.603274733 +0000 UTC m=+0.953052170 container create c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:24 np0005535469 systemd[1]: libpod-conmon-a2928ea50011c26ff20e3ff5b4fcf4f38c892d9a089dc9973c57c8f178291d89.scope: Deactivated successfully.
Nov 25 10:55:24 np0005535469 systemd[1]: Started libpod-conmon-c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd.scope.
Nov 25 10:55:24 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 10:55:24 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 10:55:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a720b4e92ea5592b5211ebb669f4307a150d0d1631313f06dc0add21c3352f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a720b4e92ea5592b5211ebb669f4307a150d0d1631313f06dc0add21c3352f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a720b4e92ea5592b5211ebb669f4307a150d0d1631313f06dc0add21c3352f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a720b4e92ea5592b5211ebb669f4307a150d0d1631313f06dc0add21c3352f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:25 np0005535469 python3[99201]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:25 np0005535469 podman[99156]: 2025-11-25 15:55:25.34144846 +0000 UTC m=+1.691225887 container init c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:25 np0005535469 podman[99156]: 2025-11-25 15:55:25.349444527 +0000 UTC m=+1.699221964 container start c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:25 np0005535469 podman[99156]: 2025-11-25 15:55:25.596272327 +0000 UTC m=+1.946049724 container attach c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:55:25 np0005535469 podman[99203]: 2025-11-25 15:55:25.642438979 +0000 UTC m=+0.463328697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:25 np0005535469 podman[99203]: 2025-11-25 15:55:25.834917274 +0000 UTC m=+0.655806972 container create 31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d (image=quay.io/ceph/ceph:v18, name=nice_germain, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v115: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:26 np0005535469 systemd[1]: Started libpod-conmon-31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d.scope.
Nov 25 10:55:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2909efc332e802326dcffcfba880c270b9f20e18aa0fa445197e4d7ff95aa522/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2909efc332e802326dcffcfba880c270b9f20e18aa0fa445197e4d7ff95aa522/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:26 np0005535469 podman[99203]: 2025-11-25 15:55:26.18367871 +0000 UTC m=+1.004568438 container init 31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d (image=quay.io/ceph/ceph:v18, name=nice_germain, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 10:55:26 np0005535469 podman[99203]: 2025-11-25 15:55:26.188970804 +0000 UTC m=+1.009860502 container start 31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d (image=quay.io/ceph/ceph:v18, name=nice_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:26 np0005535469 podman[99203]: 2025-11-25 15:55:26.225466675 +0000 UTC m=+1.046356393 container attach 31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d (image=quay.io/ceph/ceph:v18, name=nice_germain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]: {
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "osd_id": 1,
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "type": "bluestore"
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:    },
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "osd_id": 2,
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "type": "bluestore"
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:    },
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "osd_id": 0,
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:        "type": "bluestore"
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]:    }
Nov 25 10:55:26 np0005535469 peaceful_heisenberg[99173]: }
Nov 25 10:55:26 np0005535469 systemd[1]: libpod-c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd.scope: Deactivated successfully.
Nov 25 10:55:26 np0005535469 podman[99253]: 2025-11-25 15:55:26.380718249 +0000 UTC m=+0.033936122 container died c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:55:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8a720b4e92ea5592b5211ebb669f4307a150d0d1631313f06dc0add21c3352f6-merged.mount: Deactivated successfully.
Nov 25 10:55:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 10:55:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/692956388' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 10:55:26 np0005535469 nice_germain[99226]: 
Nov 25 10:55:26 np0005535469 nice_germain[99226]: {"fsid":"d82baeae-c742-50a4-b8f6-b5257c8a2c92","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":229,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1764086064,"num_in_osds":3,"osd_in_since":1764086009,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84131840,"bytes_avail":64327794688,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-25T15:55:25.960546+0000","services":{}},"progress_events":{}}
Nov 25 10:55:26 np0005535469 systemd[1]: libpod-31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d.scope: Deactivated successfully.
Nov 25 10:55:26 np0005535469 podman[99253]: 2025-11-25 15:55:26.951314387 +0000 UTC m=+0.604532220 container remove c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:55:26 np0005535469 systemd[1]: libpod-conmon-c03356253cef2b176356a2cca886ff3c3cc1b76bdb0db9792fb63afb768777bd.scope: Deactivated successfully.
Nov 25 10:55:26 np0005535469 podman[99203]: 2025-11-25 15:55:26.961084322 +0000 UTC m=+1.781974090 container died 31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d (image=quay.io/ceph/ceph:v18, name=nice_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:55:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:55:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2909efc332e802326dcffcfba880c270b9f20e18aa0fa445197e4d7ff95aa522-merged.mount: Deactivated successfully.
Nov 25 10:55:27 np0005535469 podman[99203]: 2025-11-25 15:55:27.714916144 +0000 UTC m=+2.535805842 container remove 31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d (image=quay.io/ceph/ceph:v18, name=nice_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:55:27 np0005535469 systemd[1]: libpod-conmon-31b9da28c16a061af3f311ed05d830e77d8fce2f670d161df243809802b2fd6d.scope: Deactivated successfully.
Nov 25 10:55:27 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:27 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:28 np0005535469 python3[99517]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:28 np0005535469 podman[99548]: 2025-11-25 15:55:28.52599567 +0000 UTC m=+0.489038665 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:28 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 25 10:55:28 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 25 10:55:29 np0005535469 podman[99562]: 2025-11-25 15:55:28.968324976 +0000 UTC m=+0.853696983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:29 np0005535469 podman[99562]: 2025-11-25 15:55:29.314057341 +0000 UTC m=+1.199429318 container create 295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40 (image=quay.io/ceph/ceph:v18, name=peaceful_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 10:55:29 np0005535469 systemd[1]: Started libpod-conmon-295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40.scope.
Nov 25 10:55:29 np0005535469 podman[99548]: 2025-11-25 15:55:29.640152792 +0000 UTC m=+1.603195797 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c4ee102558f3cdfc52d0fc501bba4eeefa25b4fb3f2a970b704c75d31a8749/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c4ee102558f3cdfc52d0fc501bba4eeefa25b4fb3f2a970b704c75d31a8749/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:29 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 10:55:29 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 10:55:29 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 10:55:29 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 10:55:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:30 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 10:55:30 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 10:55:30 np0005535469 podman[99562]: 2025-11-25 15:55:30.961192111 +0000 UTC m=+2.846564168 container init 295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40 (image=quay.io/ceph/ceph:v18, name=peaceful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 10:55:30 np0005535469 podman[99562]: 2025-11-25 15:55:30.972006244 +0000 UTC m=+2.857378221 container start 295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40 (image=quay.io/ceph/ceph:v18, name=peaceful_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:31 np0005535469 podman[99562]: 2025-11-25 15:55:31.619131369 +0000 UTC m=+3.504503416 container attach 295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40 (image=quay.io/ceph/ceph:v18, name=peaceful_mahavira, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 10:55:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 10:55:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401881086' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 10:55:31 np0005535469 peaceful_mahavira[99594]: 
Nov 25 10:55:31 np0005535469 peaceful_mahavira[99594]: {"epoch":1,"fsid":"d82baeae-c742-50a4-b8f6-b5257c8a2c92","modified":"2025-11-25T15:51:31.565973Z","created":"2025-11-25T15:51:31.565973Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 25 10:55:31 np0005535469 peaceful_mahavira[99594]: dumped monmap epoch 1
Nov 25 10:55:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:31 np0005535469 systemd[1]: libpod-295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40.scope: Deactivated successfully.
Nov 25 10:55:31 np0005535469 podman[99562]: 2025-11-25 15:55:31.96588528 +0000 UTC m=+3.851257257 container died 295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40 (image=quay.io/ceph/ceph:v18, name=peaceful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e0c4ee102558f3cdfc52d0fc501bba4eeefa25b4fb3f2a970b704c75d31a8749-merged.mount: Deactivated successfully.
Nov 25 10:55:32 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 10:55:32 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 10:55:33 np0005535469 podman[99562]: 2025-11-25 15:55:33.264997394 +0000 UTC m=+5.150369391 container remove 295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40 (image=quay.io/ceph/ceph:v18, name=peaceful_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:55:33 np0005535469 systemd[1]: libpod-conmon-295328168b63c7e0dcfc4019a8e56bd52b92e2757e9dd1782ad126b3339a4d40.scope: Deactivated successfully.
Nov 25 10:55:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:55:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:55:33 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 10:55:33 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 10:55:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v119: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:34 np0005535469 python3[99748]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:34 np0005535469 podman[99752]: 2025-11-25 15:55:34.081821675 +0000 UTC m=+0.043427790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:34 np0005535469 podman[99752]: 2025-11-25 15:55:34.350248552 +0000 UTC m=+0.311854597 container create 06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0 (image=quay.io/ceph/ceph:v18, name=hopeful_stonebraker, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:55:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:34 np0005535469 systemd[1]: Started libpod-conmon-06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0.scope.
Nov 25 10:55:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb9e7f627408f92b84286ef3b06cf5c43cb77db1eed89821ad599ec70f13ed8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb9e7f627408f92b84286ef3b06cf5c43cb77db1eed89821ad599ec70f13ed8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:34 np0005535469 podman[99752]: 2025-11-25 15:55:34.864432458 +0000 UTC m=+0.826038503 container init 06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0 (image=quay.io/ceph/ceph:v18, name=hopeful_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 10:55:34 np0005535469 podman[99752]: 2025-11-25 15:55:34.870894823 +0000 UTC m=+0.832500898 container start 06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0 (image=quay.io/ceph/ceph:v18, name=hopeful_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:34 np0005535469 podman[99752]: 2025-11-25 15:55:34.907887918 +0000 UTC m=+0.869493953 container attach 06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0 (image=quay.io/ceph/ceph:v18, name=hopeful_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 42d29e86-c8f7-4e3f-8b30-bff73155bed0 does not exist
Nov 25 10:55:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4aac3d10-a8f3-412d-8621-8c7386cbabaa does not exist
Nov 25 10:55:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6ec36093-4eda-4146-8adf-f14f8b879e0f does not exist
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1121959833' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 10:55:35 np0005535469 hopeful_stonebraker[99877]: [client.openstack]
Nov 25 10:55:35 np0005535469 hopeful_stonebraker[99877]: #011key = AQBp0CVpAAAAABAAPqzP0LDiXSzlS8QYNEfjuA==
Nov 25 10:55:35 np0005535469 hopeful_stonebraker[99877]: #011caps mgr = "allow *"
Nov 25 10:55:35 np0005535469 hopeful_stonebraker[99877]: #011caps mon = "profile rbd"
Nov 25 10:55:35 np0005535469 hopeful_stonebraker[99877]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 25 10:55:35 np0005535469 systemd[1]: libpod-06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0.scope: Deactivated successfully.
Nov 25 10:55:35 np0005535469 podman[99752]: 2025-11-25 15:55:35.853481174 +0000 UTC m=+1.815087219 container died 06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0 (image=quay.io/ceph/ceph:v18, name=hopeful_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:55:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v120: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8cb9e7f627408f92b84286ef3b06cf5c43cb77db1eed89821ad599ec70f13ed8-merged.mount: Deactivated successfully.
Nov 25 10:55:36 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 10:55:36 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 10:55:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:37 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 25 10:55:37 np0005535469 podman[99752]: 2025-11-25 15:55:37.822966114 +0000 UTC m=+3.784572179 container remove 06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0 (image=quay.io/ceph/ceph:v18, name=hopeful_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:37 np0005535469 systemd[1]: libpod-conmon-06b28a1cd9e05748ad40ca463d7ba47c7d974f147f2be90ed50761dec01733d0.scope: Deactivated successfully.
Nov 25 10:55:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v121: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:38 np0005535469 podman[100073]: 2025-11-25 15:55:38.070051841 +0000 UTC m=+0.030722245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:38 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 25 10:55:38 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/1121959833' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 10:55:38 np0005535469 podman[100073]: 2025-11-25 15:55:38.643507466 +0000 UTC m=+0.604177780 container create ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 10:55:38 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 10:55:38 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 10:55:39 np0005535469 systemd[1]: Started libpod-conmon-ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66.scope.
Nov 25 10:55:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:39 np0005535469 ansible-async_wrapper.py[100242]: Invoked with j341060345477 30 /home/zuul/.ansible/tmp/ansible-tmp-1764086139.0384743-36988-251904887599446/AnsiballZ_command.py _
Nov 25 10:55:39 np0005535469 ansible-async_wrapper.py[100245]: Starting module and watcher
Nov 25 10:55:39 np0005535469 ansible-async_wrapper.py[100245]: Start watching 100246 (30)
Nov 25 10:55:39 np0005535469 ansible-async_wrapper.py[100246]: Start module (100246)
Nov 25 10:55:39 np0005535469 ansible-async_wrapper.py[100242]: Return async_wrapper task started.
Nov 25 10:55:39 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 10:55:39 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 25 10:55:39 np0005535469 podman[100073]: 2025-11-25 15:55:39.782896993 +0000 UTC m=+1.743567397 container init ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:55:39 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 10:55:39 np0005535469 podman[100073]: 2025-11-25 15:55:39.79200996 +0000 UTC m=+1.752680284 container start ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:39 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 25 10:55:39 np0005535469 confident_wright[100144]: 167 167
Nov 25 10:55:39 np0005535469 systemd[1]: libpod-ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66.scope: Deactivated successfully.
Nov 25 10:55:39 np0005535469 python3[100247]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:55:39
Nov 25 10:55:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:55:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:55:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', 'vms']
Nov 25 10:55:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 10:55:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v122: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:55:40 np0005535469 podman[100073]: 2025-11-25 15:55:40.082726441 +0000 UTC m=+2.043396805 container attach ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 10:55:40 np0005535469 podman[100073]: 2025-11-25 15:55:40.083160233 +0000 UTC m=+2.043830557 container died ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:55:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5b6362f3f1da7ce26f139e451ead6c3c696c3ed04773a32dd886a7cd23acd005-merged.mount: Deactivated successfully.
Nov 25 10:55:40 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 25 10:55:40 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 25 10:55:40 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 10:55:40 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 10:55:40 np0005535469 podman[100073]: 2025-11-25 15:55:40.830872078 +0000 UTC m=+2.791542432 container remove ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wright, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:40 np0005535469 python3[100319]: ansible-ansible.legacy.async_status Invoked with jid=j341060345477.100242 mode=status _async_dir=/root/.ansible_async
Nov 25 10:55:40 np0005535469 podman[100261]: 2025-11-25 15:55:40.848569978 +0000 UTC m=+0.962432144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:41 np0005535469 podman[100261]: 2025-11-25 15:55:41.142406594 +0000 UTC m=+1.256268730 container create 0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723 (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:41 np0005535469 systemd[1]: Started libpod-conmon-0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723.scope.
Nov 25 10:55:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e310613e533278ae1061b32457b2ad70526d3cbdf36541d7c6e6f744a2844fa4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e310613e533278ae1061b32457b2ad70526d3cbdf36541d7c6e6f744a2844fa4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:41 np0005535469 systemd[1]: libpod-conmon-ea9d9d4c14d1c1a8971e0ae14b90c7c5531761e68b4c85ac65b3754385c41e66.scope: Deactivated successfully.
Nov 25 10:55:41 np0005535469 podman[100330]: 2025-11-25 15:55:41.384395883 +0000 UTC m=+0.450576581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:41 np0005535469 podman[100261]: 2025-11-25 15:55:41.744264191 +0000 UTC m=+1.858126357 container init 0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723 (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:55:41 np0005535469 podman[100261]: 2025-11-25 15:55:41.749957156 +0000 UTC m=+1.863819302 container start 0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723 (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:41 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 10:55:41 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 10:55:41 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 10:55:41 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 10:55:41 np0005535469 podman[100261]: 2025-11-25 15:55:41.845503359 +0000 UTC m=+1.959365585 container attach 0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723 (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v123: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:42 np0005535469 podman[100330]: 2025-11-25 15:55:42.096849262 +0000 UTC m=+1.163029940 container create bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_darwin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:42 np0005535469 python3[100399]: ansible-ansible.legacy.async_status Invoked with jid=j341060345477.100242 mode=status _async_dir=/root/.ansible_async
Nov 25 10:55:42 np0005535469 systemd[1]: Started libpod-conmon-bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532.scope.
Nov 25 10:55:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f6f8dfd13e8da9967eac7e244f040be02dd5e66e846057733ef456f42475e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f6f8dfd13e8da9967eac7e244f040be02dd5e66e846057733ef456f42475e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f6f8dfd13e8da9967eac7e244f040be02dd5e66e846057733ef456f42475e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f6f8dfd13e8da9967eac7e244f040be02dd5e66e846057733ef456f42475e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f6f8dfd13e8da9967eac7e244f040be02dd5e66e846057733ef456f42475e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:42 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 10:55:42 np0005535469 romantic_lovelace[100345]: 
Nov 25 10:55:42 np0005535469 romantic_lovelace[100345]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 10:55:42 np0005535469 podman[100330]: 2025-11-25 15:55:42.338502781 +0000 UTC m=+1.404683469 container init bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_darwin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:42 np0005535469 podman[100330]: 2025-11-25 15:55:42.351617257 +0000 UTC m=+1.417797915 container start bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_darwin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:55:42 np0005535469 systemd[1]: libpod-0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723.scope: Deactivated successfully.
Nov 25 10:55:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:42 np0005535469 podman[100330]: 2025-11-25 15:55:42.481484092 +0000 UTC m=+1.547664750 container attach bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 10:55:42 np0005535469 podman[100261]: 2025-11-25 15:55:42.484772361 +0000 UTC m=+2.598634517 container died 0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723 (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e310613e533278ae1061b32457b2ad70526d3cbdf36541d7c6e6f744a2844fa4-merged.mount: Deactivated successfully.
Nov 25 10:55:42 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 25 10:55:42 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 25 10:55:43 np0005535469 podman[100261]: 2025-11-25 15:55:43.196468339 +0000 UTC m=+3.310330475 container remove 0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723 (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:55:43 np0005535469 systemd[1]: libpod-conmon-0b37bc71097a55222e1db8b3eb2e1f7b4cbbe49667484d77b111916188f4f723.scope: Deactivated successfully.
Nov 25 10:55:43 np0005535469 ansible-async_wrapper.py[100246]: Module complete (100246)
Nov 25 10:55:43 np0005535469 vigilant_darwin[100420]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:55:43 np0005535469 vigilant_darwin[100420]: --> relative data size: 1.0
Nov 25 10:55:43 np0005535469 vigilant_darwin[100420]: --> All data devices are unavailable
Nov 25 10:55:43 np0005535469 systemd[1]: libpod-bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532.scope: Deactivated successfully.
Nov 25 10:55:43 np0005535469 podman[100330]: 2025-11-25 15:55:43.491689253 +0000 UTC m=+2.557869911 container died bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:43 np0005535469 systemd[1]: libpod-bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532.scope: Consumed 1.066s CPU time.
Nov 25 10:55:43 np0005535469 python3[100507]: ansible-ansible.legacy.async_status Invoked with jid=j341060345477.100242 mode=status _async_dir=/root/.ansible_async
Nov 25 10:55:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d2f6f8dfd13e8da9967eac7e244f040be02dd5e66e846057733ef456f42475e4-merged.mount: Deactivated successfully.
Nov 25 10:55:43 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 10:55:43 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 10:55:43 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 10:55:43 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 10:55:43 np0005535469 python3[100574]: ansible-ansible.legacy.async_status Invoked with jid=j341060345477.100242 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 10:55:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v124: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:44 np0005535469 podman[100330]: 2025-11-25 15:55:44.169780389 +0000 UTC m=+3.235961057 container remove bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_darwin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:44 np0005535469 systemd[1]: libpod-conmon-bc758d50a3f556ccc54256533614c4fca55ddbd9a676ee4447e03690f64bd532.scope: Deactivated successfully.
Nov 25 10:55:44 np0005535469 python3[100653]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:44 np0005535469 podman[100701]: 2025-11-25 15:55:44.614084808 +0000 UTC m=+0.089467799 container create 52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a (image=quay.io/ceph/ceph:v18, name=vibrant_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:55:44 np0005535469 ansible-async_wrapper.py[100245]: Done in kid B.
Nov 25 10:55:44 np0005535469 podman[100701]: 2025-11-25 15:55:44.551047288 +0000 UTC m=+0.026430289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:44 np0005535469 systemd[1]: Started libpod-conmon-52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a.scope.
Nov 25 10:55:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8496f807aa8ff2a7fbf59d7e6224a23c52b8fe7bcac664933628867478eb7b20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8496f807aa8ff2a7fbf59d7e6224a23c52b8fe7bcac664933628867478eb7b20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:44 np0005535469 podman[100701]: 2025-11-25 15:55:44.807402865 +0000 UTC m=+0.282785886 container init 52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a (image=quay.io/ceph/ceph:v18, name=vibrant_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:44 np0005535469 podman[100701]: 2025-11-25 15:55:44.819062593 +0000 UTC m=+0.294445584 container start 52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a (image=quay.io/ceph/ceph:v18, name=vibrant_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:55:44 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 25 10:55:44 np0005535469 podman[100701]: 2025-11-25 15:55:44.85836477 +0000 UTC m=+0.333747751 container attach 52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a (image=quay.io/ceph/ceph:v18, name=vibrant_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:55:44 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:44.97002466 +0000 UTC m=+0.023488709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:45.141565357 +0000 UTC m=+0.195029376 container create 04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 10:55:45 np0005535469 systemd[1]: Started libpod-conmon-04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356.scope.
Nov 25 10:55:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:45.392911768 +0000 UTC m=+0.446375777 container init 04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_greider, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:45.398360726 +0000 UTC m=+0.451824725 container start 04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_greider, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:55:45 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 10:55:45 np0005535469 distracted_greider[100794]: 167 167
Nov 25 10:55:45 np0005535469 vibrant_villani[100732]: 
Nov 25 10:55:45 np0005535469 systemd[1]: libpod-04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356.scope: Deactivated successfully.
Nov 25 10:55:45 np0005535469 vibrant_villani[100732]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 10:55:45 np0005535469 systemd[1]: libpod-52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a.scope: Deactivated successfully.
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:45.470909446 +0000 UTC m=+0.524373475 container attach 04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:45.473294481 +0000 UTC m=+0.526758490 container died 04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 10:55:45 np0005535469 podman[100701]: 2025-11-25 15:55:45.506721118 +0000 UTC m=+0.982104109 container died 52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a (image=quay.io/ceph/ceph:v18, name=vibrant_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:55:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f4c3123b7ae707cff6e5531638d5453c2c46767da1667fbdd2fd19da98f4f327-merged.mount: Deactivated successfully.
Nov 25 10:55:45 np0005535469 podman[100759]: 2025-11-25 15:55:45.557232089 +0000 UTC m=+0.610696098 container remove 04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_greider, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8496f807aa8ff2a7fbf59d7e6224a23c52b8fe7bcac664933628867478eb7b20-merged.mount: Deactivated successfully.
Nov 25 10:55:45 np0005535469 podman[100701]: 2025-11-25 15:55:45.658930559 +0000 UTC m=+1.134313540 container remove 52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a (image=quay.io/ceph/ceph:v18, name=vibrant_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 10:55:45 np0005535469 systemd[1]: libpod-conmon-52832ff79b03453a1811b27d8ec62f129575c9399e717e951b91a3511d0d642a.scope: Deactivated successfully.
Nov 25 10:55:45 np0005535469 systemd[1]: libpod-conmon-04a1b946498dea91c62ced586c403aaa3836cc0a12fb2ffa826007d0e3764356.scope: Deactivated successfully.
Nov 25 10:55:45 np0005535469 podman[100833]: 2025-11-25 15:55:45.704494907 +0000 UTC m=+0.042458855 container create 94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:45 np0005535469 systemd[1]: Started libpod-conmon-94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c.scope.
Nov 25 10:55:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ccfb6eab258dd8597e38c3ae94995a7b2bced29a2eee0a132bc852f20d1f37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ccfb6eab258dd8597e38c3ae94995a7b2bced29a2eee0a132bc852f20d1f37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ccfb6eab258dd8597e38c3ae94995a7b2bced29a2eee0a132bc852f20d1f37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ccfb6eab258dd8597e38c3ae94995a7b2bced29a2eee0a132bc852f20d1f37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:45 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 25 10:55:45 np0005535469 podman[100833]: 2025-11-25 15:55:45.685572893 +0000 UTC m=+0.023536861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:45 np0005535469 podman[100833]: 2025-11-25 15:55:45.787429307 +0000 UTC m=+0.125393275 container init 94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:45 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 25 10:55:45 np0005535469 podman[100833]: 2025-11-25 15:55:45.796246216 +0000 UTC m=+0.134210164 container start 94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 10:55:45 np0005535469 podman[100833]: 2025-11-25 15:55:45.800383899 +0000 UTC m=+0.138347847 container attach 94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:55:45 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 10:55:45 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 10:55:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v125: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 32)
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:55:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:55:46 np0005535469 objective_euclid[100847]: {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:    "0": [
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:        {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "devices": [
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "/dev/loop3"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            ],
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_name": "ceph_lv0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_size": "21470642176",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "name": "ceph_lv0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "tags": {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.crush_device_class": "",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.encrypted": "0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osd_id": "0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.type": "block",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.vdo": "0"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            },
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "type": "block",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "vg_name": "ceph_vg0"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:        }
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:    ],
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:    "1": [
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:        {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "devices": [
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "/dev/loop4"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            ],
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_name": "ceph_lv1",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_size": "21470642176",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "name": "ceph_lv1",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "tags": {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.crush_device_class": "",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.encrypted": "0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osd_id": "1",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.type": "block",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.vdo": "0"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            },
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "type": "block",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "vg_name": "ceph_vg1"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:        }
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:    ],
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:    "2": [
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:        {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "devices": [
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "/dev/loop5"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            ],
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_name": "ceph_lv2",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_size": "21470642176",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "name": "ceph_lv2",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "tags": {
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.crush_device_class": "",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.encrypted": "0",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osd_id": "2",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.type": "block",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:                "ceph.vdo": "0"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            },
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "type": "block",
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:            "vg_name": "ceph_vg2"
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:        }
Nov 25 10:55:46 np0005535469 objective_euclid[100847]:    ]
Nov 25 10:55:46 np0005535469 objective_euclid[100847]: }
Nov 25 10:55:46 np0005535469 systemd[1]: libpod-94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c.scope: Deactivated successfully.
Nov 25 10:55:46 np0005535469 podman[100833]: 2025-11-25 15:55:46.583979359 +0000 UTC m=+0.921943337 container died 94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-45ccfb6eab258dd8597e38c3ae94995a7b2bced29a2eee0a132bc852f20d1f37-merged.mount: Deactivated successfully.
Nov 25 10:55:46 np0005535469 python3[100879]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:46 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 25 10:55:46 np0005535469 podman[100833]: 2025-11-25 15:55:46.683401517 +0000 UTC m=+1.021365465 container remove 94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:46 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 25 10:55:46 np0005535469 systemd[1]: libpod-conmon-94fe8af626c76f9e515ce45b5bd3a6d78d335c409481387832daca924fd52d5c.scope: Deactivated successfully.
Nov 25 10:55:46 np0005535469 podman[100892]: 2025-11-25 15:55:46.719098516 +0000 UTC m=+0.068602723 container create 3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd (image=quay.io/ceph/ceph:v18, name=sleepy_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 10:55:46 np0005535469 podman[100892]: 2025-11-25 15:55:46.698915509 +0000 UTC m=+0.048419726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:46 np0005535469 systemd[1]: Started libpod-conmon-3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd.scope.
Nov 25 10:55:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3522156b30741ea63e55c348e6d1b7a61a88e238be29f3c64e5dee90088771/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3522156b30741ea63e55c348e6d1b7a61a88e238be29f3c64e5dee90088771/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:46 np0005535469 podman[100892]: 2025-11-25 15:55:46.85148489 +0000 UTC m=+0.200989117 container init 3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd (image=quay.io/ceph/ceph:v18, name=sleepy_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:46 np0005535469 podman[100892]: 2025-11-25 15:55:46.859446476 +0000 UTC m=+0.208950673 container start 3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd (image=quay.io/ceph/ceph:v18, name=sleepy_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 10:55:46 np0005535469 podman[100892]: 2025-11-25 15:55:46.867820183 +0000 UTC m=+0.217324390 container attach 3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd (image=quay.io/ceph/ceph:v18, name=sleepy_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.313376297 +0000 UTC m=+0.055087326 container create e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:55:47 np0005535469 systemd[1]: Started libpod-conmon-e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef.scope.
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.283772193 +0000 UTC m=+0.025483292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.406152855 +0000 UTC m=+0.147863894 container init e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:47 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.418326005 +0000 UTC m=+0.160037024 container start e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:47 np0005535469 sleepy_leakey[100931]: 
Nov 25 10:55:47 np0005535469 sleepy_leakey[100931]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 25 10:55:47 np0005535469 youthful_faraday[101086]: 167 167
Nov 25 10:55:47 np0005535469 systemd[1]: libpod-e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef.scope: Deactivated successfully.
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.422286723 +0000 UTC m=+0.163997732 container attach e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.422778527 +0000 UTC m=+0.164489526 container died e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 10:55:47 np0005535469 systemd[1]: libpod-3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd.scope: Deactivated successfully.
Nov 25 10:55:47 np0005535469 podman[100892]: 2025-11-25 15:55:47.453229313 +0000 UTC m=+0.802733550 container died 3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd (image=quay.io/ceph/ceph:v18, name=sleepy_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e4174ce1e3de514754bd9e64f45864e30cdd37ed933de601c92b20f9c826dfd1-merged.mount: Deactivated successfully.
Nov 25 10:55:47 np0005535469 podman[101070]: 2025-11-25 15:55:47.486227429 +0000 UTC m=+0.227938448 container remove e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 10:55:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8f3522156b30741ea63e55c348e6d1b7a61a88e238be29f3c64e5dee90088771-merged.mount: Deactivated successfully.
Nov 25 10:55:47 np0005535469 podman[100892]: 2025-11-25 15:55:47.532008302 +0000 UTC m=+0.881512509 container remove 3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd (image=quay.io/ceph/ceph:v18, name=sleepy_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:55:47 np0005535469 systemd[1]: libpod-conmon-e1568615912fa3a22817a63cb394d5b0e7d160e9adc233b6af8f4ad30569bdef.scope: Deactivated successfully.
Nov 25 10:55:47 np0005535469 systemd[1]: libpod-conmon-3fe54d76fadb454a8832d5dc528582b4dea8698362839f7e269108d3946698fd.scope: Deactivated successfully.
Nov 25 10:55:47 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts
Nov 25 10:55:47 np0005535469 podman[101121]: 2025-11-25 15:55:47.668566918 +0000 UTC m=+0.051575591 container create 3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:47 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.12 deep-scrub ok
Nov 25 10:55:47 np0005535469 systemd[1]: Started libpod-conmon-3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585.scope.
Nov 25 10:55:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a45ba6c220ea2ade4e4e9ed45e131a57699ac19b61352eec49025aaeceadd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a45ba6c220ea2ade4e4e9ed45e131a57699ac19b61352eec49025aaeceadd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a45ba6c220ea2ade4e4e9ed45e131a57699ac19b61352eec49025aaeceadd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a45ba6c220ea2ade4e4e9ed45e131a57699ac19b61352eec49025aaeceadd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:47 np0005535469 podman[101121]: 2025-11-25 15:55:47.645723219 +0000 UTC m=+0.028731922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:47 np0005535469 podman[101121]: 2025-11-25 15:55:47.756922976 +0000 UTC m=+0.139931679 container init 3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:55:47 np0005535469 podman[101121]: 2025-11-25 15:55:47.764927114 +0000 UTC m=+0.147935827 container start 3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 10:55:47 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 10:55:47 np0005535469 podman[101121]: 2025-11-25 15:55:47.769790796 +0000 UTC m=+0.152799499 container attach 3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:47 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 10:55:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v126: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:48 np0005535469 python3[101167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:48 np0005535469 podman[101173]: 2025-11-25 15:55:48.563171431 +0000 UTC m=+0.055925569 container create 8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed (image=quay.io/ceph/ceph:v18, name=angry_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:55:48 np0005535469 systemd[1]: Started libpod-conmon-8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed.scope.
Nov 25 10:55:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:48 np0005535469 podman[101173]: 2025-11-25 15:55:48.533837035 +0000 UTC m=+0.026591193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65d23aa51168ba9ad24deaeaf637929da710e0a29bcdc59d9228e245a9ebea5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65d23aa51168ba9ad24deaeaf637929da710e0a29bcdc59d9228e245a9ebea5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:48 np0005535469 podman[101173]: 2025-11-25 15:55:48.65302889 +0000 UTC m=+0.145783048 container init 8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed (image=quay.io/ceph/ceph:v18, name=angry_cray, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:48 np0005535469 podman[101173]: 2025-11-25 15:55:48.658427547 +0000 UTC m=+0.151181685 container start 8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed (image=quay.io/ceph/ceph:v18, name=angry_cray, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:55:48 np0005535469 podman[101173]: 2025-11-25 15:55:48.66186693 +0000 UTC m=+0.154621098 container attach 8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed (image=quay.io/ceph/ceph:v18, name=angry_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:55:48 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]: {
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "osd_id": 1,
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "type": "bluestore"
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:    },
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "osd_id": 2,
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "type": "bluestore"
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:    },
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "osd_id": 0,
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:        "type": "bluestore"
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]:    }
Nov 25 10:55:48 np0005535469 reverent_roentgen[101137]: }
Nov 25 10:55:48 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 10:55:48 np0005535469 systemd[1]: libpod-3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585.scope: Deactivated successfully.
Nov 25 10:55:48 np0005535469 podman[101121]: 2025-11-25 15:55:48.878048068 +0000 UTC m=+1.261056761 container died 3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:55:48 np0005535469 systemd[1]: libpod-3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585.scope: Consumed 1.108s CPU time.
Nov 25 10:55:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-046a45ba6c220ea2ade4e4e9ed45e131a57699ac19b61352eec49025aaeceadd-merged.mount: Deactivated successfully.
Nov 25 10:55:48 np0005535469 podman[101121]: 2025-11-25 15:55:48.947787121 +0000 UTC m=+1.330795804 container remove 3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:48 np0005535469 systemd[1]: libpod-conmon-3e7ddf8a2f94094b4d0eaf4091bce5b96f4340e437b36c2c1d422ec99f34b585.scope: Deactivated successfully.
Nov 25 10:55:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:49 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 9833277a-75a5-4ed8-8036-22b9921678a1 (Updating rgw.rgw deployment (+1 -> 1))
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mkcggr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mkcggr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mkcggr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:55:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:55:49 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.mkcggr on compute-0
Nov 25 10:55:49 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.mkcggr on compute-0
Nov 25 10:55:49 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 10:55:49 np0005535469 angry_cray[101197]: 
Nov 25 10:55:49 np0005535469 angry_cray[101197]: [{"container_id": "5ca393038676", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.31%", "created": "2025-11-25T15:53:05.414621Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-25T15:53:05.457822Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T15:55:33.421538Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-11-25T15:53:05.270064Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@crash.compute-0", "version": "18.2.7"}, {"container_id": "3aa2be67788f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "20.29%", "created": "2025-11-25T15:51:39.301177Z", "daemon_id": "compute-0.mavpeh", "daemon_name": "mgr.compute-0.mavpeh", "daemon_type": "mgr", "events": ["2025-11-25T15:53:10.934051Z daemon:mgr.compute-0.mavpeh [INFO] \"Reconfigured mgr.compute-0.mavpeh on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T15:55:33.421448Z", "memory_usage": 551865548, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-25T15:51:39.188215Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mgr.compute-0.mavpeh", "version": "18.2.7"}, {"container_id": "5ca41d126058", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.64%", "created": "2025-11-25T15:51:33.827030Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-25T15:53:10.192239Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T15:55:33.421332Z", "memory_request": 2147483648, "memory_usage": 42949672, "ports": [], "service_name": "mon", "started": "2025-11-25T15:51:36.878813Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@mon.compute-0", "version": "18.2.7"}, {"container_id": "f5562c4a2a65", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.27%", "created": "2025-11-25T15:53:39.162521Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-25T15:53:39.292835Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T15:55:33.421623Z", "memory_request": 4294967296, "memory_usage": 66112716, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T15:53:39.000669Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@osd.0", "version": "18.2.7"}, {"container_id": "6e8437ef0c77", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.37%", "created": "2025-11-25T15:53:47.564744Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-25T15:53:48.378727Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T15:55:33.421734Z", "memory_request": 4294967296, "memory_usage": 67192750, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T15:53:47.152010Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@osd.1", "version": "18.2.7"}, {"container_id": "fe06467ae966", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.50%", "created": "2025-11-25T15:54:04.788201Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-25T15:54:05.152312Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T15:55:33.421816Z", "memory_request": 4294967296, "memory_usage": 66353889, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T15:54:04.298739Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92@osd.2", "version": "18.2.7"}]
Nov 25 10:55:49 np0005535469 systemd[1]: libpod-8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed.scope: Deactivated successfully.
Nov 25 10:55:49 np0005535469 podman[101173]: 2025-11-25 15:55:49.283821272 +0000 UTC m=+0.776575410 container died 8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed (image=quay.io/ceph/ceph:v18, name=angry_cray, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b65d23aa51168ba9ad24deaeaf637929da710e0a29bcdc59d9228e245a9ebea5-merged.mount: Deactivated successfully.
Nov 25 10:55:49 np0005535469 podman[101173]: 2025-11-25 15:55:49.348824547 +0000 UTC m=+0.841578685 container remove 8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed (image=quay.io/ceph/ceph:v18, name=angry_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:49 np0005535469 systemd[1]: libpod-conmon-8464fba4248a874521d47c5ba45b0c14317bb111a88799d075b1e2e9f036f9ed.scope: Deactivated successfully.
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.674015554 +0000 UTC m=+0.047433759 container create 1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 10:55:49 np0005535469 systemd[1]: Started libpod-conmon-1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a.scope.
Nov 25 10:55:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.649494258 +0000 UTC m=+0.022912493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.754053416 +0000 UTC m=+0.127471631 container init 1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.762405202 +0000 UTC m=+0.135823417 container start 1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.767378598 +0000 UTC m=+0.140796843 container attach 1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 10:55:49 np0005535469 vibrant_borg[101416]: 167 167
Nov 25 10:55:49 np0005535469 systemd[1]: libpod-1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a.scope: Deactivated successfully.
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.771004256 +0000 UTC m=+0.144422491 container died 1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:55:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e9cc440f33d23f7b8a167ad2edc86758ba444354a94969de0ae9742e190dae82-merged.mount: Deactivated successfully.
Nov 25 10:55:49 np0005535469 podman[101400]: 2025-11-25 15:55:49.851757248 +0000 UTC m=+0.225175503 container remove 1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 10:55:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 10:55:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 10:55:49 np0005535469 systemd[1]: libpod-conmon-1f08583bdf837a43a066831f8c84759f76329b5f140dfc0b9df69a6575fb412a.scope: Deactivated successfully.
Nov 25 10:55:49 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 10:55:49 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 10:55:49 np0005535469 systemd[1]: Reloading.
Nov 25 10:55:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v127: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mkcggr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 10:55:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mkcggr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 10:55:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:50 np0005535469 ceph-mon[74985]: Deploying daemon rgw.rgw.compute-0.mkcggr on compute-0
Nov 25 10:55:50 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:55:50 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:55:50 np0005535469 systemd[1]: Reloading.
Nov 25 10:55:50 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:55:50 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:55:50 np0005535469 python3[101499]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:50 np0005535469 podman[101538]: 2025-11-25 15:55:50.483032343 +0000 UTC m=+0.072695045 container create 488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279 (image=quay.io/ceph/ceph:v18, name=reverent_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 10:55:50 np0005535469 podman[101538]: 2025-11-25 15:55:50.436846339 +0000 UTC m=+0.026509061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:50 np0005535469 systemd[1]: Started libpod-conmon-488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279.scope.
Nov 25 10:55:50 np0005535469 systemd[1]: Starting Ceph rgw.rgw.compute-0.mkcggr for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:55:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/756b34be9c96e401f1914fbe0deec14ecf26b0f5c0d8b90f38b7bb317d637907/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/756b34be9c96e401f1914fbe0deec14ecf26b0f5c0d8b90f38b7bb317d637907/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:50 np0005535469 podman[101538]: 2025-11-25 15:55:50.63694425 +0000 UTC m=+0.226607192 container init 488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279 (image=quay.io/ceph/ceph:v18, name=reverent_aryabhata, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 10:55:50 np0005535469 podman[101538]: 2025-11-25 15:55:50.647911018 +0000 UTC m=+0.237573720 container start 488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279 (image=quay.io/ceph/ceph:v18, name=reverent_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 10:55:50 np0005535469 podman[101538]: 2025-11-25 15:55:50.653999103 +0000 UTC m=+0.243661835 container attach 488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279 (image=quay.io/ceph/ceph:v18, name=reverent_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:50 np0005535469 podman[101609]: 2025-11-25 15:55:50.855935355 +0000 UTC m=+0.055320712 container create 840d788c9870e0283a39835cc3697ed25693b96890d026584e9a8b1fc7b220e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-rgw-rgw-compute-0-mkcggr, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:55:50 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 10:55:50 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 10:55:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0bb3a46d442833adf615c9cb2c410f2a8310c63c2d91005b0398fdac0bc067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0bb3a46d442833adf615c9cb2c410f2a8310c63c2d91005b0398fdac0bc067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0bb3a46d442833adf615c9cb2c410f2a8310c63c2d91005b0398fdac0bc067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0bb3a46d442833adf615c9cb2c410f2a8310c63c2d91005b0398fdac0bc067/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.mkcggr supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:50 np0005535469 podman[101609]: 2025-11-25 15:55:50.832504309 +0000 UTC m=+0.031889666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:50 np0005535469 podman[101609]: 2025-11-25 15:55:50.954537402 +0000 UTC m=+0.153922789 container init 840d788c9870e0283a39835cc3697ed25693b96890d026584e9a8b1fc7b220e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-rgw-rgw-compute-0-mkcggr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:50 np0005535469 podman[101609]: 2025-11-25 15:55:50.960849543 +0000 UTC m=+0.160234890 container start 840d788c9870e0283a39835cc3697ed25693b96890d026584e9a8b1fc7b220e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-rgw-rgw-compute-0-mkcggr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:55:50 np0005535469 bash[101609]: 840d788c9870e0283a39835cc3697ed25693b96890d026584e9a8b1fc7b220e5
Nov 25 10:55:50 np0005535469 systemd[1]: Started Ceph rgw.rgw.compute-0.mkcggr for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:55:51 np0005535469 radosgw[101638]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:55:51 np0005535469 radosgw[101638]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 25 10:55:51 np0005535469 radosgw[101638]: framework: beast
Nov 25 10:55:51 np0005535469 radosgw[101638]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 25 10:55:51 np0005535469 radosgw[101638]: init_numa not setting numa affinity
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:55:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 42 pg[8.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 9833277a-75a5-4ed8-8036-22b9921678a1 (Updating rgw.rgw deployment (+1 -> 1))
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 9833277a-75a5-4ed8-8036-22b9921678a1 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 87e4cb9b-368b-4712-b869-56771242dded (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aidjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aidjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aidjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.aidjys on compute-0
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.aidjys on compute-0
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 10:55:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582680536' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 10:55:51 np0005535469 reverent_aryabhata[101556]: 
Nov 25 10:55:51 np0005535469 reverent_aryabhata[101556]: {"fsid":"d82baeae-c742-50a4-b8f6-b5257c8a2c92","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":254,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764086064,"num_in_osds":3,"osd_in_since":1764086009,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84131840,"bytes_avail":64327794688,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-25T15:55:25.960546+0000","services":{}},"progress_events":{"9833277a-75a5-4ed8-8036-22b9921678a1":{"message":"Updating rgw.rgw deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 25 10:55:51 np0005535469 systemd[1]: libpod-488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279.scope: Deactivated successfully.
Nov 25 10:55:51 np0005535469 podman[101538]: 2025-11-25 15:55:51.276112141 +0000 UTC m=+0.865774873 container died 488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279 (image=quay.io/ceph/ceph:v18, name=reverent_aryabhata, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 10:55:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-756b34be9c96e401f1914fbe0deec14ecf26b0f5c0d8b90f38b7bb317d637907-merged.mount: Deactivated successfully.
Nov 25 10:55:51 np0005535469 podman[101538]: 2025-11-25 15:55:51.362448933 +0000 UTC m=+0.952111645 container remove 488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279 (image=quay.io/ceph/ceph:v18, name=reverent_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 25 10:55:51 np0005535469 systemd[1]: libpod-conmon-488294ed74ae3d013031a40d0eb89db0e5098f78f652b7e8b5d8ea4fccd5e279.scope: Deactivated successfully.
Nov 25 10:55:51 np0005535469 podman[101864]: 2025-11-25 15:55:51.894337161 +0000 UTC m=+0.044228182 container create 007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:55:51 np0005535469 systemd[1]: Started libpod-conmon-007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca.scope.
Nov 25 10:55:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:51 np0005535469 podman[101864]: 2025-11-25 15:55:51.875953072 +0000 UTC m=+0.025844113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v129: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:51 np0005535469 podman[101864]: 2025-11-25 15:55:51.988374994 +0000 UTC m=+0.138266055 container init 007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhaskara, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:51 np0005535469 podman[101864]: 2025-11-25 15:55:51.996401962 +0000 UTC m=+0.146292983 container start 007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:55:52 np0005535469 podman[101864]: 2025-11-25 15:55:52.002556399 +0000 UTC m=+0.152447420 container attach 007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 10:55:52 np0005535469 laughing_bhaskara[101880]: 167 167
Nov 25 10:55:52 np0005535469 systemd[1]: libpod-007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca.scope: Deactivated successfully.
Nov 25 10:55:52 np0005535469 conmon[101880]: conmon 007a4d8688c2e346fedf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca.scope/container/memory.events
Nov 25 10:55:52 np0005535469 podman[101864]: 2025-11-25 15:55:52.006224358 +0000 UTC m=+0.156115399 container died 007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhaskara, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-89082a75a96297cf23e5890fa953937cb337df5f7ed673a9a99880724f6c2f35-merged.mount: Deactivated successfully.
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 25 10:55:52 np0005535469 podman[101864]: 2025-11-25 15:55:52.0987795 +0000 UTC m=+0.248670521 container remove 007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 10:55:52 np0005535469 systemd[1]: libpod-conmon-007a4d8688c2e346fedf3ef817f3c10c65e03be55d6c3a298635b06bbe4c2aca.scope: Deactivated successfully.
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: Saving service rgw.rgw spec with placement compute-0
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aidjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aidjys", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: Deploying daemon mds.cephfs.compute-0.aidjys on compute-0
Nov 25 10:55:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:55:52 np0005535469 systemd[1]: Reloading.
Nov 25 10:55:52 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:55:52 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:55:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:55:52 np0005535469 systemd[1]: Reloading.
Nov 25 10:55:52 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:55:52 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:55:52 np0005535469 python3[101964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:52 np0005535469 podman[102003]: 2025-11-25 15:55:52.791221825 +0000 UTC m=+0.051750415 container create acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f (image=quay.io/ceph/ceph:v18, name=infallible_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 10:55:52 np0005535469 podman[102003]: 2025-11-25 15:55:52.760854972 +0000 UTC m=+0.021383592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:52 np0005535469 systemd[1]: Started libpod-conmon-acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f.scope.
Nov 25 10:55:52 np0005535469 systemd[1]: Starting Ceph mds.cephfs.compute-0.aidjys for d82baeae-c742-50a4-b8f6-b5257c8a2c92...
Nov 25 10:55:52 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 25 10:55:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:52 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 25 10:55:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683cfef338b8fab4705be6ab98ae5329efb53f7067a8a817120e892ddd4d15fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683cfef338b8fab4705be6ab98ae5329efb53f7067a8a817120e892ddd4d15fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:52 np0005535469 podman[102003]: 2025-11-25 15:55:52.934161815 +0000 UTC m=+0.194690425 container init acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f (image=quay.io/ceph/ceph:v18, name=infallible_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:52 np0005535469 podman[102003]: 2025-11-25 15:55:52.94282301 +0000 UTC m=+0.203351600 container start acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f (image=quay.io/ceph/ceph:v18, name=infallible_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:55:52 np0005535469 podman[102003]: 2025-11-25 15:55:52.948043582 +0000 UTC m=+0.208572172 container attach acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f (image=quay.io/ceph/ceph:v18, name=infallible_curran, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:53 np0005535469 podman[102070]: 2025-11-25 15:55:53.145395419 +0000 UTC m=+0.056724850 container create 9ee39656f5b5329c426c45f68138c23ad1b295d9ccc73bef392b078540dee7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mds-cephfs-compute-0-aidjys, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 25 10:55:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a4dbb9d26fc1603b872e61def6f193d0e4f83ae0f0a52d812682db981e5f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a4dbb9d26fc1603b872e61def6f193d0e4f83ae0f0a52d812682db981e5f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a4dbb9d26fc1603b872e61def6f193d0e4f83ae0f0a52d812682db981e5f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a4dbb9d26fc1603b872e61def6f193d0e4f83ae0f0a52d812682db981e5f8/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.aidjys supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:53 np0005535469 podman[102070]: 2025-11-25 15:55:53.110582024 +0000 UTC m=+0.021911475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 10:55:53 np0005535469 podman[102070]: 2025-11-25 15:55:53.224200078 +0000 UTC m=+0.135529529 container init 9ee39656f5b5329c426c45f68138c23ad1b295d9ccc73bef392b078540dee7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mds-cephfs-compute-0-aidjys, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:53 np0005535469 podman[102070]: 2025-11-25 15:55:53.230857489 +0000 UTC m=+0.142186920 container start 9ee39656f5b5329c426c45f68138c23ad1b295d9ccc73bef392b078540dee7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mds-cephfs-compute-0-aidjys, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 10:55:53 np0005535469 bash[102070]: 9ee39656f5b5329c426c45f68138c23ad1b295d9ccc73bef392b078540dee7cf
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 10:55:53 np0005535469 systemd[1]: Started Ceph mds.cephfs.compute-0.aidjys for d82baeae-c742-50a4-b8f6-b5257c8a2c92.
Nov 25 10:55:53 np0005535469 ceph-mds[102090]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:55:53 np0005535469 ceph-mds[102090]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 25 10:55:53 np0005535469 ceph-mds[102090]: main not setting numa affinity
Nov 25 10:55:53 np0005535469 ceph-mds[102090]: pidfile_write: ignore empty --pid-file
Nov 25 10:55:53 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mds-cephfs-compute-0-aidjys[102086]: starting mds.cephfs.compute-0.aidjys at 
Nov 25 10:55:53 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys Updating MDS map to version 2 from mon.0
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:53 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 87e4cb9b-368b-4712-b869-56771242dded (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 10:55:53 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 87e4cb9b-368b-4712-b869-56771242dded (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 10:55:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920245798' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 10:55:53 np0005535469 infallible_curran[102020]: 
Nov 25 10:55:53 np0005535469 infallible_curran[102020]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd.1","name":"osd_mclock_max_capacity_iops_hdd","value":"397.131597","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.mkcggr","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 25 10:55:53 np0005535469 systemd[1]: libpod-acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f.scope: Deactivated successfully.
Nov 25 10:55:53 np0005535469 podman[102003]: 2025-11-25 15:55:53.648234508 +0000 UTC m=+0.908763098 container died acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f (image=quay.io/ceph/ceph:v18, name=infallible_curran, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-683cfef338b8fab4705be6ab98ae5329efb53f7067a8a817120e892ddd4d15fa-merged.mount: Deactivated successfully.
Nov 25 10:55:53 np0005535469 podman[102003]: 2025-11-25 15:55:53.714137577 +0000 UTC m=+0.974666167 container remove acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f (image=quay.io/ceph/ceph:v18, name=infallible_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:53 np0005535469 systemd[1]: libpod-conmon-acc48f3964e83eabe35619cf8e91b9c8ef9b3495792359b2508b0166b7af579f.scope: Deactivated successfully.
Nov 25 10:55:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 10:55:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 10:55:53 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 44 pg[9.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:55:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v132: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e3 new map
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T15:54:59.655953+0000#012modified#0112025-11-25T15:54:59.655997+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.aidjys{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/3971242172,v1:192.168.122.100:6815/3971242172] compat {c=[1],r=[1],i=[7ff]}]
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys Updating MDS map to version 3 from mon.0
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys Monitors have assigned me to become a standby.
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3971242172,v1:192.168.122.100:6815/3971242172] up:boot
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3971242172,v1:192.168.122.100:6815/3971242172] as mds.0
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.aidjys assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.aidjys"} v 0) v1
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.aidjys"}]: dispatch
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e3 all = 0
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e4 new map
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T15:54:59.655953+0000#012modified#0112025-11-25T15:55:54.283333+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.aidjys{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/3971242172,v1:192.168.122.100:6815/3971242172] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys Updating MDS map to version 4 from mon.0
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x1
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.aidjys=up:creating}
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x100
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x600
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x601
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x602
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x603
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x604
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x605
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x606
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x607
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x608
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.cache creating system inode with ino:0x609
Nov 25 10:55:54 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:55:54 np0005535469 podman[102363]: 2025-11-25 15:55:54.301728507 +0000 UTC m=+0.087120517 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:54 np0005535469 ceph-mds[102090]: mds.0.4 creating_done
Nov 25 10:55:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.aidjys is now active in filesystem cephfs as rank 0
Nov 25 10:55:54 np0005535469 podman[102363]: 2025-11-25 15:55:54.425980429 +0000 UTC m=+0.211372409 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 10:55:54 np0005535469 python3[102468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:54 np0005535469 podman[102502]: 2025-11-25 15:55:54.812771428 +0000 UTC m=+0.053402901 container create 148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262 (image=quay.io/ceph/ceph:v18, name=beautiful_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 10:55:54 np0005535469 systemd[1]: Started libpod-conmon-148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262.scope.
Nov 25 10:55:54 np0005535469 podman[102502]: 2025-11-25 15:55:54.787855151 +0000 UTC m=+0.028486644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6193fb7fb3b19070356a1e03c7ee51a2251fd90c8a32609690a998e46b634f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6193fb7fb3b19070356a1e03c7ee51a2251fd90c8a32609690a998e46b634f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:54 np0005535469 podman[102502]: 2025-11-25 15:55:54.98599963 +0000 UTC m=+0.226631183 container init 148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262 (image=quay.io/ceph/ceph:v18, name=beautiful_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:55:54 np0005535469 podman[102502]: 2025-11-25 15:55:54.997896043 +0000 UTC m=+0.238527516 container start 148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262 (image=quay.io/ceph/ceph:v18, name=beautiful_hopper, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:55 np0005535469 podman[102502]: 2025-11-25 15:55:55.062358392 +0000 UTC m=+0.302989935 container attach 148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262 (image=quay.io/ceph/ceph:v18, name=beautiful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 54c07445-2ed6-447e-8746-c172e4e3cd94 does not exist
Nov 25 10:55:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 284dade1-54f9-42e7-98b1-a3bd8c3ac9e9 does not exist
Nov 25 10:55:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7731ca8f-5b37-4e35-bf70-31fa637f134d does not exist
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: daemon mds.cephfs.compute-0.aidjys assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: daemon mds.cephfs.compute-0.aidjys is now active in filesystem cephfs as rank 0
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e5 new map
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e5 print_map
e5
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-25T15:54:59.655953+0000
modified	2025-11-25T15:55:55.293192+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14271}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-0.aidjys{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3971242172,v1:192.168.122.100:6815/3971242172] compat {c=[1],r=[1],i=[7ff]}]
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 10:55:55 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys Updating MDS map to version 5 from mon.0
Nov 25 10:55:55 np0005535469 ceph-mds[102090]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 10:55:55 np0005535469 ceph-mds[102090]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 25 10:55:55 np0005535469 ceph-mds[102090]: mds.0.4 recovery_done -- successful recovery!
Nov 25 10:55:55 np0005535469 ceph-mds[102090]: mds.0.4 active_start
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3971242172,v1:192.168.122.100:6815/3971242172] up:active
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.aidjys=up:active}
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888396164' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 10:55:55 np0005535469 beautiful_hopper[102542]: mimic
Nov 25 10:55:55 np0005535469 systemd[1]: libpod-148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262.scope: Deactivated successfully.
Nov 25 10:55:55 np0005535469 podman[102502]: 2025-11-25 15:55:55.601764274 +0000 UTC m=+0.842395747 container died 148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262 (image=quay.io/ceph/ceph:v18, name=beautiful_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:55:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e6193fb7fb3b19070356a1e03c7ee51a2251fd90c8a32609690a998e46b634f9-merged.mount: Deactivated successfully.
Nov 25 10:55:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 25 10:55:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 25 10:55:55 np0005535469 podman[102502]: 2025-11-25 15:55:55.705384847 +0000 UTC m=+0.946016320 container remove 148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262 (image=quay.io/ceph/ceph:v18, name=beautiful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 10:55:55 np0005535469 systemd[1]: libpod-conmon-148a4faad909626681665375eb682b565cacbfee3b49fbde71faf68998150262.scope: Deactivated successfully.
Nov 25 10:55:55 np0005535469 ceph-mgr[75280]: [progress INFO root] Writing back 12 completed events
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 10:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.839103296 +0000 UTC m=+0.041822207 container create d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaum, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:55:55 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 46 pg[10.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:55:55 np0005535469 systemd[1]: Started libpod-conmon-d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974.scope.
Nov 25 10:55:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.913087805 +0000 UTC m=+0.115806736 container init d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.816553944 +0000 UTC m=+0.019272875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.919250242 +0000 UTC m=+0.121969143 container start d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 10:55:55 np0005535469 vigilant_chaum[102772]: 167 167
Nov 25 10:55:55 np0005535469 systemd[1]: libpod-d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974.scope: Deactivated successfully.
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.926368045 +0000 UTC m=+0.129086966 container attach d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.927003662 +0000 UTC m=+0.129722573 container died d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaum, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c30157c5adca62e8c3c91167f8b1809aa5b09e8c057512c99980b24fb0e024b9-merged.mount: Deactivated successfully.
Nov 25 10:55:55 np0005535469 podman[102756]: 2025-11-25 15:55:55.972162948 +0000 UTC m=+0.174881859 container remove d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaum, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:55:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v135: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.2 KiB/s wr, 8 op/s
Nov 25 10:55:55 np0005535469 systemd[1]: libpod-conmon-d7e36b439304739824e30b6c6108dc8c826a1f9924c3694e73728f4f021bd974.scope: Deactivated successfully.
Nov 25 10:55:56 np0005535469 podman[102798]: 2025-11-25 15:55:56.137755073 +0000 UTC m=+0.051867250 container create 68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 10:55:56 np0005535469 systemd[1]: Started libpod-conmon-68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c.scope.
Nov 25 10:55:56 np0005535469 podman[102798]: 2025-11-25 15:55:56.112709723 +0000 UTC m=+0.026821920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347ef4499f2c88b8a76a480038f3d8f3ca4f6ac7a254e3b7ecaab0b3767e37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347ef4499f2c88b8a76a480038f3d8f3ca4f6ac7a254e3b7ecaab0b3767e37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347ef4499f2c88b8a76a480038f3d8f3ca4f6ac7a254e3b7ecaab0b3767e37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347ef4499f2c88b8a76a480038f3d8f3ca4f6ac7a254e3b7ecaab0b3767e37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347ef4499f2c88b8a76a480038f3d8f3ca4f6ac7a254e3b7ecaab0b3767e37f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 podman[102798]: 2025-11-25 15:55:56.234068406 +0000 UTC m=+0.148180603 container init 68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:55:56 np0005535469 podman[102798]: 2025-11-25 15:55:56.246178636 +0000 UTC m=+0.160290833 container start 68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 10:55:56 np0005535469 podman[102798]: 2025-11-25 15:55:56.25444918 +0000 UTC m=+0.168561387 container attach 68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 10:55:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 25 10:55:56 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:55:56 np0005535469 python3[102857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:55:56 np0005535469 podman[102858]: 2025-11-25 15:55:56.763718763 +0000 UTC m=+0.064636725 container create 5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0 (image=quay.io/ceph/ceph:v18, name=boring_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:55:56 np0005535469 systemd[1]: Started libpod-conmon-5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0.scope.
Nov 25 10:55:56 np0005535469 podman[102858]: 2025-11-25 15:55:56.738087697 +0000 UTC m=+0.039005649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:55:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e732568bd4f7a654a29b0d5c3538dda9949caca7863462f5e741f8a00b740cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e732568bd4f7a654a29b0d5c3538dda9949caca7863462f5e741f8a00b740cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:56 np0005535469 podman[102858]: 2025-11-25 15:55:56.854430526 +0000 UTC m=+0.155348478 container init 5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0 (image=quay.io/ceph/ceph:v18, name=boring_williamson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 10:55:56 np0005535469 podman[102858]: 2025-11-25 15:55:56.861841506 +0000 UTC m=+0.162759428 container start 5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0 (image=quay.io/ceph/ceph:v18, name=boring_williamson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 10:55:56 np0005535469 podman[102858]: 2025-11-25 15:55:56.865399464 +0000 UTC m=+0.166317436 container attach 5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0 (image=quay.io/ceph/ceph:v18, name=boring_williamson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/3602161875' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 10:55:57 np0005535469 jovial_sutherland[102814]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:55:57 np0005535469 jovial_sutherland[102814]: --> relative data size: 1.0
Nov 25 10:55:57 np0005535469 jovial_sutherland[102814]: --> All data devices are unavailable
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 25 10:55:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215953421' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 10:55:57 np0005535469 boring_williamson[102874]: 
Nov 25 10:55:57 np0005535469 boring_williamson[102874]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 25 10:55:57 np0005535469 systemd[1]: libpod-5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0.scope: Deactivated successfully.
Nov 25 10:55:57 np0005535469 podman[102858]: 2025-11-25 15:55:57.495825516 +0000 UTC m=+0.796743458 container died 5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0 (image=quay.io/ceph/ceph:v18, name=boring_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:55:57 np0005535469 systemd[1]: libpod-68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c.scope: Deactivated successfully.
Nov 25 10:55:57 np0005535469 systemd[1]: libpod-68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c.scope: Consumed 1.179s CPU time.
Nov 25 10:55:57 np0005535469 podman[102798]: 2025-11-25 15:55:57.503437472 +0000 UTC m=+1.417549679 container died 68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 10:55:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5e732568bd4f7a654a29b0d5c3538dda9949caca7863462f5e741f8a00b740cf-merged.mount: Deactivated successfully.
Nov 25 10:55:57 np0005535469 podman[102858]: 2025-11-25 15:55:57.549981275 +0000 UTC m=+0.850899197 container remove 5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0 (image=quay.io/ceph/ceph:v18, name=boring_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 10:55:57 np0005535469 systemd[1]: libpod-conmon-5dbda0da8d30cedb15c0ae2fd7ed8c14a75a9b25d4abb02f3b8783c54391dbc0.scope: Deactivated successfully.
Nov 25 10:55:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6347ef4499f2c88b8a76a480038f3d8f3ca4f6ac7a254e3b7ecaab0b3767e37f-merged.mount: Deactivated successfully.
Nov 25 10:55:57 np0005535469 podman[102798]: 2025-11-25 15:55:57.601928296 +0000 UTC m=+1.516040483 container remove 68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 10:55:57 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:55:57 np0005535469 systemd[1]: libpod-conmon-68c0de9428a58f3fe4871a18c5fd19ecf2d67acdfe0bcd63385c6aa83307608c.scope: Deactivated successfully.
Nov 25 10:55:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.11 deep-scrub starts
Nov 25 10:55:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.11 deep-scrub ok
Nov 25 10:55:57 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 10:55:57 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 10:55:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.236444958 +0000 UTC m=+0.058612872 container create b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:58 np0005535469 systemd[1]: Started libpod-conmon-b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff.scope.
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.206976249 +0000 UTC m=+0.029144253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.353786973 +0000 UTC m=+0.175954907 container init b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hopper, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.363812296 +0000 UTC m=+0.185980220 container start b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:55:58 np0005535469 fervent_hopper[103102]: 167 167
Nov 25 10:55:58 np0005535469 systemd[1]: libpod-b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff.scope: Deactivated successfully.
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.37430441 +0000 UTC m=+0.196472324 container attach b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hopper, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.374819555 +0000 UTC m=+0.196987469 container died b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hopper, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 25 10:55:58 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 10:55:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f5ca399e7fac13778fc13ef2c315c93825d4f648499d3991d3d62b3063817722-merged.mount: Deactivated successfully.
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 25 10:55:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 10:55:58 np0005535469 podman[103086]: 2025-11-25 15:55:58.44023266 +0000 UTC m=+0.262400574 container remove b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hopper, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:55:58 np0005535469 systemd[1]: libpod-conmon-b18ee91f83730de8fbbaf5c8661b4ef51130688a5847672df23229a4e53cd1ff.scope: Deactivated successfully.
Nov 25 10:55:58 np0005535469 podman[103127]: 2025-11-25 15:55:58.611727125 +0000 UTC m=+0.056565777 container create 6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:55:58 np0005535469 systemd[1]: Started libpod-conmon-6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007.scope.
Nov 25 10:55:58 np0005535469 podman[103127]: 2025-11-25 15:55:58.579337856 +0000 UTC m=+0.024176528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:55:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:55:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a18c07510926d86aae64884784792305bcc220cc0b205562a2561b124c8d588/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a18c07510926d86aae64884784792305bcc220cc0b205562a2561b124c8d588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a18c07510926d86aae64884784792305bcc220cc0b205562a2561b124c8d588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a18c07510926d86aae64884784792305bcc220cc0b205562a2561b124c8d588/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:55:58 np0005535469 podman[103127]: 2025-11-25 15:55:58.716686814 +0000 UTC m=+0.161525526 container init 6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:58 np0005535469 podman[103127]: 2025-11-25 15:55:58.728115104 +0000 UTC m=+0.172953736 container start 6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_antonelli, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 10:55:58 np0005535469 podman[103127]: 2025-11-25 15:55:58.732185744 +0000 UTC m=+0.177024426 container attach 6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:55:59 np0005535469 ceph-mds[102090]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 25 10:55:59 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mds-cephfs-compute-0-aidjys[102086]: 2025-11-25T15:55:59.296+0000 7fcde0951640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 10:55:59 np0005535469 ceph-mon[74985]: from='client.? 192.168.122.100:0/4119337572' entity='client.rgw.rgw.compute-0.mkcggr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]: {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:    "0": [
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:        {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "devices": [
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "/dev/loop3"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            ],
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_name": "ceph_lv0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_size": "21470642176",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "name": "ceph_lv0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "tags": {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.crush_device_class": "",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.encrypted": "0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osd_id": "0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.type": "block",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.vdo": "0"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            },
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "type": "block",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "vg_name": "ceph_vg0"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:        }
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:    ],
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:    "1": [
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:        {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "devices": [
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "/dev/loop4"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            ],
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_name": "ceph_lv1",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_size": "21470642176",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "name": "ceph_lv1",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "tags": {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.crush_device_class": "",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.encrypted": "0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osd_id": "1",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.type": "block",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.vdo": "0"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            },
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "type": "block",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "vg_name": "ceph_vg1"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:        }
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:    ],
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:    "2": [
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:        {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "devices": [
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "/dev/loop5"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            ],
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_name": "ceph_lv2",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_size": "21470642176",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "name": "ceph_lv2",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "tags": {
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.cluster_name": "ceph",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.crush_device_class": "",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.encrypted": "0",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osd_id": "2",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.type": "block",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:                "ceph.vdo": "0"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            },
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "type": "block",
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:            "vg_name": "ceph_vg2"
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:        }
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]:    ]
Nov 25 10:55:59 np0005535469 wizardly_antonelli[103143]: }
Nov 25 10:55:59 np0005535469 radosgw[101638]: LDAP not started since no server URIs were provided in the configuration.
Nov 25 10:55:59 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-rgw-rgw-compute-0-mkcggr[101624]: 2025-11-25T15:55:59.552+0000 7fe352a13940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 25 10:55:59 np0005535469 radosgw[101638]: framework: beast
Nov 25 10:55:59 np0005535469 radosgw[101638]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 25 10:55:59 np0005535469 radosgw[101638]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 25 10:55:59 np0005535469 systemd[1]: libpod-6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007.scope: Deactivated successfully.
Nov 25 10:55:59 np0005535469 podman[103127]: 2025-11-25 15:55:59.57902854 +0000 UTC m=+1.023867172 container died 6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:55:59 np0005535469 radosgw[101638]: starting handler: beast
Nov 25 10:55:59 np0005535469 radosgw[101638]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 10:55:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1a18c07510926d86aae64884784792305bcc220cc0b205562a2561b124c8d588-merged.mount: Deactivated successfully.
Nov 25 10:55:59 np0005535469 radosgw[101638]: mgrc service_daemon_register rgw.14277 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.mkcggr,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=569c5505-fb49-4baa-badd-e9ac4c376cf1,zone_name=default,zonegroup_id=4f19bc4e-3c93-4265-85c1-4a7ee8922489,zonegroup_name=default}
Nov 25 10:55:59 np0005535469 podman[103127]: 2025-11-25 15:55:59.643719186 +0000 UTC m=+1.088557818 container remove 6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:55:59 np0005535469 systemd[1]: libpod-conmon-6d236404b652711b96fa06377919e39cf3645a2a6898bf8af0d436f5d7822007.scope: Deactivated successfully.
Nov 25 10:55:59 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 10:55:59 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 10:55:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v141: 197 pgs: 197 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 123 op/s
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.327873948 +0000 UTC m=+0.047646605 container create 928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elion, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:00 np0005535469 systemd[1]: Started libpod-conmon-928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de.scope.
Nov 25 10:56:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.307017001 +0000 UTC m=+0.026789678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.414030035 +0000 UTC m=+0.133802792 container init 928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.423338749 +0000 UTC m=+0.143111406 container start 928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:56:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 10:56:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.427969784 +0000 UTC m=+0.147742441 container attach 928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 10:56:00 np0005535469 determined_elion[103866]: 167 167
Nov 25 10:56:00 np0005535469 systemd[1]: libpod-928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de.scope: Deactivated successfully.
Nov 25 10:56:00 np0005535469 conmon[103866]: conmon 928ca1f1a1bc375cc815 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de.scope/container/memory.events
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.43110446 +0000 UTC m=+0.150877117 container died 928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 10:56:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d774a8f7129c9233676d23ff80cf3c77f6d002e6e5781be2bef60e6f98128cf2-merged.mount: Deactivated successfully.
Nov 25 10:56:00 np0005535469 podman[103850]: 2025-11-25 15:56:00.474033255 +0000 UTC m=+0.193805912 container remove 928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 10:56:00 np0005535469 systemd[1]: libpod-conmon-928ca1f1a1bc375cc815eae30bfc03ece4c0ec693dd6204102a99abebcdbb9de.scope: Deactivated successfully.
Nov 25 10:56:00 np0005535469 podman[103890]: 2025-11-25 15:56:00.643159415 +0000 UTC m=+0.051144419 container create 13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sammet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 10:56:00 np0005535469 systemd[1]: Started libpod-conmon-13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b.scope.
Nov 25 10:56:00 np0005535469 podman[103890]: 2025-11-25 15:56:00.615961917 +0000 UTC m=+0.023946971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8522c61ef6f69c1d51f80663f63d905a8826b467df62045f57f530335d56d245/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8522c61ef6f69c1d51f80663f63d905a8826b467df62045f57f530335d56d245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8522c61ef6f69c1d51f80663f63d905a8826b467df62045f57f530335d56d245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8522c61ef6f69c1d51f80663f63d905a8826b467df62045f57f530335d56d245/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:00 np0005535469 podman[103890]: 2025-11-25 15:56:00.740939909 +0000 UTC m=+0.148924943 container init 13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sammet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:56:00 np0005535469 podman[103890]: 2025-11-25 15:56:00.749478331 +0000 UTC m=+0.157463335 container start 13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sammet, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:56:00 np0005535469 podman[103890]: 2025-11-25 15:56:00.753218583 +0000 UTC m=+0.161203587 container attach 13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:56:00 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 25 10:56:00 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 25 10:56:01 np0005535469 ceph-mon[74985]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 10:56:01 np0005535469 ceph-mon[74985]: Cluster is now healthy
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]: {
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "osd_id": 1,
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "type": "bluestore"
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:    },
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "osd_id": 2,
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "type": "bluestore"
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:    },
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "osd_id": 0,
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:        "type": "bluestore"
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]:    }
Nov 25 10:56:01 np0005535469 upbeat_sammet[103907]: }
Nov 25 10:56:01 np0005535469 systemd[1]: libpod-13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b.scope: Deactivated successfully.
Nov 25 10:56:01 np0005535469 podman[103940]: 2025-11-25 15:56:01.800290354 +0000 UTC m=+0.033838790 container died 13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sammet, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 10:56:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8522c61ef6f69c1d51f80663f63d905a8826b467df62045f57f530335d56d245-merged.mount: Deactivated successfully.
Nov 25 10:56:01 np0005535469 podman[103940]: 2025-11-25 15:56:01.871210859 +0000 UTC m=+0.104759275 container remove 13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_sammet, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 10:56:01 np0005535469 systemd[1]: libpod-conmon-13879c486e9e6425c046f96e8b3a80fd842e41895e07a3860dfb189a886c7a6b.scope: Deactivated successfully.
Nov 25 10:56:01 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 10:56:01 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 10:56:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:56:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:56:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7da533b0-4f23-4401-b846-18fcc656bb0c does not exist
Nov 25 10:56:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 34e72b46-4368-441f-a032-880e21ad1af0 does not exist
Nov 25 10:56:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v142: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 8.3 KiB/s wr, 87 op/s
Nov 25 10:56:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:02 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Nov 25 10:56:02 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Nov 25 10:56:02 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 10:56:02 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 10:56:02 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 10:56:02 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 10:56:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:03 np0005535469 podman[104177]: 2025-11-25 15:56:03.007478202 +0000 UTC m=+0.192478786 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:03 np0005535469 podman[104177]: 2025-11-25 15:56:03.129218927 +0000 UTC m=+0.314219501 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:56:03 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 25 10:56:03 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 25 10:56:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v143: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 72 op/s
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 43e1c68d-ccf0-4629-b026-a35e03b7de5a does not exist
Nov 25 10:56:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b1bbbd4d-ea4e-435d-bc6a-c02ad4b83ed0 does not exist
Nov 25 10:56:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 99492512-2ec9-4e14-9d5c-831ae1c548df does not exist
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:56:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.146821492 +0000 UTC m=+0.072123949 container create 40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.098087649 +0000 UTC m=+0.023390086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:56:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:56:05 np0005535469 systemd[1]: Started libpod-conmon-40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac.scope.
Nov 25 10:56:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.337266211 +0000 UTC m=+0.262568658 container init 40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.344896348 +0000 UTC m=+0.270198765 container start 40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:56:05 np0005535469 friendly_leakey[104489]: 167 167
Nov 25 10:56:05 np0005535469 systemd[1]: libpod-40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac.scope: Deactivated successfully.
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.350742337 +0000 UTC m=+0.276044774 container attach 40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.351407895 +0000 UTC m=+0.276710312 container died 40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 10:56:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-813c66c44719da962ea7f8026dfd02fdfd774f64250986af02b7f3ce789ed8d7-merged.mount: Deactivated successfully.
Nov 25 10:56:05 np0005535469 podman[104473]: 2025-11-25 15:56:05.51409462 +0000 UTC m=+0.439397037 container remove 40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:56:05 np0005535469 systemd[1]: libpod-conmon-40626292d0e967292b2d27824ec6688e0a66f398a968dd7ed1bf1aafca8921ac.scope: Deactivated successfully.
Nov 25 10:56:05 np0005535469 podman[104515]: 2025-11-25 15:56:05.668766089 +0000 UTC m=+0.043266695 container create 9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 10:56:05 np0005535469 systemd[1]: Started libpod-conmon-9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3.scope.
Nov 25 10:56:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881af4f844953871e928f25dd1f37ea2d73313bbe53f9018ea54bab477a387bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:05 np0005535469 podman[104515]: 2025-11-25 15:56:05.65038481 +0000 UTC m=+0.024885436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881af4f844953871e928f25dd1f37ea2d73313bbe53f9018ea54bab477a387bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881af4f844953871e928f25dd1f37ea2d73313bbe53f9018ea54bab477a387bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881af4f844953871e928f25dd1f37ea2d73313bbe53f9018ea54bab477a387bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881af4f844953871e928f25dd1f37ea2d73313bbe53f9018ea54bab477a387bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:05 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 25 10:56:05 np0005535469 podman[104515]: 2025-11-25 15:56:05.761314731 +0000 UTC m=+0.135815357 container init 9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 10:56:05 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 25 10:56:05 np0005535469 podman[104515]: 2025-11-25 15:56:05.769965495 +0000 UTC m=+0.144466101 container start 9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 10:56:05 np0005535469 podman[104515]: 2025-11-25 15:56:05.77307033 +0000 UTC m=+0.147570956 container attach 9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v144: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 138 op/s
Nov 25 10:56:06 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 25 10:56:06 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 25 10:56:06 np0005535469 youthful_poitras[104531]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:56:06 np0005535469 youthful_poitras[104531]: --> relative data size: 1.0
Nov 25 10:56:06 np0005535469 youthful_poitras[104531]: --> All data devices are unavailable
Nov 25 10:56:06 np0005535469 systemd[1]: libpod-9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3.scope: Deactivated successfully.
Nov 25 10:56:06 np0005535469 systemd[1]: libpod-9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3.scope: Consumed 1.095s CPU time.
Nov 25 10:56:06 np0005535469 podman[104515]: 2025-11-25 15:56:06.92950831 +0000 UTC m=+1.304008956 container died 9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_poitras, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 10:56:06 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 25 10:56:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-881af4f844953871e928f25dd1f37ea2d73313bbe53f9018ea54bab477a387bf-merged.mount: Deactivated successfully.
Nov 25 10:56:06 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 25 10:56:06 np0005535469 podman[104515]: 2025-11-25 15:56:06.993889017 +0000 UTC m=+1.368389633 container remove 9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_poitras, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:56:07 np0005535469 systemd[1]: libpod-conmon-9e5d3a6f408be502ad27fc6b909f8c700c23148c2961e2b5a91392ae5bb61ab3.scope: Deactivated successfully.
Nov 25 10:56:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:07 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Nov 25 10:56:07 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 25 10:56:07 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Nov 25 10:56:07 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.74958123 +0000 UTC m=+0.058217482 container create a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:07 np0005535469 systemd[1]: Started libpod-conmon-a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b.scope.
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.722091853 +0000 UTC m=+0.030728215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.831901744 +0000 UTC m=+0.140538016 container init a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.838205016 +0000 UTC m=+0.146841268 container start a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.840892639 +0000 UTC m=+0.149528891 container attach a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:56:07 np0005535469 romantic_bassi[104729]: 167 167
Nov 25 10:56:07 np0005535469 systemd[1]: libpod-a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b.scope: Deactivated successfully.
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.857610722 +0000 UTC m=+0.166246974 container died a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 10:56:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7166af0113a4c2dc861d4f4444b094c23ac64dac1dbbbe112df708abb7f1ed16-merged.mount: Deactivated successfully.
Nov 25 10:56:07 np0005535469 podman[104713]: 2025-11-25 15:56:07.89143593 +0000 UTC m=+0.200072182 container remove a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:56:07 np0005535469 systemd[1]: libpod-conmon-a450978070f49556c5689a699f479682d73a0717f2f5edd812c02a07305af40b.scope: Deactivated successfully.
Nov 25 10:56:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v145: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 KiB/s wr, 115 op/s
Nov 25 10:56:08 np0005535469 podman[104754]: 2025-11-25 15:56:08.06679923 +0000 UTC m=+0.052424503 container create a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:56:08 np0005535469 systemd[1]: Started libpod-conmon-a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b.scope.
Nov 25 10:56:08 np0005535469 podman[104754]: 2025-11-25 15:56:08.045132363 +0000 UTC m=+0.030757646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dd38e4027c0095903d251c957abbc8d2a63827ccfdbd4c976bf7e7c8b3282e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dd38e4027c0095903d251c957abbc8d2a63827ccfdbd4c976bf7e7c8b3282e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dd38e4027c0095903d251c957abbc8d2a63827ccfdbd4c976bf7e7c8b3282e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dd38e4027c0095903d251c957abbc8d2a63827ccfdbd4c976bf7e7c8b3282e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:08 np0005535469 podman[104754]: 2025-11-25 15:56:08.167688978 +0000 UTC m=+0.153314231 container init a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:56:08 np0005535469 podman[104754]: 2025-11-25 15:56:08.181665038 +0000 UTC m=+0.167290291 container start a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:56:08 np0005535469 podman[104754]: 2025-11-25 15:56:08.185873283 +0000 UTC m=+0.171498536 container attach a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:56:08 np0005535469 magical_villani[104770]: {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:    "0": [
Nov 25 10:56:08 np0005535469 magical_villani[104770]:        {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "devices": [
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "/dev/loop3"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            ],
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_name": "ceph_lv0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_size": "21470642176",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "name": "ceph_lv0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "tags": {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cluster_name": "ceph",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.crush_device_class": "",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.encrypted": "0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osd_id": "0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.type": "block",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.vdo": "0"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            },
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "type": "block",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "vg_name": "ceph_vg0"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:        }
Nov 25 10:56:08 np0005535469 magical_villani[104770]:    ],
Nov 25 10:56:08 np0005535469 magical_villani[104770]:    "1": [
Nov 25 10:56:08 np0005535469 magical_villani[104770]:        {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "devices": [
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "/dev/loop4"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            ],
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_name": "ceph_lv1",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_size": "21470642176",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "name": "ceph_lv1",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "tags": {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cluster_name": "ceph",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.crush_device_class": "",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.encrypted": "0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osd_id": "1",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.type": "block",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.vdo": "0"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            },
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "type": "block",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "vg_name": "ceph_vg1"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:        }
Nov 25 10:56:08 np0005535469 magical_villani[104770]:    ],
Nov 25 10:56:08 np0005535469 magical_villani[104770]:    "2": [
Nov 25 10:56:08 np0005535469 magical_villani[104770]:        {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "devices": [
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "/dev/loop5"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            ],
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_name": "ceph_lv2",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_size": "21470642176",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "name": "ceph_lv2",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "tags": {
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.cluster_name": "ceph",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.crush_device_class": "",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.encrypted": "0",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osd_id": "2",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.type": "block",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:                "ceph.vdo": "0"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            },
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "type": "block",
Nov 25 10:56:08 np0005535469 magical_villani[104770]:            "vg_name": "ceph_vg2"
Nov 25 10:56:08 np0005535469 magical_villani[104770]:        }
Nov 25 10:56:08 np0005535469 magical_villani[104770]:    ]
Nov 25 10:56:08 np0005535469 magical_villani[104770]: }
Nov 25 10:56:09 np0005535469 systemd[1]: libpod-a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b.scope: Deactivated successfully.
Nov 25 10:56:09 np0005535469 podman[104754]: 2025-11-25 15:56:09.017552058 +0000 UTC m=+1.003177311 container died a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:56:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a7dd38e4027c0095903d251c957abbc8d2a63827ccfdbd4c976bf7e7c8b3282e-merged.mount: Deactivated successfully.
Nov 25 10:56:09 np0005535469 podman[104754]: 2025-11-25 15:56:09.108180437 +0000 UTC m=+1.093805690 container remove a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:56:09 np0005535469 systemd[1]: libpod-conmon-a839eb9c11a6bca2da6113cbdacee4e3defb2de7bf9eb47551e62dda61239d0b.scope: Deactivated successfully.
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.798165815 +0000 UTC m=+0.040560951 container create ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:56:09 np0005535469 systemd[1]: Started libpod-conmon-ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1.scope.
Nov 25 10:56:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.782343147 +0000 UTC m=+0.024738303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.879864733 +0000 UTC m=+0.122259899 container init ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.887920652 +0000 UTC m=+0.130315808 container start ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.891548701 +0000 UTC m=+0.133943867 container attach ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:56:09 np0005535469 optimistic_brahmagupta[104949]: 167 167
Nov 25 10:56:09 np0005535469 systemd[1]: libpod-ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1.scope: Deactivated successfully.
Nov 25 10:56:09 np0005535469 conmon[104949]: conmon ae7b351caea9322aa587 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1.scope/container/memory.events
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.895671662 +0000 UTC m=+0.138066808 container died ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 10:56:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9badfd01f404e86145da437d6a0406fc83c8ebdced1048fcea9ff83c540dbf73-merged.mount: Deactivated successfully.
Nov 25 10:56:09 np0005535469 podman[104932]: 2025-11-25 15:56:09.935086172 +0000 UTC m=+0.177481298 container remove ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 10:56:09 np0005535469 systemd[1]: libpod-conmon-ae7b351caea9322aa587145629762dae88572a82f4f718030bd631a46bd82fd1.scope: Deactivated successfully.
Nov 25 10:56:09 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 10:56:09 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 10:56:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v146: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 104 op/s
Nov 25 10:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:56:10 np0005535469 podman[104973]: 2025-11-25 15:56:10.10777715 +0000 UTC m=+0.050461501 container create e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:56:10 np0005535469 systemd[1]: Started libpod-conmon-e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152.scope.
Nov 25 10:56:10 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:10 np0005535469 podman[104973]: 2025-11-25 15:56:10.0842069 +0000 UTC m=+0.026891301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:56:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680d7a8e43c0deb32b312cb3c48bca7017baae28669316bea07c05799154858d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680d7a8e43c0deb32b312cb3c48bca7017baae28669316bea07c05799154858d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680d7a8e43c0deb32b312cb3c48bca7017baae28669316bea07c05799154858d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680d7a8e43c0deb32b312cb3c48bca7017baae28669316bea07c05799154858d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:10 np0005535469 podman[104973]: 2025-11-25 15:56:10.190779212 +0000 UTC m=+0.133463563 container init e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:56:10 np0005535469 podman[104973]: 2025-11-25 15:56:10.200479626 +0000 UTC m=+0.143163987 container start e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 10:56:10 np0005535469 podman[104973]: 2025-11-25 15:56:10.20430585 +0000 UTC m=+0.146990271 container attach e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]: {
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "osd_id": 1,
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "type": "bluestore"
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:    },
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "osd_id": 2,
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "type": "bluestore"
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:    },
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "osd_id": 0,
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:        "type": "bluestore"
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]:    }
Nov 25 10:56:11 np0005535469 wonderful_darwin[104989]: }
Nov 25 10:56:11 np0005535469 systemd[1]: libpod-e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152.scope: Deactivated successfully.
Nov 25 10:56:11 np0005535469 podman[104973]: 2025-11-25 15:56:11.19421992 +0000 UTC m=+1.136904271 container died e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 10:56:11 np0005535469 systemd[1]: libpod-e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152.scope: Consumed 1.002s CPU time.
Nov 25 10:56:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-680d7a8e43c0deb32b312cb3c48bca7017baae28669316bea07c05799154858d-merged.mount: Deactivated successfully.
Nov 25 10:56:11 np0005535469 podman[104973]: 2025-11-25 15:56:11.303553528 +0000 UTC m=+1.246237879 container remove e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:56:11 np0005535469 systemd[1]: libpod-conmon-e1bc391bf9347301d90aba3571d34664c5ca67bc1975a4e0040f8add069e2152.scope: Deactivated successfully.
Nov 25 10:56:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:56:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:56:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 402b9434-fc41-4600-9d05-dd6bc252caec does not exist
Nov 25 10:56:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 372ec6d8-00c9-4286-8525-6dd1a90b5959 does not exist
Nov 25 10:56:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v147: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 52 op/s
Nov 25 10:56:12 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 25 10:56:12 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 25 10:56:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:13 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 25 10:56:13 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 25 10:56:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 25 10:56:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 25 10:56:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v148: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 52 op/s
Nov 25 10:56:14 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 10:56:14 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 10:56:14 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 10:56:14 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 10:56:15 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 10:56:15 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 10:56:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v149: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 52 op/s
Nov 25 10:56:16 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 25 10:56:16 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 25 10:56:16 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 10:56:16 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 10:56:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v150: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:18 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 10:56:18 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 10:56:18 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 10:56:18 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 10:56:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v151: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:21 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 10:56:21 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 10:56:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v152: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:23 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Nov 25 10:56:23 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Nov 25 10:56:23 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 25 10:56:23 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 25 10:56:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v153: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:24 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 25 10:56:24 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 25 10:56:24 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 25 10:56:24 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 25 10:56:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v154: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:26 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.1e deep-scrub starts
Nov 25 10:56:26 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.1e deep-scrub ok
Nov 25 10:56:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:27 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 10:56:27 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 10:56:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v155: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:28 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 10:56:28 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 10:56:29 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 25 10:56:29 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 25 10:56:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v156: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:30 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 10:56:30 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 10:56:31 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 10:56:31 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 10:56:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v157: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:32 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 10:56:32 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 10:56:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v158: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v159: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:36 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 25 10:56:36 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 25 10:56:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v160: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 10:56:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 10:56:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:56:39
Nov 25 10:56:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:56:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:56:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'images', 'backups', 'volumes']
Nov 25 10:56:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 10:56:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v161: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:56:40 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 25 10:56:40 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 25 10:56:41 np0005535469 python3[105115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:56:41 np0005535469 podman[105116]: 2025-11-25 15:56:41.902618361 +0000 UTC m=+0.044813538 container create bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d (image=quay.io/ceph/ceph:v18, name=zen_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 10:56:41 np0005535469 systemd[1]: Started libpod-conmon-bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d.scope.
Nov 25 10:56:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9d4ab8eaa241680a0e01ebea97a004c73af17544a3bcc87be0de70bf2993d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9d4ab8eaa241680a0e01ebea97a004c73af17544a3bcc87be0de70bf2993d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:41 np0005535469 podman[105116]: 2025-11-25 15:56:41.884772897 +0000 UTC m=+0.026968104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:56:41 np0005535469 podman[105116]: 2025-11-25 15:56:41.986589411 +0000 UTC m=+0.128784638 container init bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d (image=quay.io/ceph/ceph:v18, name=zen_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 10:56:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v162: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:41 np0005535469 podman[105116]: 2025-11-25 15:56:41.994292291 +0000 UTC m=+0.136487468 container start bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d (image=quay.io/ceph/ceph:v18, name=zen_carver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 10:56:41 np0005535469 podman[105116]: 2025-11-25 15:56:41.99866532 +0000 UTC m=+0.140860507 container attach bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d (image=quay.io/ceph/ceph:v18, name=zen_carver, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:56:42 np0005535469 zen_carver[105133]: could not fetch user info: no user info saved
Nov 25 10:56:42 np0005535469 systemd[1]: libpod-bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d.scope: Deactivated successfully.
Nov 25 10:56:42 np0005535469 podman[105116]: 2025-11-25 15:56:42.159354502 +0000 UTC m=+0.301549679 container died bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d (image=quay.io/ceph/ceph:v18, name=zen_carver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:56:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-be9d4ab8eaa241680a0e01ebea97a004c73af17544a3bcc87be0de70bf2993d7-merged.mount: Deactivated successfully.
Nov 25 10:56:42 np0005535469 podman[105116]: 2025-11-25 15:56:42.255467822 +0000 UTC m=+0.397662999 container remove bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d (image=quay.io/ceph/ceph:v18, name=zen_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 10:56:42 np0005535469 systemd[1]: libpod-conmon-bf23d240c7ba7e5d544ef4936b3890eb1f72a489670b2f90602bc4f75df2297d.scope: Deactivated successfully.
Nov 25 10:56:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:42 np0005535469 python3[105255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:56:42 np0005535469 podman[105256]: 2025-11-25 15:56:42.661721963 +0000 UTC m=+0.090145239 container create c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce (image=quay.io/ceph/ceph:v18, name=modest_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:56:42 np0005535469 podman[105256]: 2025-11-25 15:56:42.611915131 +0000 UTC m=+0.040338467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 10:56:42 np0005535469 systemd[1]: Started libpod-conmon-c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce.scope.
Nov 25 10:56:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:56:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33de5d194e998b88928ccedd537c3dc7ca5e40b1681b471efcbaeda2d1e13458/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33de5d194e998b88928ccedd537c3dc7ca5e40b1681b471efcbaeda2d1e13458/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:56:42 np0005535469 podman[105256]: 2025-11-25 15:56:42.794614282 +0000 UTC m=+0.223037588 container init c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce (image=quay.io/ceph/ceph:v18, name=modest_wright, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 10:56:42 np0005535469 podman[105256]: 2025-11-25 15:56:42.800210424 +0000 UTC m=+0.228633670 container start c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce (image=quay.io/ceph/ceph:v18, name=modest_wright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 10:56:42 np0005535469 podman[105256]: 2025-11-25 15:56:42.812949959 +0000 UTC m=+0.241373215 container attach c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce (image=quay.io/ceph/ceph:v18, name=modest_wright, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 10:56:43 np0005535469 modest_wright[105271]: {
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "user_id": "openstack",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "display_name": "openstack",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "email": "",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "suspended": 0,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "max_buckets": 1000,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "subusers": [],
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "keys": [
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        {
Nov 25 10:56:43 np0005535469 modest_wright[105271]:            "user": "openstack",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:            "access_key": "MRUXRM7Z4BZDFXYETUMD",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:            "secret_key": "hYzUERxuj4AzaJB7wtZ81pUbVXrP5yuHxQjAmcEB"
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        }
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    ],
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "swift_keys": [],
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "caps": [],
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "op_mask": "read, write, delete",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "default_placement": "",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "default_storage_class": "",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "placement_tags": [],
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "bucket_quota": {
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "enabled": false,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "check_on_raw": false,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "max_size": -1,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "max_size_kb": 0,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "max_objects": -1
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    },
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "user_quota": {
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "enabled": false,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "check_on_raw": false,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "max_size": -1,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "max_size_kb": 0,
Nov 25 10:56:43 np0005535469 modest_wright[105271]:        "max_objects": -1
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    },
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "temp_url_keys": [],
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "type": "rgw",
Nov 25 10:56:43 np0005535469 modest_wright[105271]:    "mfa_ids": []
Nov 25 10:56:43 np0005535469 modest_wright[105271]: }
Nov 25 10:56:43 np0005535469 modest_wright[105271]: 
Nov 25 10:56:43 np0005535469 systemd[1]: libpod-c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce.scope: Deactivated successfully.
Nov 25 10:56:43 np0005535469 podman[105256]: 2025-11-25 15:56:43.730518154 +0000 UTC m=+1.158941410 container died c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce (image=quay.io/ceph/ceph:v18, name=modest_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 10:56:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-33de5d194e998b88928ccedd537c3dc7ca5e40b1681b471efcbaeda2d1e13458-merged.mount: Deactivated successfully.
Nov 25 10:56:43 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 10:56:43 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 10:56:43 np0005535469 podman[105256]: 2025-11-25 15:56:43.870238628 +0000 UTC m=+1.298661894 container remove c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce (image=quay.io/ceph/ceph:v18, name=modest_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:56:43 np0005535469 systemd[1]: libpod-conmon-c2dcb25183c1e018a7fa28c27d6c9c8dcabce3b41db69979cc5d6dbe9b417cce.scope: Deactivated successfully.
Nov 25 10:56:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v163: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v164: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:56:46 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Nov 25 10:56:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:56:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:46 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 10:56:46 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 10:56:46 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 10:56:46 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 25 10:56:47 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev d46a767e-5eac-49cd-b43e-c4c4ff2c3e6d (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v166: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 204 B/s wr, 1 op/s
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 25 10:56:48 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 6c190af9-7262-479b-8141-4a59e439496f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:48 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 52 pg[8.0( v 43'4 (0'0,43'4] local-lis/les=42/43 n=4 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.080038071s) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 43'3 mlcod 43'3 active pruub 186.133148193s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:48 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 52 pg[8.0( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.080038071s) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 43'3 mlcod 0'0 unknown pruub 186.133148193s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:48 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Nov 25 10:56:48 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 25 10:56:49 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev 08209501-999e-49ad-9ca7-3ed66df3fb11 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.12( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.11( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.13( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1d( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1a( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.18( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1e( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1b( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.4( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.5( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.19( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.6( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.7( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.9( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.b( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.a( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.8( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.e( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.d( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.3( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.2( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1( v 43'4 (0'0,43'4] local-lis/les=42/43 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.10( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.17( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.16( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.15( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.14( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.13( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.5( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.7( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.0( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 43'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.19( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.a( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1e( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.8( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.3( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.1( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.16( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.17( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 53 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 25 10:56:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 25 10:56:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v169: 228 pgs: 228 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] update: starting ev a25558cd-0d03-40ce-a6b0-f02b30faca23 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev d46a767e-5eac-49cd-b43e-c4c4ff2c3e6d (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event d46a767e-5eac-49cd-b43e-c4c4ff2c3e6d (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 6c190af9-7262-479b-8141-4a59e439496f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 6c190af9-7262-479b-8141-4a59e439496f (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev 08209501-999e-49ad-9ca7-3ed66df3fb11 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event 08209501-999e-49ad-9ca7-3ed66df3fb11 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] complete: finished ev a25558cd-0d03-40ce-a6b0-f02b30faca23 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] Completed event a25558cd-0d03-40ce-a6b0-f02b30faca23 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 25 10:56:50 np0005535469 ceph-mgr[75280]: [progress INFO root] Writing back 16 completed events
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[10.0( v 50'64 (0'0,50'64] local-lis/les=46/47 n=8 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=9.240034103s) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 50'63 mlcod 50'63 active pruub 172.775115967s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[10.0( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=9.240034103s) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 50'63 mlcod 0'0 unknown pruub 172.775115967s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.597875595s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.073135376s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.598161697s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.073348999s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.597989082s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.073348999s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596759796s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072937012s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[9.0( v 50'389 (0'0,50'389] local-lis/les=44/45 n=177 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=14.752571106s) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 50'388 mlcod 50'388 active pruub 196.228820801s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596593857s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072937012s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596490860s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.073104858s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596234322s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072845459s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596064568s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072891235s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596015930s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072891235s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.14( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596328735s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.073135376s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596176147s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072845459s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.595501900s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072814941s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.595461845s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072814941s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.595570564s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.073379517s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.596031189s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.073104858s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.2( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.595526695s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.073379517s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.c( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.594168663s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072372437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.594127655s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072372437s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.d( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.593737602s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072326660s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.593409538s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072036743s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.593684196s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072326660s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.593371391s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072036743s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592828751s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.071960449s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592794418s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.071960449s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592642784s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.071899414s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592589378s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.071899414s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592560768s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.071914673s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592134476s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.071548462s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592515945s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.071914673s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592107773s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.071548462s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592156410s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.071701050s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592131615s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.071701050s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592117310s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.071838379s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592252731s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072021484s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592303276s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.072082520s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=52/53 n=1 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592225075s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072021484s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592077255s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.071838379s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.592266083s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.072082520s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.586389542s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 195.066314697s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=52/53 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=13.586364746s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.066314697s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.e( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.10( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.15( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.f( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.1b( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.b( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.1c( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.6( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.4( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.9( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.12( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 54 pg[8.11( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.18( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.1d( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.1f( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 54 pg[8.1a( empty local-lis/les=0/0 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 54 pg[9.0( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=14.752571106s) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 50'388 mlcod 0'0 unknown pruub 196.228820801s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 10:56:51 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 10:56:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 10:56:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 25 10:56:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 10:56:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:56:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:56:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v171: 290 pgs: 9 peering, 62 unknown, 219 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 10:56:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.15( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.14( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.16( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.17( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.3( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.2( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.11( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.d( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.f( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.9( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.c( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.b( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.a( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1e( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1b( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.b( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.a( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.d( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.13( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.8( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.6( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.e( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.7( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.4( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.5( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.12( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1a( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.18( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.19( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1e( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.11( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1d( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.10( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1f( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1c( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1d( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1a( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1c( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1f( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.19( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.12( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.18( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.7( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.6( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.13( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.5( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1b( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.4( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.10( v 50'389 lc 0'0 (0'0,50'389] local-lis/les=44/45 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.8( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.f( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.9( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.c( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.e( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.2( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.3( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.14( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.15( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.17( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.16( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=54/55 n=1 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 55 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.0( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 50'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.14( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.2( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.4( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1a( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.12( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.10( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 55 pg[9.a( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=50'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.d( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.a( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1f( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1c( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1d( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=54/55 n=1 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.5( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.18( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.0( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 50'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.c( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.e( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.9( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.3( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.14( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.15( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=54/55 n=0 ec=52/42 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 55 pg[10.1b( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=46/46 les/c/f=47/47/0 sis=54) [2] r=0 lpr=54 pi=[46,54)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 25 10:56:52 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 25 10:56:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 25 10:56:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 10:56:53 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 25 10:56:53 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 10:56:53 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 10:56:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 9 peering, 93 unknown, 219 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:56:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 10:56:54 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 10:56:54 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 56 pg[11.0( v 50'2 (0'0,50'2] local-lis/les=48/49 n=2 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=15.382628441s) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 50'1 mlcod 50'1 active pruub 200.347534180s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 56 pg[11.0( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=15.382628441s) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 50'1 mlcod 0'0 unknown pruub 200.347534180s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 25 10:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 10:56:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.17( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.16( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.15( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.14( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.13( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.2( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=1 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=48/49 n=1 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.f( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.d( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.b( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.9( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.c( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.8( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.3( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.a( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.4( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.5( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.6( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.7( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.18( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1a( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1b( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1c( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1d( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1f( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.10( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.11( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.12( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.19( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.16( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.13( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=56/57 n=1 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=56/57 n=1 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.c( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.a( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.0( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 50'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.5( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.7( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1d( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 57 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:56:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 25 10:56:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 25 10:56:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 9 peering, 62 unknown, 250 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 25 10:56:56 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 10:56:56 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 10:56:56 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 25 10:56:56 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 25 10:56:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:56:57 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 10:56:57 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 10:56:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 25 10:56:58 np0005535469 systemd-logind[791]: New session 34 of user zuul.
Nov 25 10:56:58 np0005535469 systemd[1]: Started Session 34 of User zuul.
Nov 25 10:56:59 np0005535469 python3.9[105524]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:56:59 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Nov 25 10:56:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Nov 25 10:56:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:56:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 25 10:56:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 10:56:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:56:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176797867s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.498336792s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.980028152s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301635742s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176706314s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.498367310s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176641464s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.498336792s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979940414s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301635742s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176643372s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.498367310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979711533s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301376343s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979533195s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301376343s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979741096s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301345825s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979216576s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301345825s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177179337s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.499572754s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177148819s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.499572754s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=56/57 n=1 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978815079s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301361084s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=56/57 n=1 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978983879s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301559448s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=56/57 n=1 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978785515s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301361084s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=56/57 n=1 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978956223s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301559448s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978747368s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301681519s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978831291s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301788330s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978787422s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301788330s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175827026s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.498977661s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175788879s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.498977661s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 10:57:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978146553s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301666260s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978124619s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301666260s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175422668s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.499023438s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175392151s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.499023438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978062630s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301712036s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978048325s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301712036s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978343964s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.302108765s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175643921s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.499420166s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978322029s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.302108765s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175622940s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.499420166s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175014496s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.498901367s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.174980164s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.498901367s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977716446s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301681519s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175317764s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.499435425s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175298691s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.499435425s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977436066s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301803589s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977405548s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301803589s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977431297s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301849365s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977414131s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301849365s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175075531s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.499633789s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175407410s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.499969482s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175037384s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.499633789s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977190971s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.301879883s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175883293s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.500595093s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175869942s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.500595093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977165222s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.301879883s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977207184s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.302062988s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977127075s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.302154541s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175229073s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.499969482s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977070808s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.302261353s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175520897s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.500717163s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.976924896s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.302154541s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.977042198s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.302261353s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175457001s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.500717163s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.976871490s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.302062988s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.976944923s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.302307129s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.976928711s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.302307129s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979290009s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.304824829s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.979272842s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.304824829s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175242424s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.500823975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.976616859s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.302215576s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175219536s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.500823975s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.976602554s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.302215576s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175162315s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.500778198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175136566s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.500778198s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978868484s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.304611206s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978852272s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.304611206s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175005913s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.500869751s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978664398s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.304687500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978585243s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.304626465s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978561401s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.304626465s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978627205s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.304687500s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978557587s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.304580688s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.174989700s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.500869751s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.174620628s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 198.500823975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.174591064s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.500823975s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978450775s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.304580688s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978166580s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 201.304748535s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=56/57 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.978141785s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.304748535s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179459572s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002365112s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179430962s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002365112s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179371834s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002426147s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179351807s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002426147s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179504395s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002792358s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179482460s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002792358s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179316521s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002746582s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179296494s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002746582s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179234505s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002853394s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179217339s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002853394s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179097176s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002853394s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179081917s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002853394s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178906441s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002868652s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178889275s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002868652s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178861618s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002944946s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178845406s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002944946s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.1e( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178711891s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.002990723s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178692818s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.002990723s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178392410s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003051758s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178367615s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003051758s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177290916s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003112793s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177261353s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003112793s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177150726s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003143311s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177130699s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003143311s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.9( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177211761s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 181.003326416s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.9( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.177187920s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 181.003326416s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.178246498s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003082275s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.e( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176872253s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 181.003234863s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.e( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176780701s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 181.003234863s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176628113s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003082275s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176648140s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003265381s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176615715s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003265381s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.14( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176429749s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 181.003387451s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.14( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176401138s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 181.003387451s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.12( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.10( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.1a( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176314354s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003326416s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=54/55 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176264763s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003326416s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.15( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176290512s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 181.003433228s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176252365s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003463745s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.15( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176237106s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 181.003433228s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176227570s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003463745s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.176012993s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 181.003448486s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175990105s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.003448486s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.d( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.179453850s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 181.002609253s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:00 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 58 pg[10.d( v 55'65 (0'0,55'65] local-lis/les=54/55 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.175015450s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 181.002609253s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.6( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.e( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.4( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.16( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 25 10:57:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=58/59 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.14( v 55'65 lc 50'54 (0'0,55'65] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=55'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=58/59 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 59 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=58/59 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.1e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.9( v 55'65 lc 50'56 (0'0,55'65] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=55'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=58/59 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=58/59 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.d( v 55'65 lc 50'50 (0'0,55'65] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=55'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.15( v 55'65 lc 50'46 (0'0,55'65] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=55'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=58/59 n=1 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.e( v 55'65 lc 50'48 (0'0,55'65] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=55'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=58/59 n=1 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=58/59 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 59 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.9( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=58/59 n=1 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 59 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=58/59 n=0 ec=56/48 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 10:57:01 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 10:57:01 np0005535469 python3.9[105742]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:57:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 56 B/s, 0 objects/s recovering
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 25 10:57:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 10:57:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 25 10:57:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 10:57:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 10:57:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 10:57:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:02 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 60 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:03 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 10:57:03 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 10:57:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 25 10:57:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 10:57:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 10:57:03 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.384049416s) [0] async=[0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.758605957s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.383935928s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.758605957s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.549491882s) [0] async=[0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924301147s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.549408913s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924301147s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.383435249s) [0] async=[0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.758651733s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.548580170s) [0] async=[0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924163818s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.548507690s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924163818s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.382875443s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.758651733s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.383292198s) [0] async=[0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.759399414s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 61 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61 pruub=15.382978439s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.759399414s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 61 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 94 B/s, 0 objects/s recovering
Nov 25 10:57:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 25 10:57:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 10:57:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 25 10:57:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 10:57:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 10:57:04 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417510033s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924194336s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417924881s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924606323s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417809486s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924606323s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417390823s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924194336s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417397499s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924514771s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417352676s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924514771s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416931152s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924194336s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416890144s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924194336s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417145729s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924545288s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417101860s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924545288s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.417028427s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924560547s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416648865s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924224854s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416980743s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924560547s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=59/60 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416617393s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924224854s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416814804s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924591064s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416695595s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924484253s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416664124s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924484253s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416769028s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924591064s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416515350s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924407959s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416471481s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924407959s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416478157s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 208.924560547s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 62 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=59/60 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.416445732s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.924560547s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.5( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.11( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.b( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 62 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=61/62 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 10:57:05 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 10:57:05 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 25 10:57:05 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 25 10:57:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 25 10:57:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 10:57:05 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 25 10:57:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.3( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.1d( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.1( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.d( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.9( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.1b( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 63 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:05 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 10:57:05 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 10:57:05 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Nov 25 10:57:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 11 peering, 310 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 560 B/s, 22 objects/s recovering
Nov 25 10:57:06 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Nov 25 10:57:06 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 10:57:06 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 10:57:06 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 25 10:57:06 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 25 10:57:07 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 10:57:07 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 10:57:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:07 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 10:57:07 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 10:57:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 11 peering, 310 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 402 B/s, 16 objects/s recovering
Nov 25 10:57:08 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 10:57:08 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 10:57:09 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 10:57:09 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 10:57:09 np0005535469 systemd[1]: session-34.scope: Deactivated successfully.
Nov 25 10:57:09 np0005535469 systemd[1]: session-34.scope: Consumed 8.563s CPU time.
Nov 25 10:57:09 np0005535469 systemd-logind[791]: Session 34 logged out. Waiting for processes to exit.
Nov 25 10:57:09 np0005535469 systemd-logind[791]: Removed session 34.
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 11 peering, 310 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 341 B/s, 14 objects/s recovering
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:57:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 10:57:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 10:57:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 280 B/s, 11 objects/s recovering
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 10:57:12 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:57:12 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dca2871f-6914-4098-975e-6010fef31745 does not exist
Nov 25 10:57:12 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9a5fa946-c6ad-4eb6-a1ec-39b748277181 does not exist
Nov 25 10:57:12 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1c740585-3bbb-4fa6-bc9f-d3b798840f04 does not exist
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:57:12 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 25 10:57:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:12 np0005535469 podman[106071]: 2025-11-25 15:57:12.904590983 +0000 UTC m=+0.023317673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:57:13 np0005535469 podman[106071]: 2025-11-25 15:57:13.021808846 +0000 UTC m=+0.140535496 container create d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_leakey, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 10:57:13 np0005535469 systemd[1]: Started libpod-conmon-d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b.scope.
Nov 25 10:57:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:57:13 np0005535469 podman[106071]: 2025-11-25 15:57:13.204470035 +0000 UTC m=+0.323196705 container init d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_leakey, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 10:57:13 np0005535469 podman[106071]: 2025-11-25 15:57:13.211810025 +0000 UTC m=+0.330536675 container start d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_leakey, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:57:13 np0005535469 clever_leakey[106088]: 167 167
Nov 25 10:57:13 np0005535469 systemd[1]: libpod-d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b.scope: Deactivated successfully.
Nov 25 10:57:13 np0005535469 podman[106071]: 2025-11-25 15:57:13.296733511 +0000 UTC m=+0.415460181 container attach d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:57:13 np0005535469 podman[106071]: 2025-11-25 15:57:13.297110601 +0000 UTC m=+0.415837251 container died d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 10:57:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:57:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 10:57:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:57:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:57:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-07b3e6d49c8eb8c748a639584b03a08a9efb84c503aa2519481cc9074b312747-merged.mount: Deactivated successfully.
Nov 25 10:57:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 25 10:57:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 10:57:14 np0005535469 podman[106071]: 2025-11-25 15:57:14.131179938 +0000 UTC m=+1.249906598 container remove d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:57:14 np0005535469 systemd[1]: libpod-conmon-d333329e7b31feaf72a64a241d242a8d8f1f94db5b534215c5cf33fca7a5205b.scope: Deactivated successfully.
Nov 25 10:57:14 np0005535469 podman[106113]: 2025-11-25 15:57:14.370544158 +0000 UTC m=+0.121105689 container create 822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 10:57:14 np0005535469 podman[106113]: 2025-11-25 15:57:14.277755518 +0000 UTC m=+0.028317079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:57:14 np0005535469 systemd[1]: Started libpod-conmon-822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659.scope.
Nov 25 10:57:14 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 10:57:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 25 10:57:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:57:14 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 10:57:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872f1b9412dc487a2dc02b08caac312a4993309e805a003a9090044ac9414b2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872f1b9412dc487a2dc02b08caac312a4993309e805a003a9090044ac9414b2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872f1b9412dc487a2dc02b08caac312a4993309e805a003a9090044ac9414b2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872f1b9412dc487a2dc02b08caac312a4993309e805a003a9090044ac9414b2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872f1b9412dc487a2dc02b08caac312a4993309e805a003a9090044ac9414b2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 10:57:15 np0005535469 podman[106113]: 2025-11-25 15:57:15.390196404 +0000 UTC m=+1.140757945 container init 822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 10:57:15 np0005535469 podman[106113]: 2025-11-25 15:57:15.397168044 +0000 UTC m=+1.147729565 container start 822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:57:15 np0005535469 podman[106113]: 2025-11-25 15:57:15.861174463 +0000 UTC m=+1.611735994 container attach 822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:57:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 10:57:16 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 10:57:16 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 10:57:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 10:57:16 np0005535469 nostalgic_kare[106130]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:57:16 np0005535469 nostalgic_kare[106130]: --> relative data size: 1.0
Nov 25 10:57:16 np0005535469 nostalgic_kare[106130]: --> All data devices are unavailable
Nov 25 10:57:16 np0005535469 systemd[1]: libpod-822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659.scope: Deactivated successfully.
Nov 25 10:57:16 np0005535469 podman[106113]: 2025-11-25 15:57:16.388234984 +0000 UTC m=+2.138796505 container died 822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 10:57:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 25 10:57:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 25 10:57:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 25 10:57:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-872f1b9412dc487a2dc02b08caac312a4993309e805a003a9090044ac9414b2c-merged.mount: Deactivated successfully.
Nov 25 10:57:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 25 10:57:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 25 10:57:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 10:57:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 10:57:17 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 25 10:57:17 np0005535469 podman[106113]: 2025-11-25 15:57:17.869374701 +0000 UTC m=+3.619936232 container remove 822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:57:17 np0005535469 systemd[1]: libpod-conmon-822022caed36b86099c25c74c3e7ae6ebb6ffb263bfbb391a43c3a004a253659.scope: Deactivated successfully.
Nov 25 10:57:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 10:57:18 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 10:57:18 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.49721627 +0000 UTC m=+0.024074155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.653105202 +0000 UTC m=+0.179963067 container create 222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:57:18 np0005535469 systemd[1]: Started libpod-conmon-222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f.scope.
Nov 25 10:57:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.884412113 +0000 UTC m=+0.411270058 container init 222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.891609718 +0000 UTC m=+0.418467603 container start 222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cray, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 10:57:18 np0005535469 cool_cray[106329]: 167 167
Nov 25 10:57:18 np0005535469 systemd[1]: libpod-222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f.scope: Deactivated successfully.
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 10:57:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.901836986 +0000 UTC m=+0.428694871 container attach 222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.902956676 +0000 UTC m=+0.429814591 container died 222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cray, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 10:57:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c37f47525661f1028f2dca939862fe22552d2a61e6eccb3a8acb8db7aef347aa-merged.mount: Deactivated successfully.
Nov 25 10:57:18 np0005535469 podman[106313]: 2025-11-25 15:57:18.978878018 +0000 UTC m=+0.505735913 container remove 222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:57:18 np0005535469 systemd[1]: libpod-conmon-222ab46d1e256c9943a20a1d5fa80acfe289b00ced6546c45a5bf363b9b2408f.scope: Deactivated successfully.
Nov 25 10:57:19 np0005535469 podman[106355]: 2025-11-25 15:57:19.167257454 +0000 UTC m=+0.044296784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:57:19 np0005535469 podman[106355]: 2025-11-25 15:57:19.312541839 +0000 UTC m=+0.189581129 container create d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:57:19 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 10:57:19 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 10:57:19 np0005535469 systemd[1]: Started libpod-conmon-d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b.scope.
Nov 25 10:57:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:57:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5baca63ffb5968f8d99e87fb0c4bc8475ae67fb2fac7920ad8c84794a0ce0da3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5baca63ffb5968f8d99e87fb0c4bc8475ae67fb2fac7920ad8c84794a0ce0da3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5baca63ffb5968f8d99e87fb0c4bc8475ae67fb2fac7920ad8c84794a0ce0da3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5baca63ffb5968f8d99e87fb0c4bc8475ae67fb2fac7920ad8c84794a0ce0da3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.106289864s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 222.500366211s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.105262756s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 222.499069214s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.106228828s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.500366211s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.104880333s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.499069214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.105597496s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 222.500015259s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.105571747s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.500015259s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:19 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:19 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.105942726s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 222.500991821s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:19 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 67 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=13.105911255s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.500991821s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:19 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:19 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:19 np0005535469 podman[106355]: 2025-11-25 15:57:19.468848982 +0000 UTC m=+0.345888332 container init d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:57:19 np0005535469 podman[106355]: 2025-11-25 15:57:19.482475012 +0000 UTC m=+0.359514322 container start d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 10:57:19 np0005535469 podman[106355]: 2025-11-25 15:57:19.487082988 +0000 UTC m=+0.364122398 container attach d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:57:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 25 10:57:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 25 10:57:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 10:57:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 10:57:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]: {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:    "0": [
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:        {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "devices": [
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "/dev/loop3"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            ],
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_name": "ceph_lv0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_size": "21470642176",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "name": "ceph_lv0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "tags": {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cluster_name": "ceph",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.crush_device_class": "",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.encrypted": "0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osd_id": "0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.type": "block",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.vdo": "0"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            },
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "type": "block",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "vg_name": "ceph_vg0"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:        }
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:    ],
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:    "1": [
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:        {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "devices": [
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "/dev/loop4"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            ],
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_name": "ceph_lv1",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_size": "21470642176",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "name": "ceph_lv1",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "tags": {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cluster_name": "ceph",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.crush_device_class": "",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.encrypted": "0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osd_id": "1",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.type": "block",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.vdo": "0"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            },
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "type": "block",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "vg_name": "ceph_vg1"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:        }
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:    ],
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:    "2": [
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:        {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "devices": [
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "/dev/loop5"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            ],
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_name": "ceph_lv2",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_size": "21470642176",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "name": "ceph_lv2",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "tags": {
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.cluster_name": "ceph",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.crush_device_class": "",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.encrypted": "0",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osd_id": "2",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.type": "block",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:                "ceph.vdo": "0"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            },
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "type": "block",
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:            "vg_name": "ceph_vg2"
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:        }
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]:    ]
Nov 25 10:57:20 np0005535469 gallant_fermi[106372]: }
Nov 25 10:57:20 np0005535469 systemd[1]: libpod-d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b.scope: Deactivated successfully.
Nov 25 10:57:20 np0005535469 podman[106355]: 2025-11-25 15:57:20.310536887 +0000 UTC m=+1.187576187 container died d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:57:20 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 10:57:20 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:20 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:20 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 68 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5baca63ffb5968f8d99e87fb0c4bc8475ae67fb2fac7920ad8c84794a0ce0da3-merged.mount: Deactivated successfully.
Nov 25 10:57:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 25 10:57:21 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 25 10:57:21 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 25 10:57:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 10:57:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 10:57:21 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 25 10:57:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 10:57:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 10:57:21 np0005535469 podman[106355]: 2025-11-25 15:57:21.79928019 +0000 UTC m=+2.676319500 container remove d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 10:57:21 np0005535469 systemd[1]: libpod-conmon-d773540a7211f3e6ba21376b7b9bac00e893a5b9031999b901f174b804c9d32b.scope: Deactivated successfully.
Nov 25 10:57:21 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 69 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=68/69 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=15.758270264s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=50'389 mlcod 0'0 active pruub 237.130462646s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=15.758186340s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 237.130462646s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=14.640280724s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=50'389 mlcod 0'0 active pruub 236.013000488s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=14.640219688s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 236.013000488s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=15.757534981s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=50'389 mlcod 0'0 active pruub 237.130737305s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=15.757502556s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 237.130737305s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=15.757192612s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=50'389 mlcod 0'0 active pruub 237.130630493s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 69 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=15.757156372s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 237.130630493s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2] r=0 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 69 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=68/69 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 69 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=68/69 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 69 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=68/69 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 10:57:22 np0005535469 podman[106534]: 2025-11-25 15:57:22.514910362 +0000 UTC m=+0.106541824 container create 0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 10:57:22 np0005535469 podman[106534]: 2025-11-25 15:57:22.435551247 +0000 UTC m=+0.027182699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 70 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=68/69 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.372945786s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 227.909317017s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 70 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=68/69 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.372828484s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.909317017s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 70 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=9.963532448s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 222.500335693s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 70 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=9.963431358s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.500335693s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 70 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=9.963844299s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 222.501007080s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 70 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=9.963782310s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.501007080s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:22 np0005535469 systemd[1]: Started libpod-conmon-0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3.scope.
Nov 25 10:57:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=62/63 n=6 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 70 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=61/62 n=6 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 10:57:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 10:57:22 np0005535469 podman[106534]: 2025-11-25 15:57:22.711481109 +0000 UTC m=+0.303112551 container init 0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:57:22 np0005535469 podman[106534]: 2025-11-25 15:57:22.718316034 +0000 UTC m=+0.309947476 container start 0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:57:22 np0005535469 inspiring_chaum[106551]: 167 167
Nov 25 10:57:22 np0005535469 systemd[1]: libpod-0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3.scope: Deactivated successfully.
Nov 25 10:57:22 np0005535469 podman[106534]: 2025-11-25 15:57:22.755287579 +0000 UTC m=+0.346919011 container attach 0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:57:22 np0005535469 podman[106534]: 2025-11-25 15:57:22.7556865 +0000 UTC m=+0.347317932 container died 0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:57:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-151bc2641c43b5803245364f69afc1ee0f108ac22927b217d5467699db19d43e-merged.mount: Deactivated successfully.
Nov 25 10:57:23 np0005535469 podman[106534]: 2025-11-25 15:57:23.103177025 +0000 UTC m=+0.694808497 container remove 0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:57:23 np0005535469 systemd[1]: libpod-conmon-0b60085e544f98b087b9873022d3022d85aec1ef50fa6dca988352e82c5308c3.scope: Deactivated successfully.
Nov 25 10:57:23 np0005535469 podman[106577]: 2025-11-25 15:57:23.274681342 +0000 UTC m=+0.031493496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:57:23 np0005535469 podman[106577]: 2025-11-25 15:57:23.427956164 +0000 UTC m=+0.184768298 container create d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:57:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 25 10:57:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 10:57:23 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 25 10:57:23 np0005535469 systemd[1]: Started libpod-conmon-d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974.scope.
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=68/69 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71 pruub=14.523740768s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 227.993347168s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=68/69 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71 pruub=14.523620605s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.993347168s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=0 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=0 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=0 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=0 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=68/69 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71 pruub=14.522511482s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 227.993118286s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=68/69 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71 pruub=14.522405624s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.993118286s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=68/69 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71 pruub=14.522389412s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 227.993225098s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 71 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=68/69 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71 pruub=14.522211075s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.993225098s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 71 pg[9.e( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:57:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ca131bb6acfa921be426697c13914c8dee73ba07f81c346aeaf6fa55493b51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ca131bb6acfa921be426697c13914c8dee73ba07f81c346aeaf6fa55493b51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ca131bb6acfa921be426697c13914c8dee73ba07f81c346aeaf6fa55493b51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ca131bb6acfa921be426697c13914c8dee73ba07f81c346aeaf6fa55493b51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:57:23 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 71 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=70/71 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:23 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 71 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=70/71 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:23 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 71 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[61,70)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:23 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 71 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:23 np0005535469 podman[106577]: 2025-11-25 15:57:23.577993807 +0000 UTC m=+0.334805951 container init d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 10:57:23 np0005535469 podman[106577]: 2025-11-25 15:57:23.584836544 +0000 UTC m=+0.341648678 container start d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:57:23 np0005535469 podman[106577]: 2025-11-25 15:57:23.595709989 +0000 UTC m=+0.352522153 container attach d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 10:57:23 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 10:57:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 25 10:57:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 10:57:24 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 10:57:24 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 10:57:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]: {
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "osd_id": 1,
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "type": "bluestore"
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:    },
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "osd_id": 2,
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "type": "bluestore"
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:    },
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "osd_id": 0,
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:        "type": "bluestore"
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]:    }
Nov 25 10:57:24 np0005535469 bold_keldysh[106594]: }
Nov 25 10:57:24 np0005535469 systemd[1]: libpod-d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974.scope: Deactivated successfully.
Nov 25 10:57:24 np0005535469 systemd[1]: libpod-d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974.scope: Consumed 1.075s CPU time.
Nov 25 10:57:24 np0005535469 podman[106577]: 2025-11-25 15:57:24.68147429 +0000 UTC m=+1.438286424 container died d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:57:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 10:57:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 25 10:57:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-64ca131bb6acfa921be426697c13914c8dee73ba07f81c346aeaf6fa55493b51-merged.mount: Deactivated successfully.
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=70/71 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=14.423863411s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 50'389 active pruub 238.895660400s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=70/71 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=14.423707008s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 238.895660400s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=14.425040245s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=50'389 mlcod 50'389 active pruub 238.898025513s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=14.424886703s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 238.898025513s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=70/71 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=14.422356606s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 50'389 active pruub 238.895660400s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=14.424645424s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 50'389 active pruub 238.898056030s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=70/71 n=6 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=14.424351692s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 238.898056030s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 72 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=70/71 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=14.421938896s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 238.895660400s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.6( v 50'389 (0'0,50'389] local-lis/les=71/72 n=6 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 72 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=68/54 les/c/f=69/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 72 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 72 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=71/72 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[54,71)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 podman[106577]: 2025-11-25 15:57:25.360515228 +0000 UTC m=+2.117327402 container remove d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:57:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0565965e-266b-442c-a51d-f8c5032c882d does not exist
Nov 25 10:57:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2dd4b22d-9d7e-44f7-801b-c3343d3caef0 does not exist
Nov 25 10:57:25 np0005535469 systemd[1]: libpod-conmon-d5b67c85e956860b1d2ee9a4369ad275d33bb1a2b5051f7cdc318a7b4a9fb974.scope: Deactivated successfully.
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:57:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73) [2] r=0 lpr=73 pi=[54,73)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73) [2] r=0 lpr=73 pi=[54,73)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73) [2] r=0 lpr=73 pi=[54,73)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73) [2] r=0 lpr=73 pi=[54,73)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 73 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=71/72 n=6 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=15.273520470s) [2] async=[2] r=-1 lpr=73 pi=[54,73)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 231.139083862s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 73 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=71/72 n=6 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=15.273325920s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.139083862s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 73 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=15.273013115s) [2] async=[2] r=-1 lpr=73 pi=[54,73)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 231.139083862s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:25 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 73 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73 pruub=15.272738457s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.139083862s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.17( v 50'389 (0'0,50'389] local-lis/les=72/73 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.7( v 50'389 (0'0,50'389] local-lis/les=72/73 n=6 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.f( v 50'389 (0'0,50'389] local-lis/les=72/73 n=6 ec=54/44 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 73 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=72/73 n=5 ec=54/44 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:26 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 10:57:26 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 10:57:26 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 10:57:26 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 10:57:26 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Nov 25 10:57:26 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Nov 25 10:57:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 25 10:57:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 10:57:26 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 25 10:57:26 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 74 pg[9.18( v 50'389 (0'0,50'389] local-lis/les=73/74 n=5 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73) [2] r=0 lpr=73 pi=[54,73)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:26 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 74 pg[9.8( v 50'389 (0'0,50'389] local-lis/les=73/74 n=6 ec=54/44 lis/c=71/54 les/c/f=72/55/0 sis=73) [2] r=0 lpr=73 pi=[54,73)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:27 np0005535469 systemd-logind[791]: New session 35 of user zuul.
Nov 25 10:57:27 np0005535469 systemd[1]: Started Session 35 of User zuul.
Nov 25 10:57:27 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 25 10:57:27 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 25 10:57:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 220 B/s, 11 objects/s recovering
Nov 25 10:57:28 np0005535469 python3.9[106842]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 10:57:28 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 25 10:57:28 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 25 10:57:29 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1c deep-scrub starts
Nov 25 10:57:29 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1c deep-scrub ok
Nov 25 10:57:29 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 25 10:57:29 np0005535469 python3.9[107016]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:57:29 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 25 10:57:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 8 objects/s recovering
Nov 25 10:57:30 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 25 10:57:30 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 25 10:57:30 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 10:57:30 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 10:57:30 np0005535469 python3.9[107172]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:57:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 141 B/s, 7 objects/s recovering
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 10:57:32 np0005535469 python3.9[107325]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:57:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:32 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 25 10:57:32 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 25 10:57:33 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 10:57:33 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 10:57:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 10:57:33 np0005535469 python3.9[107479]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:57:33 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 10:57:33 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 10:57:33 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 25 10:57:33 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 25 10:57:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 123 B/s, 6 objects/s recovering
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 10:57:34 np0005535469 python3.9[107631]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 25 10:57:34 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 10:57:34 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 25 10:57:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 10:57:35 np0005535469 python3.9[107781]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:57:35 np0005535469 network[107798]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:57:35 np0005535469 network[107799]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:57:35 np0005535469 network[107800]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:57:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 10:57:36 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 10:57:36 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 10:57:36 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 10:57:36 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 10:57:36 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 25 10:57:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 10:57:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 10:57:37 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 10:57:37 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 10:57:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:37 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Nov 25 10:57:37 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Nov 25 10:57:37 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 25 10:57:37 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 25 10:57:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 25 10:57:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 10:57:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 10:57:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 10:57:38 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 77 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=9.749548912s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 238.499649048s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:38 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 77 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=9.749452591s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.499649048s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:38 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 77 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=9.750576019s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 238.501449585s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:38 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 77 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=9.750540733s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.501449585s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:38 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:38 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:39 np0005535469 python3.9[108060]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:57:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 25 10:57:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 10:57:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 10:57:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 25 10:57:39 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[54,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:39 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[54,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:39 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[54,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:39 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 79 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:39 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 79 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:39 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=-1 lpr=79 pi=[54,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:39 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 79 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:39 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 79 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=54/55 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] r=0 lpr=79 pi=[54,79)/1 crt=50'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:57:39
Nov 25 10:57:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:57:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:57:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'backups', 'vms', 'default.rgw.control', 'default.rgw.meta']
Nov 25 10:57:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:57:40 np0005535469 python3.9[108210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:57:40 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 10:57:40 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 10:57:40 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 25 10:57:41 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 80 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=79/80 n=5 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[54,79)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:41 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 80 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=79/80 n=6 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=79) [2]/[1] async=[2] r=0 lpr=79 pi=[54,79)/1 crt=50'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:41 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 25 10:57:41 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 25 10:57:41 np0005535469 python3.9[108364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:57:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 25 10:57:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 10:57:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:42 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 25 10:57:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 25 10:57:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 10:57:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 10:57:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 25 10:57:42 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 25 10:57:42 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 81 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=79/80 n=6 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81 pruub=15.022877693s) [2] async=[2] r=-1 lpr=81 pi=[54,81)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 247.013992310s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:42 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 81 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=79/80 n=6 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81 pruub=15.022752762s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.013992310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:42 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 81 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=79/80 n=5 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81 pruub=15.006183624s) [2] async=[2] r=-1 lpr=81 pi=[54,81)/1 crt=50'389 lcod 0'0 mlcod 0'0 active pruub 246.998184204s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:42 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 81 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=79/80 n=5 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81 pruub=15.006097794s) [2] r=-1 lpr=81 pi=[54,81)/1 crt=50'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.998184204s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:42 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 81 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:42 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 81 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:42 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 81 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:42 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 81 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=0/0 n=6 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:42 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Nov 25 10:57:42 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Nov 25 10:57:42 np0005535469 python3.9[108522]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:57:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 10:57:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 25 10:57:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 10:57:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 10:57:43 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 25 10:57:43 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 25 10:57:43 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 25 10:57:43 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 82 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=81/82 n=5 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:43 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 82 pg[9.c( v 50'389 (0'0,50'389] local-lis/les=81/82 n=6 ec=54/44 lis/c=79/54 les/c/f=80/55/0 sis=81) [2] r=0 lpr=81 pi=[54,81)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:43 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 25 10:57:43 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 25 10:57:43 np0005535469 python3.9[108606]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:57:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 10:57:44 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 25 10:57:44 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 10:57:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 10:57:44 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 10:57:44 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 10:57:45 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 25 10:57:45 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 25 10:57:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 10:57:45 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 25 10:57:46 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 25 10:57:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 25 10:57:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 10:57:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 25 10:57:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 10:57:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 10:57:46 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 25 10:57:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 10:57:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 10:57:48 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 25 10:57:48 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 25 10:57:48 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 25 10:57:48 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 10:57:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 10:57:49 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 25 10:57:49 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 25 10:57:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 25 10:57:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 25 10:57:49 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 10:57:49 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 10:57:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 10:57:50 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 25 10:57:50 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:57:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 25 10:57:50 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 86 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=61/62 n=5 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=86 pruub=9.693412781s) [2] r=-1 lpr=86 pi=[61,86)/1 crt=50'389 mlcod 0'0 active pruub 260.013610840s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:50 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 86 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=61/62 n=5 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=86 pruub=9.693360329s) [2] r=-1 lpr=86 pi=[61,86)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 260.013610840s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 10:57:51 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=86) [2] r=0 lpr=86 pi=[61,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 25 10:57:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 25 10:57:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 25 10:57:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 25 10:57:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 10:57:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 25 10:57:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[61,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:52 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[61,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 10:57:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 87 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=61/62 n=5 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=87) [2]/[0] r=0 lpr=87 pi=[61,87)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:52 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 87 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=61/62 n=5 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=87) [2]/[0] r=0 lpr=87 pi=[61,87)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:52 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 25 10:57:52 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 25 10:57:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 25 10:57:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 10:57:53 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 25 10:57:53 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 25 10:57:53 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 25 10:57:53 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Nov 25 10:57:53 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Nov 25 10:57:53 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 88 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=87/88 n=5 ec=54/44 lis/c=61/61 les/c/f=62/62/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[61,87)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 25 10:57:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 25 10:57:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 25 10:57:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 10:57:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 25 10:57:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 89 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=87/61 les/c/f=88/62/0 sis=89) [2] r=0 lpr=89 pi=[61,89)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:54 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 89 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=87/61 les/c/f=88/62/0 sis=89) [2] r=0 lpr=89 pi=[61,89)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:57:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 89 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=87/88 n=5 ec=54/44 lis/c=87/61 les/c/f=88/62/0 sis=89 pruub=14.822504044s) [2] async=[2] r=-1 lpr=89 pi=[61,89)/1 crt=50'389 mlcod 50'389 active pruub 268.837524414s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:57:54 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 89 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=87/88 n=5 ec=54/44 lis/c=87/61 les/c/f=88/62/0 sis=89 pruub=14.821670532s) [2] r=-1 lpr=89 pi=[61,89)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 268.837524414s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:57:54 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Nov 25 10:57:54 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Nov 25 10:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 25 10:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 10:57:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 25 10:57:55 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 25 10:57:55 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 25 10:57:56 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 90 pg[9.13( v 50'389 (0'0,50'389] local-lis/les=89/90 n=5 ec=54/44 lis/c=87/61 les/c/f=88/62/0 sis=89) [2] r=0 lpr=89 pi=[61,89)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:57:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 1 unknown, 320 active+clean; 457 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:57:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:57:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 10:57:58 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 25 10:57:58 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 10:57:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 25 10:57:59 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Nov 25 10:57:59 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Nov 25 10:57:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 10:58:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 4 op/s; 36 B/s, 1 objects/s recovering
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 10:58:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 25 10:58:00 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 10:58:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 25 10:58:00 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 25 10:58:00 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 25 10:58:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 92 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=92 pruub=8.426829338s) [1] r=-1 lpr=92 pi=[62,92)/1 crt=50'389 mlcod 0'0 active pruub 269.098236084s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 92 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=92 pruub=8.426485062s) [1] r=-1 lpr=92 pi=[62,92)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 269.098236084s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=92) [1] r=0 lpr=92 pi=[62,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:01 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 10:58:01 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 10:58:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 25 10:58:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 10:58:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 25 10:58:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:01 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[62,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 10:58:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 93 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=93) [1]/[0] r=0 lpr=93 pi=[62,93)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:01 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 93 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=93) [1]/[0] r=0 lpr=93 pi=[62,93)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 10:58:02 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Nov 25 10:58:02 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Nov 25 10:58:02 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:02 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 25 10:58:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 10:58:03 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.16 deep-scrub starts
Nov 25 10:58:03 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.16 deep-scrub ok
Nov 25 10:58:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 94 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=93/94 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[62,93)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 25 10:58:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 10:58:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 10:58:03 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 25 10:58:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 95 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=93/94 n=5 ec=54/44 lis/c=93/62 les/c/f=94/63/0 sis=95 pruub=15.251775742s) [1] async=[1] r=-1 lpr=95 pi=[62,95)/1 crt=50'389 mlcod 50'389 active pruub 278.561218262s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:03 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 95 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=93/94 n=5 ec=54/44 lis/c=93/62 les/c/f=94/63/0 sis=95 pruub=15.251653671s) [1] r=-1 lpr=95 pi=[62,95)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 278.561218262s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 95 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=93/62 les/c/f=94/63/0 sis=95) [1] r=0 lpr=95 pi=[62,95)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:04 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 95 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=93/62 les/c/f=94/63/0 sis=95) [1] r=0 lpr=95 pi=[62,95)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 25 10:58:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 10:58:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 25 10:58:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 25 10:58:04 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 94 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=94 pruub=8.604872704s) [0] r=-1 lpr=94 pi=[71,94)/1 crt=50'389 mlcod 0'0 active pruub 245.597534180s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:04 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 95 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=94 pruub=8.604793549s) [0] r=-1 lpr=94 pi=[71,94)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 245.597534180s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:04 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=94) [0] r=0 lpr=95 pi=[71,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 25 10:58:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 10:58:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 10:58:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 10:58:05 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 25 10:58:05 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 96 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=96) [0]/[2] r=0 lpr=96 pi=[71,96)/2 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:05 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 96 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=96) [0]/[2] r=0 lpr=96 pi=[71,96)/2 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[71,96)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:05 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[71,96)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:05 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 96 pg[9.15( v 50'389 (0'0,50'389] local-lis/les=95/96 n=5 ec=54/44 lis/c=93/62 les/c/f=94/63/0 sis=95) [1] r=0 lpr=95 pi=[62,95)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 25 10:58:06 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 25 10:58:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 10:58:06 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 25 10:58:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 10:58:06 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 25 10:58:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:07 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 25 10:58:07 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 25 10:58:07 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 97 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=96/97 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=96) [0]/[2] async=[0] r=0 lpr=96 pi=[71,96)/2 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 1 activating+remapped, 1 peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4/212 objects misplaced (1.887%); 21 B/s, 0 objects/s recovering
Nov 25 10:58:08 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 25 10:58:08 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 25 10:58:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 25 10:58:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 10:58:08 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 25 10:58:08 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 98 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=96/71 les/c/f=97/72/0 sis=98) [0] r=0 lpr=98 pi=[71,98)/2 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:08 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 98 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=96/71 les/c/f=97/72/0 sis=98) [0] r=0 lpr=98 pi=[71,98)/2 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:08 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 98 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=96/97 n=5 ec=54/44 lis/c=96/71 les/c/f=97/72/0 sis=98 pruub=15.142503738s) [0] async=[0] r=-1 lpr=98 pi=[71,98)/2 crt=50'389 mlcod 50'389 active pruub 256.085327148s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:08 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 98 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=96/97 n=5 ec=54/44 lis/c=96/71 les/c/f=97/72/0 sis=98 pruub=15.142395973s) [0] r=-1 lpr=98 pi=[71,98)/2 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 256.085327148s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:09 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.1e deep-scrub starts
Nov 25 10:58:09 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 8.1e deep-scrub ok
Nov 25 10:58:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 25 10:58:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 10:58:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 25 10:58:09 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 99 pg[9.16( v 50'389 (0'0,50'389] local-lis/les=98/99 n=5 ec=54/44 lis/c=96/71 les/c/f=97/72/0 sis=98) [0] r=0 lpr=98 pi=[71,98)/2 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:58:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 214 B/s wr, 7 op/s; 4/212 objects misplaced (1.887%); 23 B/s, 0 objects/s recovering
Nov 25 10:58:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Nov 25 10:58:10 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Nov 25 10:58:10 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 25 10:58:10 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 25 10:58:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 10:58:12 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 25 10:58:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 10:58:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 10:58:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 25 10:58:15 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 101 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=101 pruub=10.100701332s) [2] r=-1 lpr=101 pi=[62,101)/1 crt=50'389 mlcod 0'0 active pruub 285.133270264s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:15 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 101 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=101 pruub=10.100638390s) [2] r=-1 lpr=101 pi=[62,101)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 285.133270264s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:15 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=101) [2] r=0 lpr=101 pi=[62,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 25 10:58:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 10:58:15 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 25 10:58:15 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[62,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:15 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[62,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 10:58:15 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 102 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=102) [2]/[0] r=0 lpr=102 pi=[62,102)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:15 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 102 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=62/63 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=102) [2]/[0] r=0 lpr=102 pi=[62,102)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:15 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 25 10:58:15 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 25 10:58:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 10:58:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 25 10:58:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 10:58:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 25 10:58:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 25 10:58:16 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 25 10:58:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 25 10:58:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 25 10:58:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 10:58:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 10:58:17 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 25 10:58:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 10:58:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 10:58:18 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 103 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=102/103 n=5 ec=54/44 lis/c=62/62 les/c/f=63/63/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[62,102)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:18 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Nov 25 10:58:18 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 25 10:58:18 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 104 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=102/103 n=5 ec=54/44 lis/c=102/62 les/c/f=103/63/0 sis=104 pruub=15.420033455s) [2] async=[2] r=-1 lpr=104 pi=[62,104)/1 crt=50'389 mlcod 50'389 active pruub 293.380432129s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 10:58:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 10:58:18 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 104 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=102/103 n=5 ec=54/44 lis/c=102/62 les/c/f=103/63/0 sis=104 pruub=15.419842720s) [2] r=-1 lpr=104 pi=[62,104)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 293.380432129s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:18 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 104 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=102/62 les/c/f=103/63/0 sis=104) [2] r=0 lpr=104 pi=[62,104)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:18 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 104 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=102/62 les/c/f=103/63/0 sis=104) [2] r=0 lpr=104 pi=[62,104)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:19 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 25 10:58:19 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 25 10:58:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 25 10:58:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 10:58:19 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 25 10:58:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 10:58:19 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 105 pg[9.19( v 50'389 (0'0,50'389] local-lis/les=104/105 n=5 ec=54/44 lis/c=102/62 les/c/f=103/63/0 sis=104) [2] r=0 lpr=104 pi=[62,104)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:20 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Nov 25 10:58:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 10:58:20 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 25 10:58:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 10:58:20 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 25 10:58:21 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 25 10:58:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 10:58:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 10:58:22 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Nov 25 10:58:22 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Nov 25 10:58:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 106 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=81/82 n=5 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=106 pruub=9.179060936s) [0] r=-1 lpr=106 pi=[81,106)/1 crt=50'389 mlcod 0'0 active pruub 263.851593018s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 106 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=81/82 n=5 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=106 pruub=9.178240776s) [0] r=-1 lpr=106 pi=[81,106)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 263.851593018s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=106) [0] r=0 lpr=106 pi=[81,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 25 10:58:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[81,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:22 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[81,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:22 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 25 10:58:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 107 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=81/82 n=5 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=0 lpr=107 pi=[81,107)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:22 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 107 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=81/82 n=5 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] r=0 lpr=107 pi=[81,107)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:22 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 10:58:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 10:58:23 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 25 10:58:23 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 25 10:58:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 25 10:58:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 25 10:58:23 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 25 10:58:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 10:58:24 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 25 10:58:24 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 25 10:58:24 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 108 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=107/108 n=5 ec=54/44 lis/c=81/81 les/c/f=82/82/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[81,107)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:24 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 25 10:58:24 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 25 10:58:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 10:58:24 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 109 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=109 pruub=12.344968796s) [0] r=-1 lpr=109 pi=[71,109)/1 crt=50'389 mlcod 0'0 active pruub 269.626281738s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:24 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 109 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=109 pruub=12.344917297s) [0] r=-1 lpr=109 pi=[71,109)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 269.626281738s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:24 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 109 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=107/108 n=5 ec=54/44 lis/c=107/81 les/c/f=108/82/0 sis=109 pruub=15.362211227s) [0] async=[0] r=-1 lpr=109 pi=[81,109)/1 crt=50'389 mlcod 50'389 active pruub 272.644104004s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:24 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 109 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=107/108 n=5 ec=54/44 lis/c=107/81 les/c/f=108/82/0 sis=109 pruub=15.362114906s) [0] r=-1 lpr=109 pi=[81,109)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 272.644104004s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:24 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=109) [0] r=0 lpr=109 pi=[71,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:24 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 109 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=107/81 les/c/f=108/82/0 sis=109) [0] r=0 lpr=109 pi=[81,109)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:24 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 109 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=107/81 les/c/f=108/82/0 sis=109) [0] r=0 lpr=109 pi=[81,109)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:25 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 25 10:58:25 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 25 10:58:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 25 10:58:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 25 10:58:25 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 25 10:58:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[71,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[71,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 10:58:25 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 110 pg[9.1c( v 50'389 (0'0,50'389] local-lis/les=109/110 n=5 ec=54/44 lis/c=107/81 les/c/f=108/82/0 sis=109) [0] r=0 lpr=109 pi=[81,109)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 110 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=110) [0]/[2] r=0 lpr=110 pi=[71,110)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:25 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 110 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=71/72 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=110) [0]/[2] r=0 lpr=110 pi=[71,110)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:25 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 25 10:58:25 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 25 10:58:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:58:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a5eaa00f-e246-49c2-b30c-a3e0eb93df93 does not exist
Nov 25 10:58:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11e0aa1b-0b73-492a-9176-e2210e2c9910 does not exist
Nov 25 10:58:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 23cec7bd-ffa8-4cfd-b70b-f9ee82bb43ce does not exist
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 25 10:58:26 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.771851987 +0000 UTC m=+0.033962053 container create 5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 10:58:26 np0005535469 systemd[76549]: Created slice User Background Tasks Slice.
Nov 25 10:58:26 np0005535469 systemd[76549]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 10:58:26 np0005535469 systemd[1]: Started libpod-conmon-5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387.scope.
Nov 25 10:58:26 np0005535469 systemd[76549]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 10:58:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.83899432 +0000 UTC m=+0.101104436 container init 5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.84718933 +0000 UTC m=+0.109299396 container start 5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.850693834 +0000 UTC m=+0.112803910 container attach 5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 10:58:26 np0005535469 pedantic_villani[109036]: 167 167
Nov 25 10:58:26 np0005535469 systemd[1]: libpod-5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387.scope: Deactivated successfully.
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.853754797 +0000 UTC m=+0.115864863 container died 5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.758084967 +0000 UTC m=+0.020195043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 25 10:58:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ce46388dda2e207d441ae2c31c53b1de7bf756ad4c235a75dabf47ab77130c13-merged.mount: Deactivated successfully.
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 25 10:58:26 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:58:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:58:26 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Nov 25 10:58:26 np0005535469 podman[109018]: 2025-11-25 15:58:26.908580588 +0000 UTC m=+0.170690654 container remove 5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:58:26 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 111 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=110/111 n=5 ec=54/44 lis/c=71/71 les/c/f=72/72/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[71,110)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:26 np0005535469 systemd[1]: libpod-conmon-5cbb611cb300308ef7dad484d370fc33e1645e187f8741773fc0524ec939a387.scope: Deactivated successfully.
Nov 25 10:58:27 np0005535469 podman[109061]: 2025-11-25 15:58:27.050317524 +0000 UTC m=+0.037973611 container create 3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 10:58:27 np0005535469 systemd[1]: Started libpod-conmon-3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046.scope.
Nov 25 10:58:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:58:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f88aea4d1238dd0a4d0961a5c07cc594d12f02212d5f450f1c065ccec788e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:27 np0005535469 podman[109061]: 2025-11-25 15:58:27.034768396 +0000 UTC m=+0.022424463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:58:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f88aea4d1238dd0a4d0961a5c07cc594d12f02212d5f450f1c065ccec788e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f88aea4d1238dd0a4d0961a5c07cc594d12f02212d5f450f1c065ccec788e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f88aea4d1238dd0a4d0961a5c07cc594d12f02212d5f450f1c065ccec788e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f88aea4d1238dd0a4d0961a5c07cc594d12f02212d5f450f1c065ccec788e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:27 np0005535469 podman[109061]: 2025-11-25 15:58:27.141048159 +0000 UTC m=+0.128704226 container init 3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 10:58:27 np0005535469 podman[109061]: 2025-11-25 15:58:27.148300605 +0000 UTC m=+0.135956652 container start 3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:58:27 np0005535469 podman[109061]: 2025-11-25 15:58:27.152671231 +0000 UTC m=+0.140327268 container attach 3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:58:27 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Nov 25 10:58:27 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Nov 25 10:58:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 25 10:58:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 25 10:58:27 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 25 10:58:27 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 112 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=110/111 n=5 ec=54/44 lis/c=110/71 les/c/f=111/72/0 sis=112 pruub=15.451858521s) [0] async=[0] r=-1 lpr=112 pi=[71,112)/1 crt=50'389 mlcod 50'389 active pruub 275.326293945s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:27 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 112 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=110/111 n=5 ec=54/44 lis/c=110/71 les/c/f=111/72/0 sis=112 pruub=15.451676369s) [0] r=-1 lpr=112 pi=[71,112)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 275.326293945s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:27 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 112 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=110/71 les/c/f=111/72/0 sis=112) [0] r=0 lpr=112 pi=[71,112)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:27 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 112 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=110/71 les/c/f=111/72/0 sis=112) [0] r=0 lpr=112 pi=[71,112)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:27 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 25 10:58:27 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 25 10:58:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 25 10:58:28 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 25 10:58:28 np0005535469 serene_mclaren[109077]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:58:28 np0005535469 serene_mclaren[109077]: --> relative data size: 1.0
Nov 25 10:58:28 np0005535469 serene_mclaren[109077]: --> All data devices are unavailable
Nov 25 10:58:28 np0005535469 systemd[1]: libpod-3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046.scope: Deactivated successfully.
Nov 25 10:58:28 np0005535469 systemd[1]: libpod-3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046.scope: Consumed 1.003s CPU time.
Nov 25 10:58:28 np0005535469 podman[109061]: 2025-11-25 15:58:28.293091459 +0000 UTC m=+1.280747506 container died 3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:58:28 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 25 10:58:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-79f88aea4d1238dd0a4d0961a5c07cc594d12f02212d5f450f1c065ccec788e9-merged.mount: Deactivated successfully.
Nov 25 10:58:28 np0005535469 podman[109061]: 2025-11-25 15:58:28.349812883 +0000 UTC m=+1.337468930 container remove 3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:58:28 np0005535469 systemd[1]: libpod-conmon-3cd5b33a44e3630859315b0a27668ce419b468e6319582dd8f0a16a4be7be046.scope: Deactivated successfully.
Nov 25 10:58:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 25 10:58:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 25 10:58:28 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 25 10:58:28 np0005535469 ceph-osd[88890]: osd.0 pg_epoch: 113 pg[9.1e( v 50'389 (0'0,50'389] local-lis/les=112/113 n=5 ec=54/44 lis/c=110/71 les/c/f=111/72/0 sis=112) [0] r=0 lpr=112 pi=[71,112)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:28 np0005535469 podman[109262]: 2025-11-25 15:58:28.888689871 +0000 UTC m=+0.047686942 container create 4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:58:28 np0005535469 systemd[1]: Started libpod-conmon-4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74.scope.
Nov 25 10:58:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:58:28 np0005535469 podman[109262]: 2025-11-25 15:58:28.864925412 +0000 UTC m=+0.023922513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:58:28 np0005535469 podman[109262]: 2025-11-25 15:58:28.975384628 +0000 UTC m=+0.134381729 container init 4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:58:28 np0005535469 podman[109262]: 2025-11-25 15:58:28.98290185 +0000 UTC m=+0.141898931 container start 4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 10:58:28 np0005535469 podman[109262]: 2025-11-25 15:58:28.98660813 +0000 UTC m=+0.145605231 container attach 4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:58:28 np0005535469 elegant_faraday[109278]: 167 167
Nov 25 10:58:28 np0005535469 systemd[1]: libpod-4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74.scope: Deactivated successfully.
Nov 25 10:58:28 np0005535469 podman[109262]: 2025-11-25 15:58:28.990907524 +0000 UTC m=+0.149904615 container died 4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 10:58:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1a44159b80c2375b99b0f385881da2ec41de102742bb49c144ee09e0cc0bd1a3-merged.mount: Deactivated successfully.
Nov 25 10:58:29 np0005535469 podman[109262]: 2025-11-25 15:58:29.029163392 +0000 UTC m=+0.188160473 container remove 4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 10:58:29 np0005535469 systemd[1]: libpod-conmon-4a3f68b59ee039383b0b54fc449c5fdb307e7fbe0e2547235793a6f07cd58b74.scope: Deactivated successfully.
Nov 25 10:58:29 np0005535469 podman[109302]: 2025-11-25 15:58:29.213144132 +0000 UTC m=+0.055941843 container create d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 10:58:29 np0005535469 systemd[1]: Started libpod-conmon-d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291.scope.
Nov 25 10:58:29 np0005535469 podman[109302]: 2025-11-25 15:58:29.184161403 +0000 UTC m=+0.026959214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:58:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:58:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ffc3350c5705039a851c8bd24546f0d0ed98be581a9fa04d8536b7a840f1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ffc3350c5705039a851c8bd24546f0d0ed98be581a9fa04d8536b7a840f1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ffc3350c5705039a851c8bd24546f0d0ed98be581a9fa04d8536b7a840f1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ffc3350c5705039a851c8bd24546f0d0ed98be581a9fa04d8536b7a840f1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:29 np0005535469 podman[109302]: 2025-11-25 15:58:29.293993482 +0000 UTC m=+0.136791223 container init d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:58:29 np0005535469 podman[109302]: 2025-11-25 15:58:29.299966243 +0000 UTC m=+0.142763964 container start d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 10:58:29 np0005535469 podman[109302]: 2025-11-25 15:58:29.303462796 +0000 UTC m=+0.146260547 container attach d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:58:29 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 25 10:58:29 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 25 10:58:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 1 unknown, 1 peering, 319 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:30 np0005535469 romantic_gould[109319]: {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:    "0": [
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:        {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "devices": [
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "/dev/loop3"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            ],
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_name": "ceph_lv0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_size": "21470642176",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "name": "ceph_lv0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "tags": {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cluster_name": "ceph",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.crush_device_class": "",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.encrypted": "0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osd_id": "0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.type": "block",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.vdo": "0"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            },
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "type": "block",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "vg_name": "ceph_vg0"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:        }
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:    ],
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:    "1": [
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:        {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "devices": [
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "/dev/loop4"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            ],
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_name": "ceph_lv1",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_size": "21470642176",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "name": "ceph_lv1",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "tags": {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cluster_name": "ceph",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.crush_device_class": "",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.encrypted": "0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osd_id": "1",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.type": "block",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.vdo": "0"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            },
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "type": "block",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "vg_name": "ceph_vg1"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:        }
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:    ],
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:    "2": [
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:        {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "devices": [
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "/dev/loop5"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            ],
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_name": "ceph_lv2",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_size": "21470642176",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "name": "ceph_lv2",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "tags": {
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.cluster_name": "ceph",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.crush_device_class": "",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.encrypted": "0",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osd_id": "2",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.type": "block",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:                "ceph.vdo": "0"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            },
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "type": "block",
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:            "vg_name": "ceph_vg2"
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:        }
Nov 25 10:58:30 np0005535469 romantic_gould[109319]:    ]
Nov 25 10:58:30 np0005535469 romantic_gould[109319]: }
Nov 25 10:58:30 np0005535469 systemd[1]: libpod-d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291.scope: Deactivated successfully.
Nov 25 10:58:30 np0005535469 podman[109302]: 2025-11-25 15:58:30.070560662 +0000 UTC m=+0.913358383 container died d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 10:58:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e7ffc3350c5705039a851c8bd24546f0d0ed98be581a9fa04d8536b7a840f1d3-merged.mount: Deactivated successfully.
Nov 25 10:58:30 np0005535469 podman[109302]: 2025-11-25 15:58:30.140397307 +0000 UTC m=+0.983195028 container remove d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:58:30 np0005535469 systemd[1]: libpod-conmon-d94bb2f2f2d69715b06375dd6d12ae8c5f55a04b2be8f1cc531ca94ff7de7291.scope: Deactivated successfully.
Nov 25 10:58:30 np0005535469 podman[109481]: 2025-11-25 15:58:30.726320647 +0000 UTC m=+0.018563629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:58:30 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 25 10:58:30 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 25 10:58:30 np0005535469 podman[109481]: 2025-11-25 15:58:30.957445092 +0000 UTC m=+0.249688034 container create e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 10:58:31 np0005535469 systemd[1]: Started libpod-conmon-e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba.scope.
Nov 25 10:58:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:58:31 np0005535469 podman[109481]: 2025-11-25 15:58:31.113875622 +0000 UTC m=+0.406118604 container init e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:58:31 np0005535469 podman[109481]: 2025-11-25 15:58:31.120616824 +0000 UTC m=+0.412859816 container start e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:58:31 np0005535469 unruffled_pasteur[109498]: 167 167
Nov 25 10:58:31 np0005535469 systemd[1]: libpod-e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba.scope: Deactivated successfully.
Nov 25 10:58:31 np0005535469 podman[109481]: 2025-11-25 15:58:31.161671635 +0000 UTC m=+0.453914607 container attach e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 10:58:31 np0005535469 podman[109481]: 2025-11-25 15:58:31.163163496 +0000 UTC m=+0.455406448 container died e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pasteur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:58:31 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 25 10:58:31 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 25 10:58:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-731d954620debd9c348163f72d6a314159313fa7a4f03224304ee02514b40040-merged.mount: Deactivated successfully.
Nov 25 10:58:31 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 25 10:58:31 np0005535469 podman[109481]: 2025-11-25 15:58:31.306031402 +0000 UTC m=+0.598274354 container remove e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pasteur, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:58:31 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 25 10:58:31 np0005535469 systemd[1]: libpod-conmon-e358f3e10305da21f72360ce9f52d67d869de57fbf181a416d730960201a90ba.scope: Deactivated successfully.
Nov 25 10:58:31 np0005535469 podman[109522]: 2025-11-25 15:58:31.458010182 +0000 UTC m=+0.044909207 container create b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:58:31 np0005535469 systemd[1]: Started libpod-conmon-b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb.scope.
Nov 25 10:58:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:58:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e425aea95f64b72c135b37b06529d2b6f297e139568a9ab1562d6f34359db9be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e425aea95f64b72c135b37b06529d2b6f297e139568a9ab1562d6f34359db9be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e425aea95f64b72c135b37b06529d2b6f297e139568a9ab1562d6f34359db9be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e425aea95f64b72c135b37b06529d2b6f297e139568a9ab1562d6f34359db9be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:58:31 np0005535469 podman[109522]: 2025-11-25 15:58:31.43820407 +0000 UTC m=+0.025103115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:58:31 np0005535469 podman[109522]: 2025-11-25 15:58:31.555352015 +0000 UTC m=+0.142251060 container init b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 10:58:31 np0005535469 podman[109522]: 2025-11-25 15:58:31.564912302 +0000 UTC m=+0.151811327 container start b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:58:31 np0005535469 podman[109522]: 2025-11-25 15:58:31.568425896 +0000 UTC m=+0.155324921 container attach b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 10:58:31 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 25 10:58:31 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 25 10:58:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:58:32 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 25 10:58:32 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 25 10:58:32 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Nov 25 10:58:32 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]: {
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "osd_id": 1,
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "type": "bluestore"
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:    },
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "osd_id": 2,
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "type": "bluestore"
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:    },
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "osd_id": 0,
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:        "type": "bluestore"
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]:    }
Nov 25 10:58:32 np0005535469 pedantic_joliot[109539]: }
Nov 25 10:58:32 np0005535469 systemd[1]: libpod-b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb.scope: Deactivated successfully.
Nov 25 10:58:32 np0005535469 systemd[1]: libpod-b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb.scope: Consumed 1.128s CPU time.
Nov 25 10:58:32 np0005535469 conmon[109539]: conmon b2144ddb3fb083c56d88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb.scope/container/memory.events
Nov 25 10:58:32 np0005535469 podman[109522]: 2025-11-25 15:58:32.681340742 +0000 UTC m=+1.268239767 container died b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:58:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e425aea95f64b72c135b37b06529d2b6f297e139568a9ab1562d6f34359db9be-merged.mount: Deactivated successfully.
Nov 25 10:58:32 np0005535469 podman[109522]: 2025-11-25 15:58:32.744512177 +0000 UTC m=+1.331411202 container remove b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 10:58:32 np0005535469 systemd[1]: libpod-conmon-b2144ddb3fb083c56d88c50ad45a14dbb5434cbacef30387b7d3e5abb140bdcb.scope: Deactivated successfully.
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:58:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:58:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a8dccac3-6d76-42b7-8410-36064cb47cae does not exist
Nov 25 10:58:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 491ce411-351c-437f-8304-c1bc57329ee0 does not exist
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 25 10:58:33 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 25 10:58:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 25 10:58:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 10:58:34 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 25 10:58:34 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 25 10:58:34 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 25 10:58:34 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 25 10:58:34 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 25 10:58:34 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 25 10:58:35 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 25 10:58:35 np0005535469 python3.9[109785]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:58:35 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 25 10:58:35 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 25 10:58:35 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 25 10:58:35 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 114 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=72/73 n=5 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=114 pruub=10.546310425s) [1] r=-1 lpr=114 pi=[72,114)/1 crt=50'389 mlcod 0'0 active pruub 278.359985352s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:35 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 114 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=72/73 n=5 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=114 pruub=10.546257973s) [1] r=-1 lpr=114 pi=[72,114)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 278.359985352s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:35 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=114) [1] r=0 lpr=114 pi=[72,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 25 10:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 25 10:58:35 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 25 10:58:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 25 10:58:36 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 25 10:58:36 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 115 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=72/73 n=5 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=115) [1]/[2] r=0 lpr=115 pi=[72,115)/1 crt=50'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:36 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 115 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=72/73 n=5 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=115) [1]/[2] r=0 lpr=115 pi=[72,115)/1 crt=50'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:36 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[72,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:36 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[72,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:36 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 25 10:58:36 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 25 10:58:36 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 25 10:58:36 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 25 10:58:36 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 25 10:58:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 25 10:58:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 25 10:58:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 25 10:58:37 np0005535469 python3.9[110072]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 10:58:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:37 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 116 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=115/116 n=5 ec=54/44 lis/c=72/72 les/c/f=73/73/0 sis=115) [1]/[2] async=[1] r=0 lpr=115 pi=[72,115)/1 crt=50'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:38 np0005535469 python3.9[110224]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 10:58:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 25 10:58:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 25 10:58:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 25 10:58:38 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 25 10:58:39 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 25 10:58:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 25 10:58:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 25 10:58:39 np0005535469 python3.9[110377]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:58:39 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 25 10:58:39 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 117 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=115/116 n=5 ec=54/44 lis/c=115/72 les/c/f=116/73/0 sis=117 pruub=14.410923958s) [1] async=[1] r=-1 lpr=117 pi=[72,117)/1 crt=50'389 mlcod 50'389 active pruub 286.043945312s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:39 np0005535469 ceph-osd[90994]: osd.2 pg_epoch: 117 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=115/116 n=5 ec=54/44 lis/c=115/72 les/c/f=116/73/0 sis=117 pruub=14.410681725s) [1] r=-1 lpr=117 pi=[72,117)/1 crt=50'389 mlcod 0'0 unknown NOTIFY pruub 286.043945312s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 10:58:39 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 25 10:58:39 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 117 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=115/72 les/c/f=116/73/0 sis=117) [1] r=0 lpr=117 pi=[72,117)/1 luod=0'0 crt=50'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 10:58:39 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 117 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=0/0 n=5 ec=54/44 lis/c=115/72 les/c/f=116/73/0 sis=117) [1] r=0 lpr=117 pi=[72,117)/1 crt=50'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 10:58:39 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 25 10:58:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:58:39
Nov 25 10:58:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:58:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Some PGs (0.003115) are inactive; try again later
Nov 25 10:58:39 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:58:40 np0005535469 python3.9[110529]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 10:58:40 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 25 10:58:40 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 25 10:58:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:58:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 25 10:58:40 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 25 10:58:40 np0005535469 ceph-osd[89991]: osd.1 pg_epoch: 118 pg[9.1f( v 50'389 (0'0,50'389] local-lis/les=117/118 n=5 ec=54/44 lis/c=115/72 les/c/f=116/73/0 sis=117) [1] r=0 lpr=117 pi=[72,117)/1 crt=50'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 10:58:41 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 25 10:58:41 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 25 10:58:41 np0005535469 python3.9[110681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:58:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 25 10:58:42 np0005535469 python3.9[110833]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:58:42 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 25 10:58:42 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 25 10:58:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:42 np0005535469 python3.9[110911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:58:43 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 25 10:58:43 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 25 10:58:43 np0005535469 python3.9[111063]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:58:43 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 25 10:58:43 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 25 10:58:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 25 10:58:44 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 25 10:58:44 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 25 10:58:44 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 25 10:58:44 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 25 10:58:45 np0005535469 python3.9[111217]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 10:58:45 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 25 10:58:45 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 25 10:58:45 np0005535469 python3.9[111370]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 10:58:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 25 10:58:47 np0005535469 python3.9[111523]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:58:47 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.16 deep-scrub starts
Nov 25 10:58:47 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.16 deep-scrub ok
Nov 25 10:58:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:47 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 25 10:58:47 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 25 10:58:47 np0005535469 python3.9[111675]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 10:58:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:48 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 25 10:58:48 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 25 10:58:48 np0005535469 python3.9[111827]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:58:48 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 25 10:58:48 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 25 10:58:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 25 10:58:49 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 25 10:58:49 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 25 10:58:49 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:58:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:58:50 np0005535469 python3.9[111980]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:58:50 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 25 10:58:50 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 25 10:58:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 25 10:58:51 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 25 10:58:51 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Nov 25 10:58:51 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Nov 25 10:58:51 np0005535469 python3.9[112132]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:58:51 np0005535469 python3.9[112210]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:58:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:52 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 25 10:58:52 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 25 10:58:52 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 25 10:58:52 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 25 10:58:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:52 np0005535469 python3.9[112362]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:58:53 np0005535469 python3.9[112440]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:58:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 25 10:58:53 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 25 10:58:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:54 np0005535469 python3.9[112592]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:58:54 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 25 10:58:54 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 25 10:58:54 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 25 10:58:54 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 25 10:58:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Nov 25 10:58:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Nov 25 10:58:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:56 np0005535469 python3.9[112743]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:58:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:58:57 np0005535469 python3.9[112895]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 10:58:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 25 10:58:57 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 25 10:58:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:58:58 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 25 10:58:58 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 25 10:58:58 np0005535469 python3.9[113045]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:58:58 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 25 10:58:58 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 25 10:58:59 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 25 10:58:59 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 25 10:58:59 np0005535469 python3.9[113197]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:58:59 np0005535469 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 10:58:59 np0005535469 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 10:58:59 np0005535469 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 10:58:59 np0005535469 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 10:59:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:00 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 25 10:59:00 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 25 10:59:00 np0005535469 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 10:59:00 np0005535469 python3.9[113358]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 10:59:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:03 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 25 10:59:03 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 25 10:59:03 np0005535469 python3.9[113510]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:59:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:04 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Nov 25 10:59:04 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Nov 25 10:59:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 25 10:59:04 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 25 10:59:04 np0005535469 python3.9[113664]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:59:04 np0005535469 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 10:59:04 np0005535469 systemd[1]: session-35.scope: Consumed 1min 6.989s CPU time.
Nov 25 10:59:04 np0005535469 systemd-logind[791]: Session 35 logged out. Waiting for processes to exit.
Nov 25 10:59:04 np0005535469 systemd-logind[791]: Removed session 35.
Nov 25 10:59:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:07 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 25 10:59:07 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 25 10:59:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:59:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:10 np0005535469 systemd-logind[791]: New session 36 of user zuul.
Nov 25 10:59:10 np0005535469 systemd[1]: Started Session 36 of User zuul.
Nov 25 10:59:11 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 25 10:59:11 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 25 10:59:11 np0005535469 python3.9[113845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:59:12 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 25 10:59:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:12 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 25 10:59:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:13 np0005535469 python3.9[114001]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 10:59:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 25 10:59:13 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 25 10:59:14 np0005535469 python3.9[114154]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:59:14 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 25 10:59:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:14 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 25 10:59:14 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 25 10:59:14 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 25 10:59:14 np0005535469 python3.9[114238]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:59:15 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 25 10:59:15 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 25 10:59:15 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 25 10:59:15 np0005535469 ceph-osd[90994]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 25 10:59:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:17 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 25 10:59:17 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 25 10:59:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 25 10:59:17 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 25 10:59:17 np0005535469 python3.9[114391]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:59:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:19 np0005535469 python3.9[114544]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:59:20 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 25 10:59:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:20 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 25 10:59:20 np0005535469 python3.9[114697]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:59:21 np0005535469 python3.9[114849]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 10:59:22 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 25 10:59:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:22 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 25 10:59:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:22 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 25 10:59:22 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 25 10:59:23 np0005535469 python3.9[114999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:59:23 np0005535469 python3.9[115157]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:59:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.376570) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364376763, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7389, "num_deletes": 251, "total_data_size": 9026990, "memory_usage": 9253536, "flush_reason": "Manual Compaction"}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364473620, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7397642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7527, "table_properties": {"data_size": 7369999, "index_size": 17992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 80438, "raw_average_key_size": 23, "raw_value_size": 7304363, "raw_average_value_size": 2136, "num_data_blocks": 789, "num_entries": 3419, "num_filter_entries": 3419, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085897, "oldest_key_time": 1764085897, "file_creation_time": 1764086364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 97164 microseconds, and 18325 cpu microseconds.
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.473743) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7397642 bytes OK
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.473765) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.482609) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.482658) EVENT_LOG_v1 {"time_micros": 1764086364482630, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.482703) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8994324, prev total WAL file size 8994324, number of live WAL files 2.
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.485868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7224KB) 13(53KB) 8(1944B)]
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364486001, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7454847, "oldest_snapshot_seqno": -1}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3235 keys, 7410198 bytes, temperature: kUnknown
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364552572, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7410198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7382966, "index_size": 18034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 78510, "raw_average_key_size": 24, "raw_value_size": 7318913, "raw_average_value_size": 2262, "num_data_blocks": 793, "num_entries": 3235, "num_filter_entries": 3235, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764086364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.552849) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7410198 bytes
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.559832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 111.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.1, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3525, records dropped: 290 output_compression: NoCompression
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.559862) EVENT_LOG_v1 {"time_micros": 1764086364559843, "job": 4, "event": "compaction_finished", "compaction_time_micros": 66677, "compaction_time_cpu_micros": 18561, "output_level": 6, "num_output_files": 1, "total_output_size": 7410198, "num_input_records": 3525, "num_output_records": 3235, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364560982, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364561037, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086364561068, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 10:59:24 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-15:59:24.485607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 10:59:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:26 np0005535469 python3.9[115311]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:59:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:27 np0005535469 python3.9[115598]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 10:59:28 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 25 10:59:28 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 25 10:59:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:28 np0005535469 python3.9[115748]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:59:29 np0005535469 python3.9[115902]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:59:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:31 np0005535469 python3.9[116055]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:59:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:33 np0005535469 python3.9[116308]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:59:33 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b3fd3782-a207-44fd-a906-25b18e3bbe20 does not exist
Nov 25 10:59:33 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev da5acbe8-6e6f-4354-ba24-a3a9c17629a8 does not exist
Nov 25 10:59:33 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3b13a97b-5672-4c5b-a2b9-ff9fc90daa29 does not exist
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 10:59:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 10:59:34 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 25 10:59:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:34 np0005535469 ceph-osd[89991]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 25 10:59:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 10:59:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:59:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.368807484 +0000 UTC m=+0.019678022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.490516558 +0000 UTC m=+0.141387086 container create a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 10:59:34 np0005535469 systemd[1]: Started libpod-conmon-a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c.scope.
Nov 25 10:59:34 np0005535469 python3.9[116624]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 25 10:59:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.597035953 +0000 UTC m=+0.247906491 container init a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.60419389 +0000 UTC m=+0.255064408 container start a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.608598191 +0000 UTC m=+0.259468709 container attach a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wescoff, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 10:59:34 np0005535469 sleepy_wescoff[116648]: 167 167
Nov 25 10:59:34 np0005535469 systemd[1]: libpod-a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c.scope: Deactivated successfully.
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.611685666 +0000 UTC m=+0.262556184 container died a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:59:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-eaaf611301cb6e04a1cc2378124af752c45808a6ad93431604c0452e1911cafa-merged.mount: Deactivated successfully.
Nov 25 10:59:34 np0005535469 podman[116632]: 2025-11-25 15:59:34.696025272 +0000 UTC m=+0.346895790 container remove a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:59:34 np0005535469 systemd[1]: libpod-conmon-a45ef20a147ec97eaf58121bcb582c8eef440d7671b947f3cd5de189841e747c.scope: Deactivated successfully.
Nov 25 10:59:34 np0005535469 podman[116695]: 2025-11-25 15:59:34.886130794 +0000 UTC m=+0.050054166 container create 787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:59:34 np0005535469 systemd[1]: Started libpod-conmon-787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9.scope.
Nov 25 10:59:34 np0005535469 podman[116695]: 2025-11-25 15:59:34.860349626 +0000 UTC m=+0.024273008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:59:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:59:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d333baa57e8151e5ebda4781b70af305496c998953fe36efe557df2f33649b2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d333baa57e8151e5ebda4781b70af305496c998953fe36efe557df2f33649b2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d333baa57e8151e5ebda4781b70af305496c998953fe36efe557df2f33649b2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d333baa57e8151e5ebda4781b70af305496c998953fe36efe557df2f33649b2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d333baa57e8151e5ebda4781b70af305496c998953fe36efe557df2f33649b2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:34 np0005535469 podman[116695]: 2025-11-25 15:59:34.988726982 +0000 UTC m=+0.152650354 container init 787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:59:35 np0005535469 podman[116695]: 2025-11-25 15:59:35.00250274 +0000 UTC m=+0.166426142 container start 787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 10:59:35 np0005535469 podman[116695]: 2025-11-25 15:59:35.011763205 +0000 UTC m=+0.175686567 container attach 787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:59:35 np0005535469 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 10:59:35 np0005535469 systemd[1]: session-36.scope: Consumed 19.283s CPU time.
Nov 25 10:59:35 np0005535469 systemd-logind[791]: Session 36 logged out. Waiting for processes to exit.
Nov 25 10:59:35 np0005535469 systemd-logind[791]: Removed session 36.
Nov 25 10:59:36 np0005535469 compassionate_fermat[116711]: --> passed data devices: 0 physical, 3 LVM
Nov 25 10:59:36 np0005535469 compassionate_fermat[116711]: --> relative data size: 1.0
Nov 25 10:59:36 np0005535469 compassionate_fermat[116711]: --> All data devices are unavailable
Nov 25 10:59:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:36 np0005535469 systemd[1]: libpod-787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9.scope: Deactivated successfully.
Nov 25 10:59:36 np0005535469 podman[116695]: 2025-11-25 15:59:36.090921697 +0000 UTC m=+1.254845069 container died 787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 10:59:36 np0005535469 systemd[1]: libpod-787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9.scope: Consumed 1.031s CPU time.
Nov 25 10:59:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d333baa57e8151e5ebda4781b70af305496c998953fe36efe557df2f33649b2b-merged.mount: Deactivated successfully.
Nov 25 10:59:36 np0005535469 podman[116695]: 2025-11-25 15:59:36.170355029 +0000 UTC m=+1.334278391 container remove 787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 10:59:36 np0005535469 systemd[1]: libpod-conmon-787cab1ebde1bd87f925899b681ce6b4ade0c2c8cace999620e0f18894c0ede9.scope: Deactivated successfully.
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.759283215 +0000 UTC m=+0.039254229 container create 86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 10:59:36 np0005535469 systemd[1]: Started libpod-conmon-86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914.scope.
Nov 25 10:59:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.826240575 +0000 UTC m=+0.106211609 container init 86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.832696663 +0000 UTC m=+0.112667667 container start 86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.83586942 +0000 UTC m=+0.115840424 container attach 86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:59:36 np0005535469 quirky_wozniak[116911]: 167 167
Nov 25 10:59:36 np0005535469 systemd[1]: libpod-86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914.scope: Deactivated successfully.
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.741684492 +0000 UTC m=+0.021655526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.838246645 +0000 UTC m=+0.118217679 container died 86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 10:59:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c818add42d8d68c4baf1b062d60f150422616242b44a23ef99c40bf2e135fd8c-merged.mount: Deactivated successfully.
Nov 25 10:59:36 np0005535469 podman[116895]: 2025-11-25 15:59:36.874046028 +0000 UTC m=+0.154017022 container remove 86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:59:36 np0005535469 systemd[1]: libpod-conmon-86362702b0ff3e5cf212af059c5850b047db4d527214272202c680ba4358b914.scope: Deactivated successfully.
Nov 25 10:59:37 np0005535469 podman[116934]: 2025-11-25 15:59:37.037691403 +0000 UTC m=+0.046797036 container create ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 10:59:37 np0005535469 systemd[1]: Started libpod-conmon-ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e.scope.
Nov 25 10:59:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:59:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14992703fb0a8adbe37dbe9daf6c1415519edabdaf906f6086dc0e13dfb373d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14992703fb0a8adbe37dbe9daf6c1415519edabdaf906f6086dc0e13dfb373d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14992703fb0a8adbe37dbe9daf6c1415519edabdaf906f6086dc0e13dfb373d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14992703fb0a8adbe37dbe9daf6c1415519edabdaf906f6086dc0e13dfb373d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:37 np0005535469 podman[116934]: 2025-11-25 15:59:37.016963573 +0000 UTC m=+0.026069256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:59:37 np0005535469 podman[116934]: 2025-11-25 15:59:37.124649912 +0000 UTC m=+0.133755555 container init ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 10:59:37 np0005535469 podman[116934]: 2025-11-25 15:59:37.134089301 +0000 UTC m=+0.143194954 container start ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 25 10:59:37 np0005535469 podman[116934]: 2025-11-25 15:59:37.138665597 +0000 UTC m=+0.147771250 container attach ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:59:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]: {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:    "0": [
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:        {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "devices": [
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "/dev/loop3"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            ],
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_name": "ceph_lv0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_size": "21470642176",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "name": "ceph_lv0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "tags": {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cluster_name": "ceph",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.crush_device_class": "",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.encrypted": "0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osd_id": "0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.type": "block",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.vdo": "0"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            },
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "type": "block",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "vg_name": "ceph_vg0"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:        }
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:    ],
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:    "1": [
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:        {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "devices": [
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "/dev/loop4"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            ],
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_name": "ceph_lv1",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_size": "21470642176",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "name": "ceph_lv1",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "tags": {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cluster_name": "ceph",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.crush_device_class": "",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.encrypted": "0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osd_id": "1",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.type": "block",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.vdo": "0"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            },
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "type": "block",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "vg_name": "ceph_vg1"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:        }
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:    ],
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:    "2": [
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:        {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "devices": [
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "/dev/loop5"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            ],
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_name": "ceph_lv2",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_size": "21470642176",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "name": "ceph_lv2",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "tags": {
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cephx_lockbox_secret": "",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.cluster_name": "ceph",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.crush_device_class": "",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.encrypted": "0",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osd_id": "2",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.type": "block",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:                "ceph.vdo": "0"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            },
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "type": "block",
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:            "vg_name": "ceph_vg2"
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:        }
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]:    ]
Nov 25 10:59:37 np0005535469 modest_jepsen[116950]: }
Nov 25 10:59:37 np0005535469 systemd[1]: libpod-ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e.scope: Deactivated successfully.
Nov 25 10:59:37 np0005535469 podman[116934]: 2025-11-25 15:59:37.953476922 +0000 UTC m=+0.962582555 container died ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jepsen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 10:59:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d14992703fb0a8adbe37dbe9daf6c1415519edabdaf906f6086dc0e13dfb373d-merged.mount: Deactivated successfully.
Nov 25 10:59:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 25 10:59:38 np0005535469 podman[116934]: 2025-11-25 15:59:38.02291597 +0000 UTC m=+1.032021603 container remove ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jepsen, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:59:38 np0005535469 systemd[1]: libpod-conmon-ba906bc51e8bf83bd5247c7e6a9c4c67661c4a77ef20a1375e3cb1b56d352c7e.scope: Deactivated successfully.
Nov 25 10:59:38 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 25 10:59:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.738373616 +0000 UTC m=+0.045499652 container create 9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_saha, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 10:59:38 np0005535469 systemd[1]: Started libpod-conmon-9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef.scope.
Nov 25 10:59:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.803017712 +0000 UTC m=+0.110143768 container init 9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_saha, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.810324214 +0000 UTC m=+0.117450250 container start 9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.813455899 +0000 UTC m=+0.120581935 container attach 9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.721120032 +0000 UTC m=+0.028246088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:59:38 np0005535469 crazy_saha[117130]: 167 167
Nov 25 10:59:38 np0005535469 systemd[1]: libpod-9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef.scope: Deactivated successfully.
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.81857683 +0000 UTC m=+0.125702866 container died 9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_saha, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:59:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c9924e7f16f19bcbe69c571d69939ffbaea9373ca2fe872267b9ce6718fef173-merged.mount: Deactivated successfully.
Nov 25 10:59:38 np0005535469 podman[117114]: 2025-11-25 15:59:38.852665427 +0000 UTC m=+0.159791463 container remove 9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 10:59:38 np0005535469 systemd[1]: libpod-conmon-9eb28376bb1f8a522ac571f86b6582dc5137531b8d4014175e1a8cc57e1658ef.scope: Deactivated successfully.
Nov 25 10:59:38 np0005535469 podman[117153]: 2025-11-25 15:59:38.998931197 +0000 UTC m=+0.040272077 container create 02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:59:39 np0005535469 systemd[1]: Started libpod-conmon-02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851.scope.
Nov 25 10:59:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 10:59:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a29f4116c0b0002821faa2335d8f89f88ab26ab8ede6f30e3da87dd084cb5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a29f4116c0b0002821faa2335d8f89f88ab26ab8ede6f30e3da87dd084cb5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a29f4116c0b0002821faa2335d8f89f88ab26ab8ede6f30e3da87dd084cb5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a29f4116c0b0002821faa2335d8f89f88ab26ab8ede6f30e3da87dd084cb5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 10:59:39 np0005535469 podman[117153]: 2025-11-25 15:59:38.980275004 +0000 UTC m=+0.021615664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 10:59:39 np0005535469 podman[117153]: 2025-11-25 15:59:39.080705015 +0000 UTC m=+0.122045695 container init 02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 10:59:39 np0005535469 podman[117153]: 2025-11-25 15:59:39.09582847 +0000 UTC m=+0.137169120 container start 02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 10:59:39 np0005535469 podman[117153]: 2025-11-25 15:59:39.099607395 +0000 UTC m=+0.140948095 container attach 02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_engelbart, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 10:59:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_15:59:39
Nov 25 10:59:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 10:59:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 10:59:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 25 10:59:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]: {
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "osd_id": 1,
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "type": "bluestore"
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:    },
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "osd_id": 2,
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "type": "bluestore"
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:    },
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "osd_id": 0,
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:        "type": "bluestore"
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]:    }
Nov 25 10:59:40 np0005535469 confident_engelbart[117170]: }
Nov 25 10:59:40 np0005535469 systemd[1]: libpod-02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851.scope: Deactivated successfully.
Nov 25 10:59:40 np0005535469 systemd[1]: libpod-02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851.scope: Consumed 1.061s CPU time.
Nov 25 10:59:40 np0005535469 podman[117153]: 2025-11-25 15:59:40.155672612 +0000 UTC m=+1.197013262 container died 02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_engelbart, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 10:59:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-72a29f4116c0b0002821faa2335d8f89f88ab26ab8ede6f30e3da87dd084cb5d-merged.mount: Deactivated successfully.
Nov 25 10:59:40 np0005535469 podman[117153]: 2025-11-25 15:59:40.279935917 +0000 UTC m=+1.321276567 container remove 02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 10:59:40 np0005535469 systemd[1]: libpod-conmon-02e4bd607f0576b22f6962c1df9bb7087aa09d0173c4a439630f0870360e8851.scope: Deactivated successfully.
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 10:59:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 10:59:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:59:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 10:59:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e641f6f7-c2ae-4294-81c3-07d36c87a6db does not exist
Nov 25 10:59:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7828dfd4-86dc-4a33-9093-3c747b8cc3bb does not exist
Nov 25 10:59:41 np0005535469 systemd-logind[791]: New session 37 of user zuul.
Nov 25 10:59:41 np0005535469 systemd[1]: Started Session 37 of User zuul.
Nov 25 10:59:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:59:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 10:59:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:42 np0005535469 python3.9[117420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:43 np0005535469 python3.9[117574]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:59:44 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 25 10:59:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:44 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 25 10:59:44 np0005535469 python3.9[117767]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:59:44 np0005535469 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 10:59:44 np0005535469 systemd[1]: session-37.scope: Consumed 2.425s CPU time.
Nov 25 10:59:44 np0005535469 systemd-logind[791]: Session 37 logged out. Waiting for processes to exit.
Nov 25 10:59:44 np0005535469 systemd-logind[791]: Removed session 37.
Nov 25 10:59:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:50 np0005535469 systemd-logind[791]: New session 38 of user zuul.
Nov 25 10:59:50 np0005535469 systemd[1]: Started Session 38 of User zuul.
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 10:59:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 10:59:51 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 25 10:59:51 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 25 10:59:51 np0005535469 python3.9[117946]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:59:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:52 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 25 10:59:52 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 25 10:59:52 np0005535469 python3.9[118100]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:59:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:53 np0005535469 python3.9[118256]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:59:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:54 np0005535469 python3.9[118340]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:59:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 25 10:59:55 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 25 10:59:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:56 np0005535469 python3.9[118493]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:59:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 10:59:57 np0005535469 python3.9[118688]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:59:58 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 25 10:59:58 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 25 10:59:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 10:59:58 np0005535469 python3.9[118840]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:59:59 np0005535469 python3.9[119005]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:00 np0005535469 python3.9[119083]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:00 np0005535469 python3.9[119235]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:01 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 25 11:00:01 np0005535469 ceph-osd[88890]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 25 11:00:01 np0005535469 python3.9[119313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:02 np0005535469 python3.9[119465]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:02 np0005535469 python3.9[119617]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:03 np0005535469 python3.9[119769]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:04 np0005535469 python3.9[119921]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:04 np0005535469 python3.9[120073]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:00:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:07 np0005535469 python3.9[120226]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:00:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:08 np0005535469 python3.9[120380]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:00:09 np0005535469 python3.9[120532]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:00:09 np0005535469 python3.9[120684]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:00:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:11 np0005535469 python3.9[120837]: ansible-service_facts Invoked
Nov 25 11:00:11 np0005535469 network[120854]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 11:00:11 np0005535469 network[120855]: 'network-scripts' will be removed from distribution in near future.
Nov 25 11:00:11 np0005535469 network[120856]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 11:00:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:18 np0005535469 python3.9[121308]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:00:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:21 np0005535469 python3.9[121461]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 11:00:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:22 np0005535469 python3.9[121613]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:23 np0005535469 python3.9[121691]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:23 np0005535469 python3.9[121843]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:24 np0005535469 python3.9[121921]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:25 np0005535469 python3.9[122073]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:26 np0005535469 python3.9[122225]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 11:00:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:27 np0005535469 python3.9[122309]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:00:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:28 np0005535469 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 11:00:28 np0005535469 systemd[1]: session-38.scope: Consumed 26.341s CPU time.
Nov 25 11:00:28 np0005535469 systemd-logind[791]: Session 38 logged out. Waiting for processes to exit.
Nov 25 11:00:28 np0005535469 systemd-logind[791]: Removed session 38.
Nov 25 11:00:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:34 np0005535469 systemd-logind[791]: New session 39 of user zuul.
Nov 25 11:00:34 np0005535469 systemd[1]: Started Session 39 of User zuul.
Nov 25 11:00:35 np0005535469 python3.9[122491]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:36 np0005535469 python3.9[122643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:37 np0005535469 python3.9[122721]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:37 np0005535469 systemd-logind[791]: Session 39 logged out. Waiting for processes to exit.
Nov 25 11:00:37 np0005535469 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 11:00:37 np0005535469 systemd[1]: session-39.scope: Consumed 1.825s CPU time.
Nov 25 11:00:37 np0005535469 systemd-logind[791]: Removed session 39.
Nov 25 11:00:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:00:39
Nov 25 11:00:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:00:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:00:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes']
Nov 25 11:00:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:00:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 18e2cf02-185a-400c-b167-53a40b9b0c66 does not exist
Nov 25 11:00:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2095d124-51ab-4952-934f-d9489eba28d8 does not exist
Nov 25 11:00:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cb1084e9-f3c4-4b82-891f-97266d5bceae does not exist
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:00:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:00:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.125264222 +0000 UTC m=+0.053786183 container create 1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:00:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:00:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:00:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:00:42 np0005535469 systemd[1]: Started libpod-conmon-1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d.scope.
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.098728565 +0000 UTC m=+0.027250616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:00:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.22205593 +0000 UTC m=+0.150577981 container init 1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.22916665 +0000 UTC m=+0.157688641 container start 1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.233103125 +0000 UTC m=+0.161625186 container attach 1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:00:42 np0005535469 elegant_jemison[123034]: 167 167
Nov 25 11:00:42 np0005535469 systemd[1]: libpod-1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d.scope: Deactivated successfully.
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.2347946 +0000 UTC m=+0.163316561 container died 1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:00:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e03dae7d52510cb495c3326e501a159d354318bb5e72c739ebd7ad37a40f6646-merged.mount: Deactivated successfully.
Nov 25 11:00:42 np0005535469 podman[123018]: 2025-11-25 16:00:42.279796389 +0000 UTC m=+0.208318360 container remove 1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:00:42 np0005535469 systemd[1]: libpod-conmon-1fbf80087658a148b254cd9b1482395b8ca187983890944a22f0eeab89c6b20d.scope: Deactivated successfully.
Nov 25 11:00:42 np0005535469 podman[123058]: 2025-11-25 16:00:42.455260654 +0000 UTC m=+0.050910168 container create dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 11:00:42 np0005535469 systemd[1]: Started libpod-conmon-dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5.scope.
Nov 25 11:00:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:00:42 np0005535469 podman[123058]: 2025-11-25 16:00:42.434628253 +0000 UTC m=+0.030277857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:00:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7060b02c0154c026549df635d377344be8314f54d4e5d6574aaacabaf9cdcadf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7060b02c0154c026549df635d377344be8314f54d4e5d6574aaacabaf9cdcadf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7060b02c0154c026549df635d377344be8314f54d4e5d6574aaacabaf9cdcadf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7060b02c0154c026549df635d377344be8314f54d4e5d6574aaacabaf9cdcadf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7060b02c0154c026549df635d377344be8314f54d4e5d6574aaacabaf9cdcadf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:42 np0005535469 podman[123058]: 2025-11-25 16:00:42.558612867 +0000 UTC m=+0.154262471 container init dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 11:00:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:42 np0005535469 podman[123058]: 2025-11-25 16:00:42.569667081 +0000 UTC m=+0.165316635 container start dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:00:42 np0005535469 podman[123058]: 2025-11-25 16:00:42.573968316 +0000 UTC m=+0.169617870 container attach dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:00:43 np0005535469 systemd-logind[791]: New session 40 of user zuul.
Nov 25 11:00:43 np0005535469 systemd[1]: Started Session 40 of User zuul.
Nov 25 11:00:43 np0005535469 interesting_golick[123075]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:00:43 np0005535469 interesting_golick[123075]: --> relative data size: 1.0
Nov 25 11:00:43 np0005535469 interesting_golick[123075]: --> All data devices are unavailable
Nov 25 11:00:43 np0005535469 systemd[1]: libpod-dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5.scope: Deactivated successfully.
Nov 25 11:00:43 np0005535469 podman[123058]: 2025-11-25 16:00:43.716129844 +0000 UTC m=+1.311779368 container died dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:00:43 np0005535469 systemd[1]: libpod-dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5.scope: Consumed 1.106s CPU time.
Nov 25 11:00:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7060b02c0154c026549df635d377344be8314f54d4e5d6574aaacabaf9cdcadf-merged.mount: Deactivated successfully.
Nov 25 11:00:43 np0005535469 podman[123058]: 2025-11-25 16:00:43.794754599 +0000 UTC m=+1.390404113 container remove dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:00:43 np0005535469 systemd[1]: libpod-conmon-dbdef5303b946f98040ea91267a83d160e536caabddab7de10ac4de211e3f2a5.scope: Deactivated successfully.
Nov 25 11:00:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:44 np0005535469 python3.9[123288]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.56016213 +0000 UTC m=+0.046094329 container create b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 11:00:44 np0005535469 systemd[1]: Started libpod-conmon-b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304.scope.
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.541220495 +0000 UTC m=+0.027152704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:00:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.664881189 +0000 UTC m=+0.150813428 container init b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.674854215 +0000 UTC m=+0.160786404 container start b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.678317667 +0000 UTC m=+0.164249856 container attach b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:00:44 np0005535469 happy_saha[123474]: 167 167
Nov 25 11:00:44 np0005535469 systemd[1]: libpod-b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304.scope: Deactivated successfully.
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.682814947 +0000 UTC m=+0.168747196 container died b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:00:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7042e724c3e887a113e01f5d9d88f775acfe6895985d6ee648c26ba01d1cb9de-merged.mount: Deactivated successfully.
Nov 25 11:00:44 np0005535469 podman[123437]: 2025-11-25 16:00:44.749947916 +0000 UTC m=+0.235880125 container remove b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:00:44 np0005535469 systemd[1]: libpod-conmon-b803a1bf3bbe4d81967ec3932410bf45a66c40647f4d22583117efee76418304.scope: Deactivated successfully.
Nov 25 11:00:44 np0005535469 podman[123527]: 2025-11-25 16:00:44.977683173 +0000 UTC m=+0.066881783 container create 7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:00:45 np0005535469 systemd[1]: Started libpod-conmon-7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579.scope.
Nov 25 11:00:45 np0005535469 podman[123527]: 2025-11-25 16:00:44.956426086 +0000 UTC m=+0.045624716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:00:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:00:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7099f8f43b3657f1ec62128622ef2831f391ff668a20fe3ed2907c651d7d6771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7099f8f43b3657f1ec62128622ef2831f391ff668a20fe3ed2907c651d7d6771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7099f8f43b3657f1ec62128622ef2831f391ff668a20fe3ed2907c651d7d6771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7099f8f43b3657f1ec62128622ef2831f391ff668a20fe3ed2907c651d7d6771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:45 np0005535469 podman[123527]: 2025-11-25 16:00:45.079238058 +0000 UTC m=+0.168436758 container init 7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:00:45 np0005535469 podman[123527]: 2025-11-25 16:00:45.094790703 +0000 UTC m=+0.183989343 container start 7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:00:45 np0005535469 podman[123527]: 2025-11-25 16:00:45.099237831 +0000 UTC m=+0.188436491 container attach 7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 11:00:45 np0005535469 python3.9[123624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]: {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:    "0": [
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:        {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "devices": [
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "/dev/loop3"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            ],
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_name": "ceph_lv0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_size": "21470642176",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "name": "ceph_lv0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "tags": {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cluster_name": "ceph",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.crush_device_class": "",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.encrypted": "0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osd_id": "0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.type": "block",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.vdo": "0"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            },
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "type": "block",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "vg_name": "ceph_vg0"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:        }
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:    ],
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:    "1": [
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:        {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "devices": [
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "/dev/loop4"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            ],
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_name": "ceph_lv1",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_size": "21470642176",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "name": "ceph_lv1",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "tags": {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cluster_name": "ceph",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.crush_device_class": "",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.encrypted": "0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osd_id": "1",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.type": "block",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.vdo": "0"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            },
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "type": "block",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "vg_name": "ceph_vg1"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:        }
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:    ],
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:    "2": [
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:        {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "devices": [
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "/dev/loop5"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            ],
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_name": "ceph_lv2",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_size": "21470642176",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "name": "ceph_lv2",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "tags": {
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.cluster_name": "ceph",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.crush_device_class": "",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.encrypted": "0",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osd_id": "2",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.type": "block",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:                "ceph.vdo": "0"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            },
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "type": "block",
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:            "vg_name": "ceph_vg2"
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:        }
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]:    ]
Nov 25 11:00:45 np0005535469 relaxed_torvalds[123573]: }
Nov 25 11:00:45 np0005535469 systemd[1]: libpod-7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579.scope: Deactivated successfully.
Nov 25 11:00:45 np0005535469 podman[123527]: 2025-11-25 16:00:45.976233696 +0000 UTC m=+1.065432346 container died 7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:00:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7099f8f43b3657f1ec62128622ef2831f391ff668a20fe3ed2907c651d7d6771-merged.mount: Deactivated successfully.
Nov 25 11:00:46 np0005535469 podman[123527]: 2025-11-25 16:00:46.056473603 +0000 UTC m=+1.145672233 container remove 7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:00:46 np0005535469 systemd[1]: libpod-conmon-7205b19e9083508af7c377936e99baa36e7dca6f99e562e15158fe87e19ed579.scope: Deactivated successfully.
Nov 25 11:00:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:46 np0005535469 python3.9[123866]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.829465436 +0000 UTC m=+0.046995624 container create a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:00:46 np0005535469 systemd[1]: Started libpod-conmon-a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b.scope.
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.81050754 +0000 UTC m=+0.028037738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:00:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.931199476 +0000 UTC m=+0.148729694 container init a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.943131744 +0000 UTC m=+0.160661922 container start a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.946889674 +0000 UTC m=+0.164419862 container attach a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:00:46 np0005535469 vibrant_khayyam[124051]: 167 167
Nov 25 11:00:46 np0005535469 systemd[1]: libpod-a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b.scope: Deactivated successfully.
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.949651208 +0000 UTC m=+0.167181416 container died a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:00:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8f739fc40a23ee69b2154db125cbf6db3ac9f0aba8a8005db222668b3c6635ab-merged.mount: Deactivated successfully.
Nov 25 11:00:46 np0005535469 podman[124033]: 2025-11-25 16:00:46.996686641 +0000 UTC m=+0.214216809 container remove a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:00:47 np0005535469 systemd[1]: libpod-conmon-a5c03653ffd763ac4868e01b6eae6f3d206b09817bd02d12a969edced25ca42b.scope: Deactivated successfully.
Nov 25 11:00:47 np0005535469 python3.9[124035]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ngpiwl3v recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:47 np0005535469 podman[124099]: 2025-11-25 16:00:47.171999881 +0000 UTC m=+0.056587579 container create 1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 11:00:47 np0005535469 systemd[1]: Started libpod-conmon-1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f.scope.
Nov 25 11:00:47 np0005535469 podman[124099]: 2025-11-25 16:00:47.147446127 +0000 UTC m=+0.032033865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:00:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:00:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f946936e577f0964079c112533f5566a4b797111524d0a6420b4a16685ba929c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f946936e577f0964079c112533f5566a4b797111524d0a6420b4a16685ba929c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f946936e577f0964079c112533f5566a4b797111524d0a6420b4a16685ba929c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f946936e577f0964079c112533f5566a4b797111524d0a6420b4a16685ba929c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:00:47 np0005535469 podman[124099]: 2025-11-25 16:00:47.285558986 +0000 UTC m=+0.170146724 container init 1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 11:00:47 np0005535469 podman[124099]: 2025-11-25 16:00:47.293626631 +0000 UTC m=+0.178214359 container start 1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:00:47 np0005535469 podman[124099]: 2025-11-25 16:00:47.298537372 +0000 UTC m=+0.183125090 container attach 1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:00:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:48 np0005535469 python3.9[124248]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]: {
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "osd_id": 1,
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "type": "bluestore"
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:    },
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "osd_id": 2,
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "type": "bluestore"
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:    },
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "osd_id": 0,
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:        "type": "bluestore"
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]:    }
Nov 25 11:00:48 np0005535469 intelligent_kapitsa[124116]: }
Nov 25 11:00:48 np0005535469 systemd[1]: libpod-1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f.scope: Deactivated successfully.
Nov 25 11:00:48 np0005535469 systemd[1]: libpod-1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f.scope: Consumed 1.035s CPU time.
Nov 25 11:00:48 np0005535469 podman[124099]: 2025-11-25 16:00:48.329039735 +0000 UTC m=+1.213627423 container died 1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:00:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f946936e577f0964079c112533f5566a4b797111524d0a6420b4a16685ba929c-merged.mount: Deactivated successfully.
Nov 25 11:00:48 np0005535469 podman[124099]: 2025-11-25 16:00:48.400912 +0000 UTC m=+1.285499698 container remove 1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:00:48 np0005535469 systemd[1]: libpod-conmon-1383d2f743be8d5ed769556854af3d6a58e192803f37f5ca077f066a6c440c3f.scope: Deactivated successfully.
Nov 25 11:00:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:00:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:00:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:00:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:00:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2b179d57-cb50-4e24-a06a-dc6fadc7b1f5 does not exist
Nov 25 11:00:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 01e1377a-7708-4fd4-b2bf-49b4042e5a02 does not exist
Nov 25 11:00:48 np0005535469 python3.9[124354]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.yh3cy3pl recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:00:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:00:49 np0005535469 python3.9[124568]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:49 np0005535469 python3.9[124720]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:50 np0005535469 python3.9[124798]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:00:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:00:51 np0005535469 python3.9[124950]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:51 np0005535469 python3.9[125028]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:00:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:52 np0005535469 python3.9[125180]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:53 np0005535469 python3.9[125332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:53 np0005535469 python3.9[125410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:54 np0005535469 python3.9[125562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:54 np0005535469 python3.9[125640]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:56 np0005535469 python3.9[125792]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:00:56 np0005535469 systemd[1]: Reloading.
Nov 25 11:00:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:56 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:00:56 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:00:57 np0005535469 python3.9[125980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:00:57 np0005535469 python3.9[126058]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:00:58 np0005535469 python3.9[126210]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:00:58 np0005535469 python3.9[126288]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:00:59 np0005535469 python3.9[126440]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:00:59 np0005535469 systemd[1]: Reloading.
Nov 25 11:00:59 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:00:59 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:01:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:00 np0005535469 systemd[1]: Starting Create netns directory...
Nov 25 11:01:00 np0005535469 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 11:01:00 np0005535469 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 11:01:00 np0005535469 systemd[1]: Finished Create netns directory.
Nov 25 11:01:01 np0005535469 python3.9[126633]: ansible-ansible.builtin.service_facts Invoked
Nov 25 11:01:01 np0005535469 network[126661]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 11:01:01 np0005535469 network[126665]: 'network-scripts' will be removed from distribution in near future.
Nov 25 11:01:01 np0005535469 network[126667]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 11:01:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:04 np0005535469 python3.9[126929]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:04 np0005535469 python3.9[127007]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:05 np0005535469 python3.9[127159]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:06 np0005535469 python3.9[127311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:06 np0005535469 python3.9[127389]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:07 np0005535469 python3.9[127541]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 11:01:07 np0005535469 systemd[1]: Starting Time & Date Service...
Nov 25 11:01:07 np0005535469 systemd[1]: Started Time & Date Service.
Nov 25 11:01:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:08 np0005535469 python3.9[127697]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:09 np0005535469 python3.9[127849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:09 np0005535469 python3.9[127927]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:01:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:10 np0005535469 python3.9[128079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:10 np0005535469 python3.9[128157]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._9ef2lb4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:11 np0005535469 python3.9[128309]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:11 np0005535469 python3.9[128387]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:12 np0005535469 python3.9[128539]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:01:13 np0005535469 python3[128692]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 11:01:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:14 np0005535469 python3.9[128844]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:14 np0005535469 python3.9[128922]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:15 np0005535469 python3.9[129074]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:15 np0005535469 python3.9[129152]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:16 np0005535469 python3.9[129304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:17 np0005535469 python3.9[129382]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:17 np0005535469 python3.9[129534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:18 np0005535469 python3.9[129612]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:18 np0005535469 python3.9[129764]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:19 np0005535469 python3.9[129842]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:20 np0005535469 python3.9[129994]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:01:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:21 np0005535469 python3.9[130149]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:21 np0005535469 python3.9[130301]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:22 np0005535469 python3.9[130453]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:23 np0005535469 python3.9[130605]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 11:01:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:24 np0005535469 python3.9[130757]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 11:01:24 np0005535469 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 11:01:24 np0005535469 systemd[1]: session-40.scope: Consumed 30.261s CPU time.
Nov 25 11:01:24 np0005535469 systemd-logind[791]: Session 40 logged out. Waiting for processes to exit.
Nov 25 11:01:24 np0005535469 systemd-logind[791]: Removed session 40.
Nov 25 11:01:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:30 np0005535469 systemd-logind[791]: New session 41 of user zuul.
Nov 25 11:01:30 np0005535469 systemd[1]: Started Session 41 of User zuul.
Nov 25 11:01:31 np0005535469 python3.9[130939]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 11:01:31 np0005535469 python3.9[131091]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:01:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.577783) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086492577815, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1275, "num_deletes": 251, "total_data_size": 1911312, "memory_usage": 1934624, "flush_reason": "Manual Compaction"}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086492590311, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1120480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7528, "largest_seqno": 8802, "table_properties": {"data_size": 1115964, "index_size": 1912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11683, "raw_average_key_size": 19, "raw_value_size": 1106028, "raw_average_value_size": 1877, "num_data_blocks": 91, "num_entries": 589, "num_filter_entries": 589, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086365, "oldest_key_time": 1764086365, "file_creation_time": 1764086492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 12588 microseconds, and 4074 cpu microseconds.
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.590369) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1120480 bytes OK
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.590388) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.591591) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.591660) EVENT_LOG_v1 {"time_micros": 1764086492591627, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.591681) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1905544, prev total WAL file size 1905544, number of live WAL files 2.
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.592389) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1094KB)], [20(7236KB)]
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086492592460, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8530678, "oldest_snapshot_seqno": -1}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3368 keys, 6723480 bytes, temperature: kUnknown
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086492631783, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6723480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6697621, "index_size": 16342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81580, "raw_average_key_size": 24, "raw_value_size": 6633330, "raw_average_value_size": 1969, "num_data_blocks": 725, "num_entries": 3368, "num_filter_entries": 3368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764086492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.631993) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6723480 bytes
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.633374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.6 rd, 170.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.1 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(13.6) write-amplify(6.0) OK, records in: 3824, records dropped: 456 output_compression: NoCompression
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.633391) EVENT_LOG_v1 {"time_micros": 1764086492633382, "job": 6, "event": "compaction_finished", "compaction_time_micros": 39385, "compaction_time_cpu_micros": 14773, "output_level": 6, "num_output_files": 1, "total_output_size": 6723480, "num_input_records": 3824, "num_output_records": 3368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086492633679, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086492634881, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.592292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.634950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.634958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.634962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.634966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:01:32 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:01:32.634970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:01:32 np0005535469 python3.9[131245]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 11:01:33 np0005535469 python3.9[131397]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.pcp8pvuu follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:01:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:34 np0005535469 python3.9[131522]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.pcp8pvuu mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086492.890587-44-59496901225460/.source.pcp8pvuu _original_basename=.pn7_c64b follow=False checksum=b8c0ab5e8d594437cfbcd662edc70076f9beafca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:35 np0005535469 python3.9[131674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:01:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:36 np0005535469 python3.9[131826]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfZCSXvOcw/IEogxXwqVl31uDKlIGL1yCYoHYf1Hq37wPmrm3332XZgVpiaonqTGDrVKu7kC5ZtT4fNx86COcVL5P35x1CMYCXCpbfLPtDrB9ovk+nvaWoeiW2HY7nhMpNWLz0UPcMAjbjq7+LwUDy+w8fFyMcB7uuPPnD5kG+qG2Os9Lp+Q5NJCU/WO68Q3Vk1qBqiEB9QECCNfmF4fXehpW/idQR/ykJooTOb0djW2KAfUtle5XBLF2UxGobAsyIttY31J9sy0dhqY0h3H+ZPiNB11PROCj1OdMyXg7chvF6OJ4aFHlmNY+YDI0wSt8kC43C6i9RYDW71/FuGDjMBMwPp9DzG9LAc8VPcsQj6YEiNoqUWQGsD2+mYBhj1ofQVcISiuL7hrONc2vcCtC3cbYb6a9UEXcQxs8z/pWRw5kCujEYklnj1O6Y6pjWra9fnhogwikDwViiDGLOwhXgyQosxllCDGOEafFcMbL94TVwFENc0FeMSLXtU3RBqhU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEjXuWbF2e+9xQSQ7XZIhNCtP0gS6mmUIrlSH7Q7Jpyy#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNra5eob9nZprGwdbwfM0OOC+PQK0G3M5SCkOTMaDRTa/6360f+sILjj4tVAZ6FcmfKKqWN4+4hUUPUqbZHE+ew=#012 create=True mode=0644 path=/tmp/ansible.pcp8pvuu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:37 np0005535469 python3.9[131978]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pcp8pvuu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:01:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:01:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 1900 writes, 8688 keys, 1900 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 1899 writes, 1899 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1900 writes, 8688 keys, 1900 commit groups, 1.0 writes per commit group, ingest: 10.45 MB, 0.02 MB/s#012Interval WAL: 1899 writes, 1899 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     52.1      0.16              0.02         3    0.052       0      0       0.0       0.0#012  L6      1/0    6.41 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    143.7    127.1      0.11              0.03         2    0.053    7349    746       0.0       0.0#012 Sum      1/0    6.41 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     58.0     82.4      0.26              0.06         5    0.053    7349    746       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     70.6    100.1      0.22              0.06         4    0.054    7349    746       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    143.7    127.1      0.11              0.03         2    0.053    7349    746       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     74.0      0.11              0.02         2    0.055       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 308.00 MB usage: 483.08 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(36,396.11 KB,0.125593%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,58.67 KB,0.0186028%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 11:01:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:37 np0005535469 python3.9[132132]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pcp8pvuu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:37 np0005535469 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 11:01:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:38 np0005535469 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 11:01:38 np0005535469 systemd[1]: session-41.scope: Consumed 5.088s CPU time.
Nov 25 11:01:38 np0005535469 systemd-logind[791]: Session 41 logged out. Waiting for processes to exit.
Nov 25 11:01:38 np0005535469 systemd-logind[791]: Removed session 41.
Nov 25 11:01:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:01:39
Nov 25 11:01:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:01:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:01:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.control']
Nov 25 11:01:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:01:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:43 np0005535469 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 11:01:43 np0005535469 systemd[1]: session-18.scope: Consumed 1min 25.507s CPU time.
Nov 25 11:01:43 np0005535469 systemd-logind[791]: Session 18 logged out. Waiting for processes to exit.
Nov 25 11:01:43 np0005535469 systemd-logind[791]: Removed session 18.
Nov 25 11:01:43 np0005535469 systemd-logind[791]: New session 42 of user zuul.
Nov 25 11:01:43 np0005535469 systemd[1]: Started Session 42 of User zuul.
Nov 25 11:01:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:44 np0005535469 python3.9[132314]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:01:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:46 np0005535469 python3.9[132470]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 11:01:46 np0005535469 python3.9[132624]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:01:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:48 np0005535469 python3.9[132777]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:01:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:48 np0005535469 python3.9[132978]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:01:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f21084a6-9ead-4f10-8f43-bf1006cc53fa does not exist
Nov 25 11:01:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 329667f6-213a-473d-a624-d1170d0062df does not exist
Nov 25 11:01:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2d040917-7285-4d53-a92c-cad9a4db0c5e does not exist
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:01:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:01:49 np0005535469 python3.9[133237]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:50 np0005535469 podman[133379]: 2025-11-25 16:01:50.128858627 +0000 UTC m=+0.019936634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:01:50 np0005535469 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 11:01:50 np0005535469 systemd[1]: session-42.scope: Consumed 3.749s CPU time.
Nov 25 11:01:50 np0005535469 systemd-logind[791]: Session 42 logged out. Waiting for processes to exit.
Nov 25 11:01:50 np0005535469 systemd-logind[791]: Removed session 42.
Nov 25 11:01:50 np0005535469 podman[133379]: 2025-11-25 16:01:50.444448464 +0000 UTC m=+0.335526511 container create 3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:01:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:01:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:01:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:01:50 np0005535469 systemd[1]: Started libpod-conmon-3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63.scope.
Nov 25 11:01:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:01:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:01:50 np0005535469 podman[133379]: 2025-11-25 16:01:50.913374915 +0000 UTC m=+0.804452942 container init 3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:01:50 np0005535469 podman[133379]: 2025-11-25 16:01:50.922855968 +0000 UTC m=+0.813933975 container start 3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:01:50 np0005535469 exciting_proskuriakova[133395]: 167 167
Nov 25 11:01:50 np0005535469 systemd[1]: libpod-3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63.scope: Deactivated successfully.
Nov 25 11:01:51 np0005535469 podman[133379]: 2025-11-25 16:01:51.21133649 +0000 UTC m=+1.102414527 container attach 3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:01:51 np0005535469 podman[133379]: 2025-11-25 16:01:51.212341937 +0000 UTC m=+1.103419964 container died 3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:01:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-827b0ffcf4cb36e073bb519cd7f04f8e0e5625b3d5f239da69ef30fd85279dc6-merged.mount: Deactivated successfully.
Nov 25 11:01:51 np0005535469 podman[133379]: 2025-11-25 16:01:51.987874213 +0000 UTC m=+1.878952240 container remove 3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:01:52 np0005535469 systemd[1]: libpod-conmon-3df6ddc28d8ef7a32f72d5536c61388851f7912249c7280327e08fa1677bde63.scope: Deactivated successfully.
Nov 25 11:01:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:52 np0005535469 podman[133419]: 2025-11-25 16:01:52.117041851 +0000 UTC m=+0.022013300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:01:52 np0005535469 podman[133419]: 2025-11-25 16:01:52.246153497 +0000 UTC m=+0.151124936 container create ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:01:52 np0005535469 systemd[1]: Started libpod-conmon-ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47.scope.
Nov 25 11:01:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:01:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098752b67ed4fe5a995812804a5c6462bf3ede61fbdb12bcc7aa4e51db29226e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098752b67ed4fe5a995812804a5c6462bf3ede61fbdb12bcc7aa4e51db29226e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098752b67ed4fe5a995812804a5c6462bf3ede61fbdb12bcc7aa4e51db29226e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098752b67ed4fe5a995812804a5c6462bf3ede61fbdb12bcc7aa4e51db29226e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098752b67ed4fe5a995812804a5c6462bf3ede61fbdb12bcc7aa4e51db29226e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:52 np0005535469 podman[133419]: 2025-11-25 16:01:52.618275606 +0000 UTC m=+0.523247035 container init ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:01:52 np0005535469 podman[133419]: 2025-11-25 16:01:52.625857549 +0000 UTC m=+0.530828968 container start ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banach, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:01:52 np0005535469 podman[133419]: 2025-11-25 16:01:52.679102854 +0000 UTC m=+0.584074343 container attach ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banach, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:01:53 np0005535469 sad_banach[133436]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:01:53 np0005535469 sad_banach[133436]: --> relative data size: 1.0
Nov 25 11:01:53 np0005535469 sad_banach[133436]: --> All data devices are unavailable
Nov 25 11:01:53 np0005535469 systemd[1]: libpod-ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47.scope: Deactivated successfully.
Nov 25 11:01:53 np0005535469 podman[133465]: 2025-11-25 16:01:53.709225945 +0000 UTC m=+0.026198732 container died ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:01:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-098752b67ed4fe5a995812804a5c6462bf3ede61fbdb12bcc7aa4e51db29226e-merged.mount: Deactivated successfully.
Nov 25 11:01:55 np0005535469 podman[133465]: 2025-11-25 16:01:55.30852572 +0000 UTC m=+1.625498527 container remove ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:01:55 np0005535469 systemd[1]: libpod-conmon-ce6301a39d66b4500f66fa9aad92c7cff2e2d52c79a4556f5df5f1c38c89eb47.scope: Deactivated successfully.
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:56.056976982 +0000 UTC m=+0.114982508 container create 868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:55.961880597 +0000 UTC m=+0.019886143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:01:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:56 np0005535469 systemd[1]: Started libpod-conmon-868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01.scope.
Nov 25 11:01:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:56.274871294 +0000 UTC m=+0.332876880 container init 868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:56.282328273 +0000 UTC m=+0.340333799 container start 868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:01:56 np0005535469 gifted_meitner[133638]: 167 167
Nov 25 11:01:56 np0005535469 systemd[1]: libpod-868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01.scope: Deactivated successfully.
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:56.301158167 +0000 UTC m=+0.359163783 container attach 868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:56.301601719 +0000 UTC m=+0.359607285 container died 868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:01:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-86a150bff059f2df27b561b79083749d14d73839bb38c2ce205f2f3bb790f265-merged.mount: Deactivated successfully.
Nov 25 11:01:56 np0005535469 podman[133621]: 2025-11-25 16:01:56.602488683 +0000 UTC m=+0.660494199 container remove 868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:01:56 np0005535469 systemd[1]: libpod-conmon-868505eb9c0a4261ab4c8de17084348ba6981682e41ac50ca3a538c6ec499c01.scope: Deactivated successfully.
Nov 25 11:01:56 np0005535469 systemd-logind[791]: New session 43 of user zuul.
Nov 25 11:01:56 np0005535469 systemd[1]: Started Session 43 of User zuul.
Nov 25 11:01:56 np0005535469 podman[133665]: 2025-11-25 16:01:56.881839979 +0000 UTC m=+0.101893508 container create f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:01:56 np0005535469 podman[133665]: 2025-11-25 16:01:56.819791158 +0000 UTC m=+0.039844777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:01:56 np0005535469 systemd[1]: Started libpod-conmon-f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9.scope.
Nov 25 11:01:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:01:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1539954f4c5be914ad2c81cd07f6f7e81bf309db824a2f9f9d08a1285549d75d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1539954f4c5be914ad2c81cd07f6f7e81bf309db824a2f9f9d08a1285549d75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1539954f4c5be914ad2c81cd07f6f7e81bf309db824a2f9f9d08a1285549d75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1539954f4c5be914ad2c81cd07f6f7e81bf309db824a2f9f9d08a1285549d75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:56 np0005535469 podman[133665]: 2025-11-25 16:01:56.994842164 +0000 UTC m=+0.214895723 container init f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:01:57 np0005535469 podman[133665]: 2025-11-25 16:01:57.004447311 +0000 UTC m=+0.224500850 container start f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:01:57 np0005535469 podman[133665]: 2025-11-25 16:01:57.026410209 +0000 UTC m=+0.246463758 container attach f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:01:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]: {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:    "0": [
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:        {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "devices": [
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "/dev/loop3"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            ],
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_name": "ceph_lv0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_size": "21470642176",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "name": "ceph_lv0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "tags": {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cluster_name": "ceph",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.crush_device_class": "",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.encrypted": "0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osd_id": "0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.type": "block",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.vdo": "0"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            },
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "type": "block",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "vg_name": "ceph_vg0"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:        }
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:    ],
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:    "1": [
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:        {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "devices": [
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "/dev/loop4"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            ],
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_name": "ceph_lv1",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_size": "21470642176",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "name": "ceph_lv1",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "tags": {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cluster_name": "ceph",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.crush_device_class": "",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.encrypted": "0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osd_id": "1",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.type": "block",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.vdo": "0"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            },
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "type": "block",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "vg_name": "ceph_vg1"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:        }
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:    ],
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:    "2": [
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:        {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "devices": [
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "/dev/loop5"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            ],
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_name": "ceph_lv2",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_size": "21470642176",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "name": "ceph_lv2",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "tags": {
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.cluster_name": "ceph",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.crush_device_class": "",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.encrypted": "0",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osd_id": "2",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.type": "block",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:                "ceph.vdo": "0"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            },
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "type": "block",
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:            "vg_name": "ceph_vg2"
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:        }
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]:    ]
Nov 25 11:01:57 np0005535469 peaceful_cray[133718]: }
Nov 25 11:01:57 np0005535469 python3.9[133836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:01:57 np0005535469 systemd[1]: libpod-f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9.scope: Deactivated successfully.
Nov 25 11:01:57 np0005535469 podman[133665]: 2025-11-25 16:01:57.882596765 +0000 UTC m=+1.102650314 container died f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:01:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1539954f4c5be914ad2c81cd07f6f7e81bf309db824a2f9f9d08a1285549d75d-merged.mount: Deactivated successfully.
Nov 25 11:01:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:01:58 np0005535469 podman[133665]: 2025-11-25 16:01:58.18318261 +0000 UTC m=+1.403236139 container remove f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:01:58 np0005535469 systemd[1]: libpod-conmon-f42757cb9cd74a6795451d4bdd59f4eb31e2b9f49ac32ce744f11ba5c80503f9.scope: Deactivated successfully.
Nov 25 11:01:58 np0005535469 podman[134149]: 2025-11-25 16:01:58.845899197 +0000 UTC m=+0.081959045 container create 8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:01:58 np0005535469 podman[134149]: 2025-11-25 16:01:58.804213662 +0000 UTC m=+0.040273590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:01:58 np0005535469 systemd[1]: Started libpod-conmon-8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e.scope.
Nov 25 11:01:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:01:58 np0005535469 podman[134149]: 2025-11-25 16:01:58.995597664 +0000 UTC m=+0.231657522 container init 8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:01:58 np0005535469 python3.9[134137]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 11:01:59 np0005535469 podman[134149]: 2025-11-25 16:01:59.003158976 +0000 UTC m=+0.239218814 container start 8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:01:59 np0005535469 xenodochial_mestorf[134165]: 167 167
Nov 25 11:01:59 np0005535469 systemd[1]: libpod-8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e.scope: Deactivated successfully.
Nov 25 11:01:59 np0005535469 podman[134149]: 2025-11-25 16:01:59.028987507 +0000 UTC m=+0.265047355 container attach 8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:01:59 np0005535469 podman[134149]: 2025-11-25 16:01:59.030050005 +0000 UTC m=+0.266109833 container died 8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:01:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-beddc5f3d6845fef3a135e9394cd2327e82bf0b76b96a89c1e131287f3a54771-merged.mount: Deactivated successfully.
Nov 25 11:01:59 np0005535469 podman[134149]: 2025-11-25 16:01:59.45462877 +0000 UTC m=+0.690688588 container remove 8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:01:59 np0005535469 systemd[1]: libpod-conmon-8c41bd3bb90af9f2ef2f3b4c791fde9c9adbfd83eaf5f6b7364fc4d1b51da83e.scope: Deactivated successfully.
Nov 25 11:01:59 np0005535469 podman[134243]: 2025-11-25 16:01:59.696255726 +0000 UTC m=+0.086703471 container create afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:01:59 np0005535469 podman[134243]: 2025-11-25 16:01:59.634149044 +0000 UTC m=+0.024596839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:01:59 np0005535469 systemd[1]: Started libpod-conmon-afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3.scope.
Nov 25 11:01:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:01:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa45167cb72a710f4ff2a337fefd3e3359d43ee4c90ece95d5cc35ac19728c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa45167cb72a710f4ff2a337fefd3e3359d43ee4c90ece95d5cc35ac19728c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa45167cb72a710f4ff2a337fefd3e3359d43ee4c90ece95d5cc35ac19728c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aa45167cb72a710f4ff2a337fefd3e3359d43ee4c90ece95d5cc35ac19728c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:59 np0005535469 podman[134243]: 2025-11-25 16:01:59.841582246 +0000 UTC m=+0.232030011 container init afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:01:59 np0005535469 podman[134243]: 2025-11-25 16:01:59.848761169 +0000 UTC m=+0.239208914 container start afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 11:01:59 np0005535469 podman[134243]: 2025-11-25 16:01:59.853903515 +0000 UTC m=+0.244351260 container attach afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:01:59 np0005535469 python3.9[134283]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 11:02:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]: {
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "osd_id": 1,
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "type": "bluestore"
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:    },
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "osd_id": 2,
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "type": "bluestore"
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:    },
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "osd_id": 0,
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:        "type": "bluestore"
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]:    }
Nov 25 11:02:00 np0005535469 crazy_bohr[134288]: }
Nov 25 11:02:00 np0005535469 systemd[1]: libpod-afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3.scope: Deactivated successfully.
Nov 25 11:02:00 np0005535469 podman[134243]: 2025-11-25 16:02:00.807742995 +0000 UTC m=+1.198190740 container died afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:02:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9aa45167cb72a710f4ff2a337fefd3e3359d43ee4c90ece95d5cc35ac19728c6-merged.mount: Deactivated successfully.
Nov 25 11:02:00 np0005535469 podman[134243]: 2025-11-25 16:02:00.873768092 +0000 UTC m=+1.264215847 container remove afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:02:00 np0005535469 systemd[1]: libpod-conmon-afe9f76accdd1a38459ecfbd2ba7de19731ae039abf2c6d46ae43cecd53348c3.scope: Deactivated successfully.
Nov 25 11:02:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:02:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:02:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:02:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:02:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 78a34150-45dd-4779-a45c-b4289d2f5f1f does not exist
Nov 25 11:02:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d29fb6e7-d138-4caf-aa72-9a84a5a99708 does not exist
Nov 25 11:02:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:02:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:02:02 np0005535469 python3.9[134533]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:02:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:03 np0005535469 python3.9[134684]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 11:02:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:04 np0005535469 python3.9[134834]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:02:05 np0005535469 python3.9[134984]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:02:05 np0005535469 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 11:02:05 np0005535469 systemd[1]: session-43.scope: Consumed 6.119s CPU time.
Nov 25 11:02:05 np0005535469 systemd-logind[791]: Session 43 logged out. Waiting for processes to exit.
Nov 25 11:02:05 np0005535469 systemd-logind[791]: Removed session 43.
Nov 25 11:02:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:02:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:11 np0005535469 systemd-logind[791]: New session 44 of user zuul.
Nov 25 11:02:11 np0005535469 systemd[1]: Started Session 44 of User zuul.
Nov 25 11:02:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:12 np0005535469 python3.9[135162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:02:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:14 np0005535469 python3.9[135318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:15 np0005535469 python3.9[135470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:15 np0005535469 python3.9[135622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:16 np0005535469 python3.9[135745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086535.3069205-65-243678897492902/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5eb2450f2a3c0cf71a3dea9ce0bd6d38eb2309cc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:17 np0005535469 python3.9[135897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:18 np0005535469 python3.9[136020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086536.9626837-65-97101271558185/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=325425734b9d9786ff7dd6f1cc3aa04c2306340b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:19 np0005535469 python3.9[136172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:19 np0005535469 python3.9[136295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086538.4899156-65-93048501629399/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9ec6485c844f7c40f604ba67d744525060ceb3fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:20 np0005535469 python3.9[136447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:21 np0005535469 python3.9[136599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:21 np0005535469 python3.9[136751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:22 np0005535469 python3.9[136874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086541.3262825-124-178737958454289/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=33314c62be24529e1885d405476116308431b93a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:23 np0005535469 python3.9[137026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:23 np0005535469 python3.9[137149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086542.573007-124-123373198493587/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=58ee2c9ae6c4a704a245e9469b5defe0fb815b85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:24 np0005535469 python3.9[137301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:25 np0005535469 python3.9[137424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086544.0324144-124-171282983496942/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=241329017f562e56308fe0ec5ccfd0e65c948130 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:25 np0005535469 python3.9[137576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:26 np0005535469 python3.9[137728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:27 np0005535469 python3.9[137880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:27 np0005535469 python3.9[138003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086546.69139-183-121832408183437/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d10c6e516a39c4ab4907efceb44991b06a618460 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:28 np0005535469 python3.9[138155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:29 np0005535469 python3.9[138278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086548.0617838-183-53301943649271/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=58ee2c9ae6c4a704a245e9469b5defe0fb815b85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:29 np0005535469 python3.9[138430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:30 np0005535469 python3.9[138553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086549.4045448-183-27716139942327/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7341592e3775bbc81a65220d3cfd9dd49867ee19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:31 np0005535469 python3.9[138705]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:32 np0005535469 python3.9[138857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:33 np0005535469 python3.9[138980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086552.0851843-251-153702910857815/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:33 np0005535469 python3.9[139132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:34 np0005535469 python3.9[139284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:35 np0005535469 python3.9[139407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086554.119618-275-79746334445082/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:36 np0005535469 python3.9[139559]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:36 np0005535469 python3.9[139711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:37 np0005535469 python3.9[139834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086556.4529114-299-84810487397796/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:38 np0005535469 python3.9[139986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:38 np0005535469 python3.9[140138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:39 np0005535469 python3.9[140261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086558.467582-323-198413467611428/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:02:39
Nov 25 11:02:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:02:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:02:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 11:02:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:40 np0005535469 python3.9[140413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:02:40 np0005535469 python3.9[140565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:41 np0005535469 python3.9[140688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086560.374985-347-268495048734415/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:42 np0005535469 python3.9[140840]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:02:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:42 np0005535469 python3.9[140992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:43 np0005535469 python3.9[141115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086562.3705704-371-122164797154517/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=faad1a4d90713afcfc116f7cdc6943e028def941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:43 np0005535469 systemd-logind[791]: Session 44 logged out. Waiting for processes to exit.
Nov 25 11:02:43 np0005535469 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 11:02:43 np0005535469 systemd[1]: session-44.scope: Consumed 24.378s CPU time.
Nov 25 11:02:43 np0005535469 systemd-logind[791]: Removed session 44.
Nov 25 11:02:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:49 np0005535469 systemd-logind[791]: New session 45 of user zuul.
Nov 25 11:02:49 np0005535469 systemd[1]: Started Session 45 of User zuul.
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:50 np0005535469 python3.9[141295]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:02:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:02:51 np0005535469 python3.9[141447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:52 np0005535469 python3.9[141570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086570.733805-34-261107533530787/.source.conf _original_basename=ceph.conf follow=False checksum=04352d8678d0df29c446464b71cb3b643b7b6cee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:52 np0005535469 python3.9[141722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:02:53 np0005535469 python3.9[141845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086572.2703555-34-154515194985798/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=bf49f99c554355929b34667e0fb24fbf9e247d1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:02:53 np0005535469 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 11:02:53 np0005535469 systemd[1]: session-45.scope: Consumed 2.800s CPU time.
Nov 25 11:02:53 np0005535469 systemd-logind[791]: Session 45 logged out. Waiting for processes to exit.
Nov 25 11:02:53 np0005535469 systemd-logind[791]: Removed session 45.
Nov 25 11:02:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:02:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:02:59 np0005535469 systemd-logind[791]: New session 46 of user zuul.
Nov 25 11:02:59 np0005535469 systemd[1]: Started Session 46 of User zuul.
Nov 25 11:03:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:00 np0005535469 python3.9[142023]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:03:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:03:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:03:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:01 np0005535469 python3.9[142399]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 73f95a95-b1d6-494d-85c4-8c9374a073b4 does not exist
Nov 25 11:03:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5bf82ed1-c4a9-4064-b9ad-cfcf632233ea does not exist
Nov 25 11:03:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fca0b1c1-0700-4a16-a88e-74d2873e2c86 does not exist
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:03:02 np0005535469 python3.9[142655]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.815983965 +0000 UTC m=+0.054511370 container create 25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 11:03:02 np0005535469 systemd[1]: Started libpod-conmon-25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2.scope.
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.785849892 +0000 UTC m=+0.024377337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:03:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.91034819 +0000 UTC m=+0.148875585 container init 25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.918328688 +0000 UTC m=+0.156856053 container start 25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.921508765 +0000 UTC m=+0.160036130 container attach 25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mestorf, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:03:02 np0005535469 systemd[1]: libpod-25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2.scope: Deactivated successfully.
Nov 25 11:03:02 np0005535469 focused_mestorf[142817]: 167 167
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.924490456 +0000 UTC m=+0.163017821 container died 25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:03:02 np0005535469 conmon[142817]: conmon 25d3c620a225703fc26f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2.scope/container/memory.events
Nov 25 11:03:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d93c255b55c099c7df1abbea6a046b801a1e59edbf1e559191c7be98d053836c-merged.mount: Deactivated successfully.
Nov 25 11:03:02 np0005535469 podman[142750]: 2025-11-25 16:03:02.972789375 +0000 UTC m=+0.211316750 container remove 25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:03:02 np0005535469 systemd[1]: libpod-conmon-25d3c620a225703fc26f9e473f3e4d1bd221f8456e35d1bc8d2577a3268359d2.scope: Deactivated successfully.
Nov 25 11:03:03 np0005535469 podman[142914]: 2025-11-25 16:03:03.140304387 +0000 UTC m=+0.043210530 container create 82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poitras, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:03:03 np0005535469 systemd[1]: Started libpod-conmon-82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095.scope.
Nov 25 11:03:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:03:03 np0005535469 podman[142914]: 2025-11-25 16:03:03.120004593 +0000 UTC m=+0.022910776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:03:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16df0117ddfbfc52928dea8ed24800c302689adb858f34fb53841a9a7dbe053/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16df0117ddfbfc52928dea8ed24800c302689adb858f34fb53841a9a7dbe053/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16df0117ddfbfc52928dea8ed24800c302689adb858f34fb53841a9a7dbe053/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16df0117ddfbfc52928dea8ed24800c302689adb858f34fb53841a9a7dbe053/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16df0117ddfbfc52928dea8ed24800c302689adb858f34fb53841a9a7dbe053/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:03 np0005535469 podman[142914]: 2025-11-25 16:03:03.22797038 +0000 UTC m=+0.130876543 container init 82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:03:03 np0005535469 podman[142914]: 2025-11-25 16:03:03.237539861 +0000 UTC m=+0.140446014 container start 82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poitras, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:03:03 np0005535469 podman[142914]: 2025-11-25 16:03:03.241035676 +0000 UTC m=+0.143941849 container attach 82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 11:03:03 np0005535469 python3.9[142909]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:03:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:04 np0005535469 python3.9[143096]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 11:03:04 np0005535469 compassionate_poitras[142931]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:03:04 np0005535469 compassionate_poitras[142931]: --> relative data size: 1.0
Nov 25 11:03:04 np0005535469 compassionate_poitras[142931]: --> All data devices are unavailable
Nov 25 11:03:04 np0005535469 systemd[1]: libpod-82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095.scope: Deactivated successfully.
Nov 25 11:03:04 np0005535469 podman[142914]: 2025-11-25 16:03:04.254989103 +0000 UTC m=+1.157895256 container died 82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poitras, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:03:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b16df0117ddfbfc52928dea8ed24800c302689adb858f34fb53841a9a7dbe053-merged.mount: Deactivated successfully.
Nov 25 11:03:04 np0005535469 podman[142914]: 2025-11-25 16:03:04.321415877 +0000 UTC m=+1.224322030 container remove 82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:03:04 np0005535469 systemd[1]: libpod-conmon-82b9c16b91cb143db22119bab166e19b9faa046527b104e989c74d83c7175095.scope: Deactivated successfully.
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:05.013123668 +0000 UTC m=+0.057192992 container create b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:03:05 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 11:03:05 np0005535469 systemd[1]: Started libpod-conmon-b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f.scope.
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:04.977696 +0000 UTC m=+0.021765344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:03:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:05.141107762 +0000 UTC m=+0.185177176 container init b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:05.15204116 +0000 UTC m=+0.196110524 container start b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:03:05 np0005535469 wonderful_feistel[143282]: 167 167
Nov 25 11:03:05 np0005535469 systemd[1]: libpod-b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f.scope: Deactivated successfully.
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:05.267967944 +0000 UTC m=+0.312037368 container attach b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_feistel, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:05.268845308 +0000 UTC m=+0.312914672 container died b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 11:03:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f62655640328180770a36a616fea3a74a727be296aae380f3528156633290c7c-merged.mount: Deactivated successfully.
Nov 25 11:03:05 np0005535469 podman[143265]: 2025-11-25 16:03:05.396977375 +0000 UTC m=+0.441046699 container remove b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_feistel, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:03:05 np0005535469 systemd[1]: libpod-conmon-b853b3500b68c9c87ffdee62499e3b4a18417e7ce4dc9d7b255886e14c7ca85f.scope: Deactivated successfully.
Nov 25 11:03:05 np0005535469 podman[143310]: 2025-11-25 16:03:05.570251195 +0000 UTC m=+0.043320614 container create 9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:03:05 np0005535469 systemd[1]: Started libpod-conmon-9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f.scope.
Nov 25 11:03:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:03:05 np0005535469 podman[143310]: 2025-11-25 16:03:05.549420326 +0000 UTC m=+0.022489765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:03:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a73c3fafe03769abf7ef7a1cf9af58c9c3befee7cc9ee3567b86c9c35c1dbf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a73c3fafe03769abf7ef7a1cf9af58c9c3befee7cc9ee3567b86c9c35c1dbf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a73c3fafe03769abf7ef7a1cf9af58c9c3befee7cc9ee3567b86c9c35c1dbf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a73c3fafe03769abf7ef7a1cf9af58c9c3befee7cc9ee3567b86c9c35c1dbf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:05 np0005535469 podman[143310]: 2025-11-25 16:03:05.66715543 +0000 UTC m=+0.140224949 container init 9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:03:05 np0005535469 podman[143310]: 2025-11-25 16:03:05.679021245 +0000 UTC m=+0.152090714 container start 9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:03:05 np0005535469 podman[143310]: 2025-11-25 16:03:05.715895041 +0000 UTC m=+0.188964590 container attach 9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:03:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:06 np0005535469 python3.9[143483]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]: {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:    "0": [
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:        {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "devices": [
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "/dev/loop3"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            ],
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_name": "ceph_lv0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_size": "21470642176",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "name": "ceph_lv0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "tags": {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cluster_name": "ceph",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.crush_device_class": "",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.encrypted": "0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osd_id": "0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.type": "block",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.vdo": "0"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            },
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "type": "block",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "vg_name": "ceph_vg0"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:        }
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:    ],
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:    "1": [
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:        {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "devices": [
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "/dev/loop4"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            ],
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_name": "ceph_lv1",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_size": "21470642176",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "name": "ceph_lv1",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "tags": {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cluster_name": "ceph",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.crush_device_class": "",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.encrypted": "0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osd_id": "1",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.type": "block",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.vdo": "0"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            },
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "type": "block",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "vg_name": "ceph_vg1"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:        }
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:    ],
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:    "2": [
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:        {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "devices": [
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "/dev/loop5"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            ],
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_name": "ceph_lv2",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_size": "21470642176",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "name": "ceph_lv2",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "tags": {
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.cluster_name": "ceph",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.crush_device_class": "",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.encrypted": "0",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osd_id": "2",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.type": "block",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:                "ceph.vdo": "0"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            },
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "type": "block",
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:            "vg_name": "ceph_vg2"
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:        }
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]:    ]
Nov 25 11:03:06 np0005535469 eager_engelbart[143351]: }
Nov 25 11:03:06 np0005535469 systemd[1]: libpod-9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f.scope: Deactivated successfully.
Nov 25 11:03:06 np0005535469 podman[143310]: 2025-11-25 16:03:06.474505608 +0000 UTC m=+0.947575027 container died 9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:03:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3a73c3fafe03769abf7ef7a1cf9af58c9c3befee7cc9ee3567b86c9c35c1dbf4-merged.mount: Deactivated successfully.
Nov 25 11:03:06 np0005535469 podman[143310]: 2025-11-25 16:03:06.531042581 +0000 UTC m=+1.004112000 container remove 9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:03:06 np0005535469 systemd[1]: libpod-conmon-9f842eca0535da90a8ae3b97190ad2bc86f040b81d7bb647517460b14f3bf52f.scope: Deactivated successfully.
Nov 25 11:03:07 np0005535469 podman[143724]: 2025-11-25 16:03:07.15123138 +0000 UTC m=+0.049181143 container create 55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:03:07 np0005535469 systemd[1]: Started libpod-conmon-55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4.scope.
Nov 25 11:03:07 np0005535469 podman[143724]: 2025-11-25 16:03:07.131263605 +0000 UTC m=+0.029213378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:03:07 np0005535469 python3.9[143709]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:03:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:03:07 np0005535469 podman[143724]: 2025-11-25 16:03:07.262110537 +0000 UTC m=+0.160060340 container init 55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ardinghelli, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:03:07 np0005535469 podman[143724]: 2025-11-25 16:03:07.271684288 +0000 UTC m=+0.169634051 container start 55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ardinghelli, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:03:07 np0005535469 vigilant_ardinghelli[143741]: 167 167
Nov 25 11:03:07 np0005535469 systemd[1]: libpod-55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4.scope: Deactivated successfully.
Nov 25 11:03:07 np0005535469 podman[143724]: 2025-11-25 16:03:07.278889295 +0000 UTC m=+0.176839108 container attach 55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 11:03:07 np0005535469 podman[143724]: 2025-11-25 16:03:07.279784599 +0000 UTC m=+0.177734362 container died 55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:03:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-936a0fe04f868658d1f850d6e6aa91c7d66800b0d742029cc5260cb4ecf31e69-merged.mount: Deactivated successfully.
Nov 25 11:03:08 np0005535469 podman[143724]: 2025-11-25 16:03:08.090019915 +0000 UTC m=+0.987969688 container remove 55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ardinghelli, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 11:03:08 np0005535469 systemd[1]: libpod-conmon-55a78af419a07a498663ee1f0214e755ca076bea6ce5915f0da41efd6ddd67d4.scope: Deactivated successfully.
Nov 25 11:03:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:08 np0005535469 podman[143765]: 2025-11-25 16:03:08.26492724 +0000 UTC m=+0.061802138 container create 7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 11:03:08 np0005535469 systemd[1]: Started libpod-conmon-7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd.scope.
Nov 25 11:03:08 np0005535469 podman[143765]: 2025-11-25 16:03:08.234426157 +0000 UTC m=+0.031301005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:03:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:03:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672553e7bf312c13ce7ee19708cf618970a072c93b08e3b561e52c4ad09ba45c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672553e7bf312c13ce7ee19708cf618970a072c93b08e3b561e52c4ad09ba45c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672553e7bf312c13ce7ee19708cf618970a072c93b08e3b561e52c4ad09ba45c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672553e7bf312c13ce7ee19708cf618970a072c93b08e3b561e52c4ad09ba45c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:03:08 np0005535469 podman[143765]: 2025-11-25 16:03:08.37046274 +0000 UTC m=+0.167337578 container init 7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:03:08 np0005535469 podman[143765]: 2025-11-25 16:03:08.381028618 +0000 UTC m=+0.177903416 container start 7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:03:08 np0005535469 podman[143765]: 2025-11-25 16:03:08.388568345 +0000 UTC m=+0.185443253 container attach 7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:03:09 np0005535469 pensive_galois[143782]: {
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "osd_id": 1,
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "type": "bluestore"
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:    },
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "osd_id": 2,
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "type": "bluestore"
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:    },
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "osd_id": 0,
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:        "type": "bluestore"
Nov 25 11:03:09 np0005535469 pensive_galois[143782]:    }
Nov 25 11:03:09 np0005535469 pensive_galois[143782]: }
Nov 25 11:03:09 np0005535469 systemd[1]: libpod-7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd.scope: Deactivated successfully.
Nov 25 11:03:09 np0005535469 systemd[1]: libpod-7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd.scope: Consumed 1.003s CPU time.
Nov 25 11:03:09 np0005535469 podman[143765]: 2025-11-25 16:03:09.383138823 +0000 UTC m=+1.180013621 container died 7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 11:03:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-672553e7bf312c13ce7ee19708cf618970a072c93b08e3b561e52c4ad09ba45c-merged.mount: Deactivated successfully.
Nov 25 11:03:09 np0005535469 podman[143765]: 2025-11-25 16:03:09.438892264 +0000 UTC m=+1.235767062 container remove 7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:03:09 np0005535469 systemd[1]: libpod-conmon-7e386bb542fba48ed569336566e93ab7d77e5ca241f7388d657934d47fb9e4cd.scope: Deactivated successfully.
Nov 25 11:03:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:03:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:03:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3a5841fe-030b-4815-82f1-650da0ac5d80 does not exist
Nov 25 11:03:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1041307b-7f07-45c0-83da-edd5d7f1371c does not exist
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:03:10 np0005535469 python3.9[144029]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:03:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:03:11 np0005535469 python3[144184]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 11:03:11 np0005535469 python3.9[144336]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.229108) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086592229188, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1031, "num_deletes": 251, "total_data_size": 1549223, "memory_usage": 1573536, "flush_reason": "Manual Compaction"}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086592244266, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1514168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8803, "largest_seqno": 9833, "table_properties": {"data_size": 1509141, "index_size": 2551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10306, "raw_average_key_size": 18, "raw_value_size": 1499114, "raw_average_value_size": 2735, "num_data_blocks": 119, "num_entries": 548, "num_filter_entries": 548, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086494, "oldest_key_time": 1764086494, "file_creation_time": 1764086592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 15237 microseconds, and 8448 cpu microseconds.
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.244350) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1514168 bytes OK
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.244383) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.246177) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.246201) EVENT_LOG_v1 {"time_micros": 1764086592246194, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.246233) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1544371, prev total WAL file size 1544371, number of live WAL files 2.
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.247367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1478KB)], [23(6565KB)]
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086592247443, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8237648, "oldest_snapshot_seqno": -1}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3402 keys, 6725952 bytes, temperature: kUnknown
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086592287203, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6725952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6700409, "index_size": 15962, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 82961, "raw_average_key_size": 24, "raw_value_size": 6636036, "raw_average_value_size": 1950, "num_data_blocks": 698, "num_entries": 3402, "num_filter_entries": 3402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764086592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.287442) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6725952 bytes
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.288312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.8 rd, 168.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.4 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(9.9) write-amplify(4.4) OK, records in: 3916, records dropped: 514 output_compression: NoCompression
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.288332) EVENT_LOG_v1 {"time_micros": 1764086592288322, "job": 8, "event": "compaction_finished", "compaction_time_micros": 39839, "compaction_time_cpu_micros": 17971, "output_level": 6, "num_output_files": 1, "total_output_size": 6725952, "num_input_records": 3916, "num_output_records": 3402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086592288726, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086592290053, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.247240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.290129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.290134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.290136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.290137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:03:12.290139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:03:12 np0005535469 python3.9[144488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:12 np0005535469 python3.9[144566]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:13 np0005535469 python3.9[144718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:14 np0005535469 python3.9[144796]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yepboddy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:14 np0005535469 python3.9[144948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:15 np0005535469 python3.9[145026]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:16 np0005535469 python3.9[145178]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:17 np0005535469 python3[145331]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 11:03:17 np0005535469 python3.9[145483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:18 np0005535469 python3.9[145608]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086597.3036516-157-51774050017081/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:19 np0005535469 python3.9[145760]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:19 np0005535469 python3.9[145885]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086598.7095919-172-129427969783043/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:20 np0005535469 python3.9[146037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:21 np0005535469 python3.9[146162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086600.089614-187-216927159782520/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:21 np0005535469 python3.9[146314]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:22 np0005535469 python3.9[146439]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086601.4086154-202-262399346173749/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:23 np0005535469 python3.9[146591]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:23 np0005535469 python3.9[146716]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086602.6349077-217-27365128822921/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:24 np0005535469 python3.9[146868]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:25 np0005535469 python3.9[147020]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:25 np0005535469 python3.9[147175]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:26 np0005535469 python3.9[147327]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:27 np0005535469 python3.9[147480]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:03:27 np0005535469 python3.9[147634]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:28 np0005535469 python3.9[147789]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:29 np0005535469 python3.9[147939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:03:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:30 np0005535469 python3.9[148092]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:30 np0005535469 ovs-vsctl[148093]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 11:03:31 np0005535469 python3.9[148245]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:31 np0005535469 python3.9[148400]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:03:31 np0005535469 ovs-vsctl[148401]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 11:03:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:32 np0005535469 python3.9[148551]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:03:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:33 np0005535469 python3.9[148705]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:33 np0005535469 python3.9[148857]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:34 np0005535469 python3.9[148935]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:34 np0005535469 python3.9[149087]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:35 np0005535469 python3.9[149165]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:36 np0005535469 python3.9[149317]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:36 np0005535469 python3.9[149469]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:37 np0005535469 python3.9[149547]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:37 np0005535469 python3.9[149699]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:38 np0005535469 python3.9[149777]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:39 np0005535469 python3.9[149929]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:03:39 np0005535469 systemd[1]: Reloading.
Nov 25 11:03:39 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:03:39 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:03:39 np0005535469 python3.9[150118]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:03:39
Nov 25 11:03:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:03:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:03:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 25 11:03:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:03:40 np0005535469 python3.9[150196]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:03:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5332 writes, 23K keys, 5332 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5332 writes, 722 syncs, 7.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5332 writes, 23K keys, 5332 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s#012Interval WAL: 5332 writes, 722 syncs, 7.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 11:03:40 np0005535469 python3.9[150348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:41 np0005535469 python3.9[150426]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:42 np0005535469 python3.9[150578]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:03:42 np0005535469 systemd[1]: Reloading.
Nov 25 11:03:42 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:03:42 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:03:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:42 np0005535469 systemd[1]: Starting Create netns directory...
Nov 25 11:03:42 np0005535469 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 11:03:42 np0005535469 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 11:03:42 np0005535469 systemd[1]: Finished Create netns directory.
Nov 25 11:03:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:43 np0005535469 python3.9[150771]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:43 np0005535469 python3.9[150923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:44 np0005535469 python3.9[151046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086623.1996264-468-56058645563241/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:44 np0005535469 python3.9[151198]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:03:45 np0005535469 python3.9[151350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:03:46 np0005535469 python3.9[151473]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086625.2074752-493-235299705410053/.source.json _original_basename=.46iq4ttj follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:46 np0005535469 python3.9[151625]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:03:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:49 np0005535469 python3.9[152052]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 11:03:49 np0005535469 python3.9[152204]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 11:03:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:03:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.2 total, 600.0 interval#012Cumulative writes: 6432 writes, 27K keys, 6432 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6432 writes, 1053 syncs, 6.11 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6432 writes, 27K keys, 6432 commit groups, 1.0 writes per commit group, ingest: 19.21 MB, 0.03 MB/s#012Interval WAL: 6432 writes, 1053 syncs, 6.11 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:50 np0005535469 python3.9[152356]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:03:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:03:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:52 np0005535469 python3[152534]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 11:03:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:03:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:03:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:00 np0005535469 podman[152548]: 2025-11-25 16:04:00.482757425 +0000 UTC m=+8.210358874 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 11:04:00 np0005535469 podman[152663]: 2025-11-25 16:04:00.636267827 +0000 UTC m=+0.029369457 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 11:04:00 np0005535469 podman[152663]: 2025-11-25 16:04:00.757992278 +0000 UTC m=+0.151093878 container create ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:04:00 np0005535469 python3[152534]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 11:04:01 np0005535469 python3.9[152853]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:04:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:02 np0005535469 python3.9[153007]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:03 np0005535469 python3.9[153083]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:04:03 np0005535469 python3.9[153234]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764086643.180267-581-258079162097022/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:04 np0005535469 python3.9[153310]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:04:04 np0005535469 systemd[1]: Reloading.
Nov 25 11:04:04 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:04:04 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:04:05 np0005535469 python3.9[153421]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:04:05 np0005535469 systemd[1]: Reloading.
Nov 25 11:04:05 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:04:05 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:04:05 np0005535469 systemd[1]: Starting ovn_controller container...
Nov 25 11:04:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28f99e7ab5a66612e1d1ace18a91f5a87e22a268a6ee860dbbbc6e16cd0251a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:05 np0005535469 systemd[1]: Started /usr/bin/podman healthcheck run ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b.
Nov 25 11:04:06 np0005535469 podman[153462]: 2025-11-25 16:04:06.02775291 +0000 UTC m=+0.278062989 container init ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + sudo -E kolla_set_configs
Nov 25 11:04:06 np0005535469 podman[153462]: 2025-11-25 16:04:06.050077309 +0000 UTC m=+0.300387388 container start ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:04:06 np0005535469 edpm-start-podman-container[153462]: ovn_controller
Nov 25 11:04:06 np0005535469 systemd[1]: Created slice User Slice of UID 0.
Nov 25 11:04:06 np0005535469 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 11:04:06 np0005535469 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 11:04:06 np0005535469 systemd[1]: Starting User Manager for UID 0...
Nov 25 11:04:06 np0005535469 edpm-start-podman-container[153461]: Creating additional drop-in dependency for "ovn_controller" (ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b)
Nov 25 11:04:06 np0005535469 systemd[1]: Reloading.
Nov 25 11:04:06 np0005535469 podman[153484]: 2025-11-25 16:04:06.156717703 +0000 UTC m=+0.096623110 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:04:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:06 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:04:06 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:04:06 np0005535469 systemd[153516]: Queued start job for default target Main User Target.
Nov 25 11:04:06 np0005535469 systemd[153516]: Created slice User Application Slice.
Nov 25 11:04:06 np0005535469 systemd[153516]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 11:04:06 np0005535469 systemd[153516]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 11:04:06 np0005535469 systemd[153516]: Reached target Paths.
Nov 25 11:04:06 np0005535469 systemd[153516]: Reached target Timers.
Nov 25 11:04:06 np0005535469 systemd[153516]: Starting D-Bus User Message Bus Socket...
Nov 25 11:04:06 np0005535469 systemd[153516]: Starting Create User's Volatile Files and Directories...
Nov 25 11:04:06 np0005535469 systemd[153516]: Finished Create User's Volatile Files and Directories.
Nov 25 11:04:06 np0005535469 systemd[153516]: Listening on D-Bus User Message Bus Socket.
Nov 25 11:04:06 np0005535469 systemd[153516]: Reached target Sockets.
Nov 25 11:04:06 np0005535469 systemd[153516]: Reached target Basic System.
Nov 25 11:04:06 np0005535469 systemd[153516]: Reached target Main User Target.
Nov 25 11:04:06 np0005535469 systemd[153516]: Startup finished in 137ms.
Nov 25 11:04:06 np0005535469 systemd[1]: Started User Manager for UID 0.
Nov 25 11:04:06 np0005535469 systemd[1]: Started ovn_controller container.
Nov 25 11:04:06 np0005535469 systemd[1]: ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b-37857e7b0be09947.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 11:04:06 np0005535469 systemd[1]: ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b-37857e7b0be09947.service: Failed with result 'exit-code'.
Nov 25 11:04:06 np0005535469 systemd[1]: Started Session c1 of User root.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: INFO:__main__:Validating config file
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: INFO:__main__:Writing out command to execute
Nov 25 11:04:06 np0005535469 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: ++ cat /run_command
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + ARGS=
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + sudo kolla_copy_cacerts
Nov 25 11:04:06 np0005535469 systemd[1]: Started Session c2 of User root.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + [[ ! -n '' ]]
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + . kolla_extend_start
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + umask 0022
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 11:04:06 np0005535469 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.5714] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.5721] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 11:04:06 np0005535469 kernel: br-int: entered promiscuous mode
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.5730] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 11:04:06 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.5735] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 11:04:06 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.5737] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 11:04:06 np0005535469 systemd-udevd[153629]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 11:04:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.6324] manager: (ovn-7c405e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 11:04:06 np0005535469 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 11:04:06 np0005535469 systemd-udevd[153634]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.6572] device (genev_sys_6081): carrier: link connected
Nov 25 11:04:06 np0005535469 NetworkManager[48891]: <info>  [1764086646.6574] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 11:04:07 np0005535469 python3.9[153742]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:04:07 np0005535469 ovs-vsctl[153743]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 11:04:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:04:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.5 total, 600.0 interval#012Cumulative writes: 5498 writes, 23K keys, 5498 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5498 writes, 754 syncs, 7.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5498 writes, 23K keys, 5498 commit groups, 1.0 writes per commit group, ingest: 18.34 MB, 0.03 MB/s#012Interval WAL: 5498 writes, 754 syncs, 7.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 11:04:07 np0005535469 python3.9[153895]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:04:07 np0005535469 ovs-vsctl[153897]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 11:04:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:08 np0005535469 python3.9[154050]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:04:08 np0005535469 ovs-vsctl[154051]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 11:04:09 np0005535469 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 11:04:09 np0005535469 systemd[1]: session-46.scope: Consumed 54.958s CPU time.
Nov 25 11:04:09 np0005535469 systemd-logind[791]: Session 46 logged out. Waiting for processes to exit.
Nov 25 11:04:09 np0005535469 systemd-logind[791]: Removed session 46.
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7119cca2-49c0-4ddd-80e6-5eae890ab637 does not exist
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7ecffce8-6344-440d-8bc2-1e08e2937fbc does not exist
Nov 25 11:04:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cc421b6a-90fa-4666-a0e8-42bd3ea86695 does not exist
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:04:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:04:10 np0005535469 podman[154344]: 2025-11-25 16:04:10.88749146 +0000 UTC m=+0.022067243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:04:10 np0005535469 podman[154344]: 2025-11-25 16:04:10.98905578 +0000 UTC m=+0.123631543 container create 474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:04:11 np0005535469 systemd[1]: Started libpod-conmon-474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1.scope.
Nov 25 11:04:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:11 np0005535469 podman[154344]: 2025-11-25 16:04:11.161983243 +0000 UTC m=+0.296559016 container init 474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 25 11:04:11 np0005535469 podman[154344]: 2025-11-25 16:04:11.168498265 +0000 UTC m=+0.303074018 container start 474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:04:11 np0005535469 silly_bartik[154360]: 167 167
Nov 25 11:04:11 np0005535469 systemd[1]: libpod-474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1.scope: Deactivated successfully.
Nov 25 11:04:11 np0005535469 podman[154344]: 2025-11-25 16:04:11.195062766 +0000 UTC m=+0.329638539 container attach 474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bartik, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:04:11 np0005535469 podman[154344]: 2025-11-25 16:04:11.196012901 +0000 UTC m=+0.330588654 container died 474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bartik, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:04:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f778f67a686c15413085b2b43c6e58f6e118de8d1bdaea6d094459a6a242452d-merged.mount: Deactivated successfully.
Nov 25 11:04:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:04:11 np0005535469 podman[154344]: 2025-11-25 16:04:11.576026679 +0000 UTC m=+0.710602432 container remove 474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:04:11 np0005535469 systemd[1]: libpod-conmon-474d88944354ad0d19b074ff3cd475dba6fb581bce5ead1738ad8d127b1ccba1.scope: Deactivated successfully.
Nov 25 11:04:11 np0005535469 podman[154385]: 2025-11-25 16:04:11.753615915 +0000 UTC m=+0.053412250 container create 73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:04:11 np0005535469 systemd[1]: Started libpod-conmon-73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4.scope.
Nov 25 11:04:11 np0005535469 podman[154385]: 2025-11-25 16:04:11.724526608 +0000 UTC m=+0.024322953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:04:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace447f54c4494ec58de833cbf2fbd02f392e4beb64f41b4a96096b5d4eb4d3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace447f54c4494ec58de833cbf2fbd02f392e4beb64f41b4a96096b5d4eb4d3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace447f54c4494ec58de833cbf2fbd02f392e4beb64f41b4a96096b5d4eb4d3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace447f54c4494ec58de833cbf2fbd02f392e4beb64f41b4a96096b5d4eb4d3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace447f54c4494ec58de833cbf2fbd02f392e4beb64f41b4a96096b5d4eb4d3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:11 np0005535469 podman[154385]: 2025-11-25 16:04:11.872587365 +0000 UTC m=+0.172383720 container init 73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:04:11 np0005535469 podman[154385]: 2025-11-25 16:04:11.882625139 +0000 UTC m=+0.182421464 container start 73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:04:11 np0005535469 podman[154385]: 2025-11-25 16:04:11.889216084 +0000 UTC m=+0.189012409 container attach 73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:04:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:12 np0005535469 cool_mcnulty[154401]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:04:12 np0005535469 cool_mcnulty[154401]: --> relative data size: 1.0
Nov 25 11:04:12 np0005535469 cool_mcnulty[154401]: --> All data devices are unavailable
Nov 25 11:04:12 np0005535469 systemd[1]: libpod-73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4.scope: Deactivated successfully.
Nov 25 11:04:12 np0005535469 systemd[1]: libpod-73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4.scope: Consumed 1.027s CPU time.
Nov 25 11:04:12 np0005535469 conmon[154401]: conmon 73e9e90c06ab53abfdc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4.scope/container/memory.events
Nov 25 11:04:12 np0005535469 podman[154385]: 2025-11-25 16:04:12.958746116 +0000 UTC m=+1.258542441 container died 73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:04:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ace447f54c4494ec58de833cbf2fbd02f392e4beb64f41b4a96096b5d4eb4d3a-merged.mount: Deactivated successfully.
Nov 25 11:04:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:13 np0005535469 podman[154385]: 2025-11-25 16:04:13.136873778 +0000 UTC m=+1.436670103 container remove 73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:04:13 np0005535469 systemd[1]: libpod-conmon-73e9e90c06ab53abfdc1adf2e34de5b0d178d7fe9ec626034a394f8575969eb4.scope: Deactivated successfully.
Nov 25 11:04:13 np0005535469 podman[154584]: 2025-11-25 16:04:13.697257315 +0000 UTC m=+0.021239422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:04:13 np0005535469 podman[154584]: 2025-11-25 16:04:13.81682022 +0000 UTC m=+0.140802307 container create c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_saha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:04:13 np0005535469 systemd[1]: Started libpod-conmon-c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18.scope.
Nov 25 11:04:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:14 np0005535469 podman[154584]: 2025-11-25 16:04:14.061155828 +0000 UTC m=+0.385137945 container init c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_saha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:04:14 np0005535469 podman[154584]: 2025-11-25 16:04:14.067263819 +0000 UTC m=+0.391245906 container start c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_saha, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:04:14 np0005535469 great_saha[154601]: 167 167
Nov 25 11:04:14 np0005535469 systemd[1]: libpod-c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18.scope: Deactivated successfully.
Nov 25 11:04:14 np0005535469 podman[154584]: 2025-11-25 16:04:14.078552887 +0000 UTC m=+0.402534994 container attach c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 11:04:14 np0005535469 podman[154584]: 2025-11-25 16:04:14.079175893 +0000 UTC m=+0.403158000 container died c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_saha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:04:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-74d73c7d72bbe36ee0d289087bf37ece4ee1271eaf40175e7da93e2aeeda6d4a-merged.mount: Deactivated successfully.
Nov 25 11:04:14 np0005535469 podman[154584]: 2025-11-25 16:04:14.135691484 +0000 UTC m=+0.459673571 container remove c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_saha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:04:14 np0005535469 systemd[1]: libpod-conmon-c6c9b35e641eba5b62aab72e214ed1cc59560d31f713dc6a0a86b9b1e3015d18.scope: Deactivated successfully.
Nov 25 11:04:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:14 np0005535469 podman[154626]: 2025-11-25 16:04:14.300042321 +0000 UTC m=+0.056701247 container create f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sutherland, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:04:14 np0005535469 systemd[1]: Started libpod-conmon-f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96.scope.
Nov 25 11:04:14 np0005535469 podman[154626]: 2025-11-25 16:04:14.262023129 +0000 UTC m=+0.018682075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:04:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4634e1b3ba600e0130f467d70916aee321fadd54c329247e024619784bda132/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4634e1b3ba600e0130f467d70916aee321fadd54c329247e024619784bda132/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4634e1b3ba600e0130f467d70916aee321fadd54c329247e024619784bda132/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4634e1b3ba600e0130f467d70916aee321fadd54c329247e024619784bda132/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:14 np0005535469 podman[154626]: 2025-11-25 16:04:14.51144503 +0000 UTC m=+0.268103986 container init f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sutherland, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:04:14 np0005535469 podman[154626]: 2025-11-25 16:04:14.519999356 +0000 UTC m=+0.276658282 container start f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:04:14 np0005535469 podman[154626]: 2025-11-25 16:04:14.532703931 +0000 UTC m=+0.289362887 container attach f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]: {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:    "0": [
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:        {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "devices": [
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "/dev/loop3"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            ],
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_name": "ceph_lv0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_size": "21470642176",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "name": "ceph_lv0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "tags": {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cluster_name": "ceph",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.crush_device_class": "",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.encrypted": "0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osd_id": "0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.type": "block",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.vdo": "0"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            },
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "type": "block",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "vg_name": "ceph_vg0"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:        }
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:    ],
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:    "1": [
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:        {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "devices": [
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "/dev/loop4"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            ],
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_name": "ceph_lv1",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_size": "21470642176",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "name": "ceph_lv1",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "tags": {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cluster_name": "ceph",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.crush_device_class": "",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.encrypted": "0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osd_id": "1",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.type": "block",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.vdo": "0"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            },
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "type": "block",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "vg_name": "ceph_vg1"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:        }
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:    ],
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:    "2": [
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:        {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "devices": [
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "/dev/loop5"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            ],
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_name": "ceph_lv2",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_size": "21470642176",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "name": "ceph_lv2",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "tags": {
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.cluster_name": "ceph",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.crush_device_class": "",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.encrypted": "0",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osd_id": "2",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.type": "block",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:                "ceph.vdo": "0"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            },
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "type": "block",
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:            "vg_name": "ceph_vg2"
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:        }
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]:    ]
Nov 25 11:04:15 np0005535469 quirky_sutherland[154643]: }
Nov 25 11:04:15 np0005535469 systemd[1]: libpod-f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96.scope: Deactivated successfully.
Nov 25 11:04:15 np0005535469 podman[154626]: 2025-11-25 16:04:15.333517873 +0000 UTC m=+1.090176829 container died f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:04:15 np0005535469 systemd-logind[791]: New session 48 of user zuul.
Nov 25 11:04:15 np0005535469 systemd[1]: Started Session 48 of User zuul.
Nov 25 11:04:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a4634e1b3ba600e0130f467d70916aee321fadd54c329247e024619784bda132-merged.mount: Deactivated successfully.
Nov 25 11:04:15 np0005535469 podman[154626]: 2025-11-25 16:04:15.582439092 +0000 UTC m=+1.339098018 container remove f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:04:15 np0005535469 systemd[1]: libpod-conmon-f0ad7c759c29616f06b57e9bcacef0f366644ac2b568ef13c193d878a29bec96.scope: Deactivated successfully.
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.167512011 +0000 UTC m=+0.042403650 container create a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:04:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:16 np0005535469 systemd[1]: Started libpod-conmon-a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7.scope.
Nov 25 11:04:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.145986193 +0000 UTC m=+0.020877852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.243871356 +0000 UTC m=+0.118763015 container init a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.253224112 +0000 UTC m=+0.128115751 container start a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:04:16 np0005535469 optimistic_johnson[154877]: 167 167
Nov 25 11:04:16 np0005535469 systemd[1]: libpod-a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7.scope: Deactivated successfully.
Nov 25 11:04:16 np0005535469 conmon[154877]: conmon a2457692c453872ecc6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7.scope/container/memory.events
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.2610688 +0000 UTC m=+0.135960469 container attach a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_johnson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.261879001 +0000 UTC m=+0.136770640 container died a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:04:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-877b171b606525ebdf510138aee909a47f56b1a0c661f53f50a156b7d2ef57f0-merged.mount: Deactivated successfully.
Nov 25 11:04:16 np0005535469 podman[154860]: 2025-11-25 16:04:16.332388752 +0000 UTC m=+0.207280401 container remove a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_johnson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:04:16 np0005535469 systemd[1]: libpod-conmon-a2457692c453872ecc6d71b0218bfde5ca2b2e1e74de04fc7375c1a087a764b7.scope: Deactivated successfully.
Nov 25 11:04:16 np0005535469 podman[154947]: 2025-11-25 16:04:16.529402911 +0000 UTC m=+0.083165066 container create 10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:04:16 np0005535469 systemd[1]: Stopping User Manager for UID 0...
Nov 25 11:04:16 np0005535469 systemd[153516]: Activating special unit Exit the Session...
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped target Main User Target.
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped target Basic System.
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped target Paths.
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped target Sockets.
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped target Timers.
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 11:04:16 np0005535469 systemd[153516]: Closed D-Bus User Message Bus Socket.
Nov 25 11:04:16 np0005535469 systemd[153516]: Stopped Create User's Volatile Files and Directories.
Nov 25 11:04:16 np0005535469 systemd[153516]: Removed slice User Application Slice.
Nov 25 11:04:16 np0005535469 systemd[153516]: Reached target Shutdown.
Nov 25 11:04:16 np0005535469 systemd[153516]: Finished Exit the Session.
Nov 25 11:04:16 np0005535469 systemd[153516]: Reached target Exit the Session.
Nov 25 11:04:16 np0005535469 podman[154947]: 2025-11-25 16:04:16.470759343 +0000 UTC m=+0.024521508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:04:16 np0005535469 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 11:04:16 np0005535469 systemd[1]: Stopped User Manager for UID 0.
Nov 25 11:04:16 np0005535469 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 11:04:16 np0005535469 systemd[1]: Started libpod-conmon-10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf.scope.
Nov 25 11:04:16 np0005535469 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 11:04:16 np0005535469 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 11:04:16 np0005535469 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 11:04:16 np0005535469 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 11:04:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:04:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aeb68c97caf8f681a7042a1422942a9bbd7376fe5522350d29bb24597b6691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aeb68c97caf8f681a7042a1422942a9bbd7376fe5522350d29bb24597b6691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aeb68c97caf8f681a7042a1422942a9bbd7376fe5522350d29bb24597b6691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aeb68c97caf8f681a7042a1422942a9bbd7376fe5522350d29bb24597b6691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:16 np0005535469 podman[154947]: 2025-11-25 16:04:16.763102147 +0000 UTC m=+0.316864312 container init 10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:04:16 np0005535469 podman[154947]: 2025-11-25 16:04:16.771798016 +0000 UTC m=+0.325560191 container start 10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:04:16 np0005535469 podman[154947]: 2025-11-25 16:04:16.776155492 +0000 UTC m=+0.329917637 container attach 10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:04:16 np0005535469 python3.9[155013]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]: {
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "osd_id": 1,
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "type": "bluestore"
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:    },
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "osd_id": 2,
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "type": "bluestore"
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:    },
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "osd_id": 0,
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:        "type": "bluestore"
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]:    }
Nov 25 11:04:17 np0005535469 dazzling_brattain[155016]: }
Nov 25 11:04:17 np0005535469 systemd[1]: libpod-10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf.scope: Deactivated successfully.
Nov 25 11:04:17 np0005535469 podman[154947]: 2025-11-25 16:04:17.753888363 +0000 UTC m=+1.307650508 container died 10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:04:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c0aeb68c97caf8f681a7042a1422942a9bbd7376fe5522350d29bb24597b6691-merged.mount: Deactivated successfully.
Nov 25 11:04:17 np0005535469 podman[154947]: 2025-11-25 16:04:17.82086134 +0000 UTC m=+1.374623475 container remove 10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 11:04:17 np0005535469 systemd[1]: libpod-conmon-10d37697a8fab52a677f4a6f964421c1ebfd7eb0b78c185cf9965b9dadd44ecf.scope: Deactivated successfully.
Nov 25 11:04:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:04:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:04:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:04:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:04:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 12f21f1d-45a0-41fa-a47a-2eb3b233abc0 does not exist
Nov 25 11:04:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fdfa0857-9acd-42f6-a72d-ad87f61a9cfb does not exist
Nov 25 11:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:18 np0005535469 python3.9[155244]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:18 np0005535469 python3.9[155422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:04:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:04:19 np0005535469 python3.9[155575]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:20 np0005535469 python3.9[155728]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:20 np0005535469 python3.9[155880]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:21 np0005535469 python3.9[156030]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:04:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:22 np0005535469 python3.9[156182]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 11:04:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:23 np0005535469 python3.9[156332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:24 np0005535469 python3.9[156453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086663.3476367-86-4721308742494/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 11:04:25 np0005535469 python3.9[156604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:25 np0005535469 python3.9[156725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086664.9352558-101-71511746707907/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:26 np0005535469 python3.9[156877]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 11:04:27 np0005535469 python3.9[156961]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:30 np0005535469 python3.9[157114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:04:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:30 np0005535469 python3.9[157267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:31 np0005535469 python3.9[157388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086670.3205526-138-72894467933852/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:31 np0005535469 python3.9[157538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:32 np0005535469 python3.9[157659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086671.44804-138-170483975556944/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:33 np0005535469 python3.9[157809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:34 np0005535469 python3.9[157930]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086673.3863854-182-203136896718214/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:34 np0005535469 python3.9[158080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:35 np0005535469 python3.9[158201]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086674.505566-182-261839983702146/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:36 np0005535469 python3.9[158351]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:04:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:36Z|00025|memory|INFO|17280 kB peak resident set size after 30.0 seconds
Nov 25 11:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:04:36Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 11:04:36 np0005535469 podman[158477]: 2025-11-25 16:04:36.600207509 +0000 UTC m=+0.085393495 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:04:36 np0005535469 python3.9[158520]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:37 np0005535469 python3.9[158682]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:38 np0005535469 python3.9[158760]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:38 np0005535469 python3.9[158912]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:39 np0005535469 python3.9[158990]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:39 np0005535469 python3.9[159142]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:04:39
Nov 25 11:04:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:04:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:04:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Nov 25 11:04:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:04:40 np0005535469 python3.9[159294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:40 np0005535469 python3.9[159372]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:41 np0005535469 python3.9[159524]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:42 np0005535469 python3.9[159602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:42 np0005535469 python3.9[159754]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:04:42 np0005535469 systemd[1]: Reloading.
Nov 25 11:04:42 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:04:42 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:04:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:43 np0005535469 python3.9[159943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:44 np0005535469 python3.9[160021]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:45 np0005535469 python3.9[160173]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:45 np0005535469 python3.9[160251]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:46 np0005535469 python3.9[160403]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:04:46 np0005535469 systemd[1]: Reloading.
Nov 25 11:04:46 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:04:46 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:04:46 np0005535469 systemd[1]: Starting Create netns directory...
Nov 25 11:04:46 np0005535469 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 11:04:46 np0005535469 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 11:04:46 np0005535469 systemd[1]: Finished Create netns directory.
Nov 25 11:04:47 np0005535469 python3.9[160596]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:47 np0005535469 python3.9[160748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:48 np0005535469 python3.9[160871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764086687.4941006-333-37629477405462/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:49 np0005535469 python3.9[161023]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:04:50 np0005535469 python3.9[161175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:50 np0005535469 python3.9[161298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086689.5327096-358-135171531820543/.source.json _original_basename=.tbyowee3 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:04:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:04:51 np0005535469 python3.9[161450]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:04:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:53 np0005535469 python3.9[161877]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 11:04:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:54 np0005535469 python3.9[162029]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 11:04:55 np0005535469 python3.9[162181]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 11:04:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:04:57 np0005535469 python3[162358]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 11:04:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:04:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:06 np0005535469 podman[162371]: 2025-11-25 16:05:06.417225953 +0000 UTC m=+9.219384539 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:05:06 np0005535469 podman[162490]: 2025-11-25 16:05:06.551569627 +0000 UTC m=+0.050530866 container create f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:05:06 np0005535469 podman[162490]: 2025-11-25 16:05:06.527334433 +0000 UTC m=+0.026295702 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:05:06 np0005535469 python3[162358]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:05:07 np0005535469 podman[162653]: 2025-11-25 16:05:07.201583485 +0000 UTC m=+0.152446632 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:05:07 np0005535469 python3.9[162700]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:05:07 np0005535469 python3.9[162861]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:08 np0005535469 python3.9[162937]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:05:09 np0005535469 python3.9[163088]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764086708.4708269-446-209008485501246/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:09 np0005535469 python3.9[163164]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:05:09 np0005535469 systemd[1]: Reloading.
Nov 25 11:05:09 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:05:09 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:05:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:10 np0005535469 python3.9[163276]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:10 np0005535469 systemd[1]: Reloading.
Nov 25 11:05:10 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:05:10 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:05:10 np0005535469 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 11:05:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16545ed3f0c19f0402755e0be3ae51b8340c5333bc9f5ac3468dcd2c6e5e18d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16545ed3f0c19f0402755e0be3ae51b8340c5333bc9f5ac3468dcd2c6e5e18d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:11 np0005535469 systemd[1]: Started /usr/bin/podman healthcheck run f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2.
Nov 25 11:05:11 np0005535469 podman[163317]: 2025-11-25 16:05:11.649429424 +0000 UTC m=+0.779594183 container init f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + sudo -E kolla_set_configs
Nov 25 11:05:11 np0005535469 podman[163317]: 2025-11-25 16:05:11.679604191 +0000 UTC m=+0.809768910 container start f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Validating config file
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Copying service configuration files
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Writing out command to execute
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: ++ cat /run_command
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + CMD=neutron-ovn-metadata-agent
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + ARGS=
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + sudo kolla_copy_cacerts
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + [[ ! -n '' ]]
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + . kolla_extend_start
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + umask 0022
Nov 25 11:05:11 np0005535469 ovn_metadata_agent[163333]: + exec neutron-ovn-metadata-agent
Nov 25 11:05:12 np0005535469 edpm-start-podman-container[163317]: ovn_metadata_agent
Nov 25 11:05:12 np0005535469 edpm-start-podman-container[163316]: Creating additional drop-in dependency for "ovn_metadata_agent" (f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2)
Nov 25 11:05:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:12 np0005535469 systemd[1]: Reloading.
Nov 25 11:05:12 np0005535469 podman[163340]: 2025-11-25 16:05:12.276110881 +0000 UTC m=+0.584571764 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:05:12 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:05:12 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:05:12 np0005535469 systemd[1]: Started ovn_metadata_agent container.
Nov 25 11:05:13 np0005535469 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 11:05:13 np0005535469 systemd[1]: session-48.scope: Consumed 51.715s CPU time.
Nov 25 11:05:13 np0005535469 systemd-logind[791]: Session 48 logged out. Waiting for processes to exit.
Nov 25 11:05:13 np0005535469 systemd-logind[791]: Removed session 48.
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.510 163338 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.511 163338 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.511 163338 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.512 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.512 163338 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.513 163338 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.513 163338 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.513 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.514 163338 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.514 163338 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.514 163338 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.514 163338 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.515 163338 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.515 163338 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.515 163338 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.515 163338 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.516 163338 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.516 163338 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.516 163338 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.516 163338 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.517 163338 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.517 163338 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.517 163338 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.517 163338 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.518 163338 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.518 163338 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.518 163338 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.518 163338 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.518 163338 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.519 163338 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.519 163338 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.519 163338 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.520 163338 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.520 163338 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.520 163338 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.520 163338 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.521 163338 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.521 163338 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.521 163338 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.522 163338 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.522 163338 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.522 163338 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.522 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.523 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.523 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.523 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.523 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.523 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.524 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.524 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.524 163338 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.524 163338 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.524 163338 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.525 163338 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.525 163338 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.525 163338 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.525 163338 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.526 163338 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.526 163338 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.526 163338 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.526 163338 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.526 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.527 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.527 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.527 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.528 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.528 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.528 163338 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.528 163338 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.528 163338 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.529 163338 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.529 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.529 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.529 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.530 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.530 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.530 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.530 163338 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.531 163338 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.531 163338 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.531 163338 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.531 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.532 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.532 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.532 163338 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.532 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.533 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.533 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.533 163338 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.533 163338 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.534 163338 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.534 163338 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.534 163338 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.534 163338 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.535 163338 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.536 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.537 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.537 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.537 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.537 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.537 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.537 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.538 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.538 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.538 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.538 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.538 163338 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.538 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.539 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.539 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.539 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.539 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.539 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.539 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.540 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.541 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.541 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.541 163338 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.541 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.541 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.541 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.542 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.543 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.543 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.543 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.543 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.543 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.543 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.544 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.545 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.545 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.545 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.545 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.545 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.545 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.546 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.546 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.546 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.546 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.546 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.546 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.547 163338 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.547 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.547 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.547 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.547 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.547 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.548 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.548 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.548 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.548 163338 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.548 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.548 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.549 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.549 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.549 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.549 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.549 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.549 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.550 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.551 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.551 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.551 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.551 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.551 163338 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.551 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.552 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.553 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.554 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.554 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.554 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.554 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.554 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.554 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.555 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.556 163338 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.557 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.557 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.557 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.557 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.557 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.557 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.558 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.558 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.558 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.558 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.558 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.558 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.559 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.559 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.559 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.559 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.559 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.559 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.560 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.560 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.560 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.560 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.560 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.560 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.561 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.562 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.562 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.562 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.562 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.562 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.562 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.563 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.564 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.564 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.564 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.564 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.564 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.564 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.565 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.566 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.566 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.566 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.566 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.566 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.566 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.567 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.567 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.567 163338 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.567 163338 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.578 163338 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.578 163338 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.578 163338 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.579 163338 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.579 163338 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.595 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 124ba75b-cc77-4d76-b66e-16ed1bbb2181 (UUID: 124ba75b-cc77-4d76-b66e-16ed1bbb2181) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.625 163338 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.626 163338 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.626 163338 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.626 163338 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.629 163338 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.635 163338 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.643 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '124ba75b-cc77-4d76-b66e-16ed1bbb2181'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], external_ids={}, name=124ba75b-cc77-4d76-b66e-16ed1bbb2181, nb_cfg_timestamp=1764086654598, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.644 163338 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe0ed0aab20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.645 163338 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.650 163338 DEBUG oslo_service.service [-] Started child 163448 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.653 163338 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpcudzvxuu/privsep.sock']#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.653 163448 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-232090'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.678 163448 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.679 163448 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.679 163448 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.681 163448 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.688 163448 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 25 11:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:13.693 163448 INFO eventlet.wsgi.server [-] (163448) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 25 11:05:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:14 np0005535469 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.349 163338 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.350 163338 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcudzvxuu/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.245 163453 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.252 163453 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.256 163453 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.256 163453 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163453#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.352 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe74f8b-5971-4dfd-a075-34c2eb8696ca]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.855 163453 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.855 163453 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:05:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:14.855 163453 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.361 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9c24fa69-e4ef-4f17-ab15-69d2fdbd5e40]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.363 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, column=external_ids, values=({'neutron:ovn-metadata-id': '2bacac2d-c551-5f2b-b1fb-56d2a112be8e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.392 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.406 163338 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.406 163338 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.406 163338 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.406 163338 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.406 163338 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.406 163338 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.407 163338 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.408 163338 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.409 163338 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.410 163338 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.411 163338 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.412 163338 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.413 163338 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.414 163338 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.415 163338 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.416 163338 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.417 163338 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.418 163338 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.419 163338 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.420 163338 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.421 163338 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.422 163338 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.423 163338 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.424 163338 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.425 163338 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.426 163338 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.427 163338 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.428 163338 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.429 163338 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.430 163338 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.431 163338 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.432 163338 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.433 163338 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.434 163338 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.435 163338 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.436 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.437 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.438 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.439 163338 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.440 163338 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.440 163338 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.440 163338 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:05:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:05:15.440 163338 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 11:05:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:18 np0005535469 systemd-logind[791]: New session 49 of user zuul.
Nov 25 11:05:18 np0005535469 systemd[1]: Started Session 49 of User zuul.
Nov 25 11:05:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:05:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d2eb7c5d-4bf3-4218-b8fe-accebe8766e2 does not exist
Nov 25 11:05:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev efdac485-6af9-4f68-9311-4e10cc61b1f0 does not exist
Nov 25 11:05:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dbf527bb-b4d2-4cf8-b5cd-59537b376ba7 does not exist
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:05:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:05:19 np0005535469 python3.9[163802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:05:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:05:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:05:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:05:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.330284159 +0000 UTC m=+0.047406151 container create ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:05:19 np0005535469 systemd[1]: Started libpod-conmon-ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5.scope.
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.309726744 +0000 UTC m=+0.026848766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:05:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.441731336 +0000 UTC m=+0.158853318 container init ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.449105367 +0000 UTC m=+0.166227339 container start ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.452180801 +0000 UTC m=+0.169302783 container attach ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:05:19 np0005535469 brave_bardeen[163907]: 167 167
Nov 25 11:05:19 np0005535469 systemd[1]: libpod-ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5.scope: Deactivated successfully.
Nov 25 11:05:19 np0005535469 conmon[163907]: conmon ff162669e5f0671a547e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5.scope/container/memory.events
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.455604195 +0000 UTC m=+0.172726177 container died ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:05:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-18c6cbe94a4e50c24dda66ba9c0c1589b90b6063467caae7c283a7633b5e4c93-merged.mount: Deactivated successfully.
Nov 25 11:05:19 np0005535469 podman[163890]: 2025-11-25 16:05:19.495034067 +0000 UTC m=+0.212156049 container remove ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 11:05:19 np0005535469 systemd[1]: libpod-conmon-ff162669e5f0671a547e19443e35958d7f7c10f2638cbec7da6791c6457dddb5.scope: Deactivated successfully.
Nov 25 11:05:19 np0005535469 podman[163953]: 2025-11-25 16:05:19.639065437 +0000 UTC m=+0.035135455 container create 60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 11:05:19 np0005535469 systemd[1]: Started libpod-conmon-60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05.scope.
Nov 25 11:05:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b07cf58e1513f34cf99a01c256149f6c6f8007a8c33dd0ad84b8bcc59087b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b07cf58e1513f34cf99a01c256149f6c6f8007a8c33dd0ad84b8bcc59087b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b07cf58e1513f34cf99a01c256149f6c6f8007a8c33dd0ad84b8bcc59087b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b07cf58e1513f34cf99a01c256149f6c6f8007a8c33dd0ad84b8bcc59087b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02b07cf58e1513f34cf99a01c256149f6c6f8007a8c33dd0ad84b8bcc59087b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:19 np0005535469 podman[163953]: 2025-11-25 16:05:19.706389394 +0000 UTC m=+0.102459442 container init 60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 25 11:05:19 np0005535469 podman[163953]: 2025-11-25 16:05:19.717705814 +0000 UTC m=+0.113775842 container start 60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:05:19 np0005535469 podman[163953]: 2025-11-25 16:05:19.624879868 +0000 UTC m=+0.020949906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:05:19 np0005535469 podman[163953]: 2025-11-25 16:05:19.721365094 +0000 UTC m=+0.117435122 container attach 60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:05:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:20 np0005535469 python3.9[164101]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:20 np0005535469 quirky_yalow[163989]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:05:20 np0005535469 quirky_yalow[163989]: --> relative data size: 1.0
Nov 25 11:05:20 np0005535469 quirky_yalow[163989]: --> All data devices are unavailable
Nov 25 11:05:20 np0005535469 systemd[1]: libpod-60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05.scope: Deactivated successfully.
Nov 25 11:05:20 np0005535469 podman[163953]: 2025-11-25 16:05:20.70969461 +0000 UTC m=+1.105764648 container died 60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:05:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-02b07cf58e1513f34cf99a01c256149f6c6f8007a8c33dd0ad84b8bcc59087b0-merged.mount: Deactivated successfully.
Nov 25 11:05:20 np0005535469 podman[163953]: 2025-11-25 16:05:20.774089906 +0000 UTC m=+1.170159924 container remove 60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:05:20 np0005535469 systemd[1]: libpod-conmon-60b1bc97be7183a116758e4de8f39658dd5b71e7b7868c5bbdd7e787fca5ad05.scope: Deactivated successfully.
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.447006462 +0000 UTC m=+0.042418855 container create 068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:05:21 np0005535469 systemd[1]: Started libpod-conmon-068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842.scope.
Nov 25 11:05:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.42506758 +0000 UTC m=+0.020480043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.525595687 +0000 UTC m=+0.121008130 container init 068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.531801758 +0000 UTC m=+0.127214151 container start 068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.53478817 +0000 UTC m=+0.130200563 container attach 068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:05:21 np0005535469 laughing_fermi[164459]: 167 167
Nov 25 11:05:21 np0005535469 systemd[1]: libpod-068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842.scope: Deactivated successfully.
Nov 25 11:05:21 np0005535469 conmon[164459]: conmon 068efbeefb3241f093fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842.scope/container/memory.events
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.536717782 +0000 UTC m=+0.132130175 container died 068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:05:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-db85efbbfd53da8893794f6011cca5e13cac763988c0d9016ad5ef9b0b3244ff-merged.mount: Deactivated successfully.
Nov 25 11:05:21 np0005535469 podman[164443]: 2025-11-25 16:05:21.567296541 +0000 UTC m=+0.162708934 container remove 068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:05:21 np0005535469 systemd[1]: libpod-conmon-068efbeefb3241f093fdbeacac19ae62b2ac597fb59bd913201df049243ff842.scope: Deactivated successfully.
Nov 25 11:05:21 np0005535469 python3.9[164421]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:05:21 np0005535469 systemd[1]: Reloading.
Nov 25 11:05:21 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:05:21 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:05:21 np0005535469 podman[164500]: 2025-11-25 16:05:21.733725376 +0000 UTC m=+0.050952389 container create ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:05:21 np0005535469 podman[164500]: 2025-11-25 16:05:21.711016583 +0000 UTC m=+0.028243626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:05:21 np0005535469 systemd[1]: Started libpod-conmon-ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c.scope.
Nov 25 11:05:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9192f35af7cc8901b9be68736a90ae5d3649a462591ff32e97b1ffbcba4eff6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9192f35af7cc8901b9be68736a90ae5d3649a462591ff32e97b1ffbcba4eff6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9192f35af7cc8901b9be68736a90ae5d3649a462591ff32e97b1ffbcba4eff6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9192f35af7cc8901b9be68736a90ae5d3649a462591ff32e97b1ffbcba4eff6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:21 np0005535469 podman[164500]: 2025-11-25 16:05:21.925007812 +0000 UTC m=+0.242234845 container init ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chaum, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:05:21 np0005535469 podman[164500]: 2025-11-25 16:05:21.932453266 +0000 UTC m=+0.249680289 container start ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:05:21 np0005535469 podman[164500]: 2025-11-25 16:05:21.935772647 +0000 UTC m=+0.252999660 container attach ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:05:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:22 np0005535469 loving_chaum[164534]: {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:    "0": [
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:        {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "devices": [
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "/dev/loop3"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            ],
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_name": "ceph_lv0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_size": "21470642176",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "name": "ceph_lv0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "tags": {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cluster_name": "ceph",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.crush_device_class": "",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.encrypted": "0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osd_id": "0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.type": "block",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.vdo": "0"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            },
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "type": "block",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "vg_name": "ceph_vg0"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:        }
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:    ],
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:    "1": [
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:        {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "devices": [
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "/dev/loop4"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            ],
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_name": "ceph_lv1",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_size": "21470642176",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "name": "ceph_lv1",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "tags": {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cluster_name": "ceph",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.crush_device_class": "",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.encrypted": "0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osd_id": "1",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.type": "block",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.vdo": "0"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            },
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "type": "block",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "vg_name": "ceph_vg1"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:        }
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:    ],
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:    "2": [
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:        {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "devices": [
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "/dev/loop5"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            ],
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_name": "ceph_lv2",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_size": "21470642176",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "name": "ceph_lv2",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "tags": {
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.cluster_name": "ceph",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.crush_device_class": "",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.encrypted": "0",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osd_id": "2",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.type": "block",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:                "ceph.vdo": "0"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            },
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "type": "block",
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:            "vg_name": "ceph_vg2"
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:        }
Nov 25 11:05:22 np0005535469 loving_chaum[164534]:    ]
Nov 25 11:05:22 np0005535469 loving_chaum[164534]: }
Nov 25 11:05:22 np0005535469 python3.9[164688]: ansible-ansible.builtin.service_facts Invoked
Nov 25 11:05:22 np0005535469 systemd[1]: libpod-ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c.scope: Deactivated successfully.
Nov 25 11:05:22 np0005535469 podman[164500]: 2025-11-25 16:05:22.68481859 +0000 UTC m=+1.002045603 container died ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chaum, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:05:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9192f35af7cc8901b9be68736a90ae5d3649a462591ff32e97b1ffbcba4eff6b-merged.mount: Deactivated successfully.
Nov 25 11:05:22 np0005535469 network[164720]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 11:05:22 np0005535469 network[164721]: 'network-scripts' will be removed from distribution in near future.
Nov 25 11:05:22 np0005535469 network[164722]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 11:05:22 np0005535469 podman[164500]: 2025-11-25 16:05:22.735987893 +0000 UTC m=+1.053214946 container remove ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:05:22 np0005535469 systemd[1]: libpod-conmon-ff4743dce254771aba32fa18f1b10e4ac366090662e182dac6ca4125b9ada32c.scope: Deactivated successfully.
Nov 25 11:05:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.845223175 +0000 UTC m=+0.035225026 container create 541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 11:05:23 np0005535469 systemd[1]: Started libpod-conmon-541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c.scope.
Nov 25 11:05:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.917889198 +0000 UTC m=+0.107891059 container init 541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.82860642 +0000 UTC m=+0.018608291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.929924089 +0000 UTC m=+0.119925940 container start 541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.933769374 +0000 UTC m=+0.123771275 container attach 541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:05:23 np0005535469 vigorous_mccarthy[164906]: 167 167
Nov 25 11:05:23 np0005535469 systemd[1]: libpod-541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c.scope: Deactivated successfully.
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.935713348 +0000 UTC m=+0.125715199 container died 541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:05:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e368f824a586bc82156c8aa1b41d4f34c8cb5d3dcd39156d39a2155b1f42c897-merged.mount: Deactivated successfully.
Nov 25 11:05:23 np0005535469 podman[164886]: 2025-11-25 16:05:23.975222291 +0000 UTC m=+0.165224152 container remove 541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:05:23 np0005535469 systemd[1]: libpod-conmon-541ac90e7d80a1d11ca90bb00eca5e0975c9f7eda08abfaf0dcd714b4854195c.scope: Deactivated successfully.
Nov 25 11:05:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:24 np0005535469 podman[164939]: 2025-11-25 16:05:24.136070172 +0000 UTC m=+0.024633786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:05:24 np0005535469 podman[164939]: 2025-11-25 16:05:24.287576628 +0000 UTC m=+0.176140182 container create c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hofstadter, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:05:24 np0005535469 systemd[1]: Started libpod-conmon-c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12.scope.
Nov 25 11:05:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:05:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f4db2b8ac544f1781bf7fd18cd6326d41860676819b49dec7363c206ebfe97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f4db2b8ac544f1781bf7fd18cd6326d41860676819b49dec7363c206ebfe97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f4db2b8ac544f1781bf7fd18cd6326d41860676819b49dec7363c206ebfe97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f4db2b8ac544f1781bf7fd18cd6326d41860676819b49dec7363c206ebfe97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:24 np0005535469 podman[164939]: 2025-11-25 16:05:24.890510714 +0000 UTC m=+0.779074268 container init c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hofstadter, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:05:24 np0005535469 podman[164939]: 2025-11-25 16:05:24.897582478 +0000 UTC m=+0.786146002 container start c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:05:25 np0005535469 podman[164939]: 2025-11-25 16:05:25.008357616 +0000 UTC m=+0.896921160 container attach c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]: {
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "osd_id": 1,
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "type": "bluestore"
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:    },
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "osd_id": 2,
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "type": "bluestore"
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:    },
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "osd_id": 0,
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:        "type": "bluestore"
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]:    }
Nov 25 11:05:25 np0005535469 goofy_hofstadter[164956]: }
Nov 25 11:05:25 np0005535469 systemd[1]: libpod-c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12.scope: Deactivated successfully.
Nov 25 11:05:25 np0005535469 podman[164939]: 2025-11-25 16:05:25.867396316 +0000 UTC m=+1.755959840 container died c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 11:05:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-17f4db2b8ac544f1781bf7fd18cd6326d41860676819b49dec7363c206ebfe97-merged.mount: Deactivated successfully.
Nov 25 11:05:26 np0005535469 podman[164939]: 2025-11-25 16:05:26.656137218 +0000 UTC m=+2.544700782 container remove c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:05:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:05:26 np0005535469 systemd[1]: libpod-conmon-c9e1f72209cbbc1732faf50f014b6f9f5115d74506169855c79e9ae696a24d12.scope: Deactivated successfully.
Nov 25 11:05:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:05:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:05:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:05:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 86e99eeb-49ed-4335-99df-4a4f5b7761f5 does not exist
Nov 25 11:05:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dd7cbb0d-5dd2-41e1-ae40-da3366d6b866 does not exist
Nov 25 11:05:27 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:05:27 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:05:28 np0005535469 python3.9[165274]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:28 np0005535469 python3.9[165427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:29 np0005535469 python3.9[165580]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:30 np0005535469 python3.9[165733]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:31 np0005535469 python3.9[165886]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:31 np0005535469 python3.9[166039]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:32 np0005535469 python3.9[166192]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:05:33 np0005535469 python3.9[166345]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:34 np0005535469 python3.9[166497]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:34 np0005535469 python3.9[166649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:35 np0005535469 python3.9[166801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:35 np0005535469 python3.9[166953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:36 np0005535469 python3.9[167105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:37 np0005535469 python3.9[167257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:37 np0005535469 podman[167381]: 2025-11-25 16:05:37.655409075 +0000 UTC m=+0.089348741 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 11:05:37 np0005535469 python3.9[167427]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:38 np0005535469 python3.9[167587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:39 np0005535469 python3.9[167739]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:39 np0005535469 python3.9[167891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:05:39
Nov 25 11:05:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:05:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:05:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control']
Nov 25 11:05:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:05:40 np0005535469 python3.9[168043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:41 np0005535469 python3.9[168195]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:42 np0005535469 python3.9[168347]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:05:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:42 np0005535469 python3.9[168499]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:42 np0005535469 podman[168502]: 2025-11-25 16:05:42.800074874 +0000 UTC m=+0.053339114 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 25 11:05:43 np0005535469 python3.9[168670]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 11:05:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:44 np0005535469 python3.9[168822]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:05:44 np0005535469 systemd[1]: Reloading.
Nov 25 11:05:44 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:05:44 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:05:45 np0005535469 python3.9[169010]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:46 np0005535469 python3.9[169163]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:46 np0005535469 python3.9[169316]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:47 np0005535469 python3.9[169469]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:49 np0005535469 python3.9[169622]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:49 np0005535469 python3.9[169775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:50 np0005535469 python3.9[169928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:05:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:05:51 np0005535469 python3.9[170081]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 11:05:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:52 np0005535469 python3.9[170234]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 11:05:53 np0005535469 python3.9[170392]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 11:05:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:05:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:54 np0005535469 python3.9[170552]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 11:05:55 np0005535469 python3.9[170636]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:05:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:05:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 11:06:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 11:06:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 11:06:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 11:06:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 11:06:08 np0005535469 podman[170823]: 2025-11-25 16:06:08.706595203 +0000 UTC m=+0.111280362 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:06:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:06:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 11:06:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Nov 25 11:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:06:13.569 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:06:13.569 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:06:13.569 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:06:13 np0005535469 podman[170854]: 2025-11-25 16:06:13.671236877 +0000 UTC m=+0.080485153 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 11:06:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 25 11:06:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 25 11:06:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:23 np0005535469 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 11:06:23 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 11:06:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:27 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 11:06:27 np0005535469 podman[171052]: 2025-11-25 16:06:27.772531856 +0000 UTC m=+0.055346068 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:06:27 np0005535469 podman[171052]: 2025-11-25 16:06:27.923080592 +0000 UTC m=+0.205894794 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:06:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:06:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:06:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2fa5b711-ba0d-4eed-a702-f72b9102ecf9 does not exist
Nov 25 11:06:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cc3bcc70-3c78-4536-8260-35898da80d2d does not exist
Nov 25 11:06:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b713a854-cb3b-4f9a-bf08-acc9e810b03f does not exist
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.615451718 +0000 UTC m=+0.040915360 container create fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_heyrovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:06:29 np0005535469 systemd[1]: Started libpod-conmon-fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618.scope.
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.594380627 +0000 UTC m=+0.019844299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:06:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.720375765 +0000 UTC m=+0.145839427 container init fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.727559052 +0000 UTC m=+0.153022694 container start fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_heyrovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:06:29 np0005535469 silly_heyrovsky[171497]: 167 167
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.734543096 +0000 UTC m=+0.160006768 container attach fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_heyrovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 11:06:29 np0005535469 systemd[1]: libpod-fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618.scope: Deactivated successfully.
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.735350148 +0000 UTC m=+0.160813790 container died fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:06:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8e67cb1fafea44504687d130e0310b093e0cc5fc4192e6cd9ce3de798b7757cd-merged.mount: Deactivated successfully.
Nov 25 11:06:29 np0005535469 podman[171481]: 2025-11-25 16:06:29.778846488 +0000 UTC m=+0.204310140 container remove fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_heyrovsky, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:06:29 np0005535469 systemd[1]: libpod-conmon-fbf26aea6de8a70e90f1331029a59ea9174f6bf6be91043a74cffa796eec8618.scope: Deactivated successfully.
Nov 25 11:06:29 np0005535469 podman[171521]: 2025-11-25 16:06:29.925475726 +0000 UTC m=+0.036338534 container create 5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:06:29 np0005535469 systemd[1]: Started libpod-conmon-5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27.scope.
Nov 25 11:06:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:06:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada781eb4cfa6cf8c2b08abaa052c4db55249bd4647711de3201a56bddd8ca47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada781eb4cfa6cf8c2b08abaa052c4db55249bd4647711de3201a56bddd8ca47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada781eb4cfa6cf8c2b08abaa052c4db55249bd4647711de3201a56bddd8ca47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada781eb4cfa6cf8c2b08abaa052c4db55249bd4647711de3201a56bddd8ca47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada781eb4cfa6cf8c2b08abaa052c4db55249bd4647711de3201a56bddd8ca47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:30 np0005535469 podman[171521]: 2025-11-25 16:06:30.001309239 +0000 UTC m=+0.112172067 container init 5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:06:30 np0005535469 podman[171521]: 2025-11-25 16:06:29.909928216 +0000 UTC m=+0.020791044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:06:30 np0005535469 podman[171521]: 2025-11-25 16:06:30.007795518 +0000 UTC m=+0.118658326 container start 5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_swirles, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:06:30 np0005535469 podman[171521]: 2025-11-25 16:06:30.011036857 +0000 UTC m=+0.121899685 container attach 5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_swirles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:06:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:31 np0005535469 laughing_swirles[171538]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:06:31 np0005535469 laughing_swirles[171538]: --> relative data size: 1.0
Nov 25 11:06:31 np0005535469 laughing_swirles[171538]: --> All data devices are unavailable
Nov 25 11:06:31 np0005535469 systemd[1]: libpod-5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27.scope: Deactivated successfully.
Nov 25 11:06:31 np0005535469 podman[171521]: 2025-11-25 16:06:31.042005656 +0000 UTC m=+1.152868464 container died 5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_swirles, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:06:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ada781eb4cfa6cf8c2b08abaa052c4db55249bd4647711de3201a56bddd8ca47-merged.mount: Deactivated successfully.
Nov 25 11:06:31 np0005535469 podman[171521]: 2025-11-25 16:06:31.096742227 +0000 UTC m=+1.207605035 container remove 5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:06:31 np0005535469 systemd[1]: libpod-conmon-5159243bc1693a6cb7e18152426992caed817ff56272b5317dbf1e9b63bffa27.scope: Deactivated successfully.
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.643491079 +0000 UTC m=+0.052679595 container create 6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ptolemy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:06:31 np0005535469 systemd[1]: Started libpod-conmon-6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532.scope.
Nov 25 11:06:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.627752845 +0000 UTC m=+0.036941361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.719672352 +0000 UTC m=+0.128860888 container init 6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.725920395 +0000 UTC m=+0.135108911 container start 6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ptolemy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.728719863 +0000 UTC m=+0.137908399 container attach 6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:06:31 np0005535469 stupefied_ptolemy[171738]: 167 167
Nov 25 11:06:31 np0005535469 systemd[1]: libpod-6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532.scope: Deactivated successfully.
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.731740416 +0000 UTC m=+0.140928932 container died 6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:06:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-43a1919d22a7e67f82d5788e63c0c503de4d6c90d4b3f45b301eb5d8fd437ad1-merged.mount: Deactivated successfully.
Nov 25 11:06:31 np0005535469 podman[171721]: 2025-11-25 16:06:31.765429636 +0000 UTC m=+0.174618142 container remove 6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:06:31 np0005535469 systemd[1]: libpod-conmon-6cca84ffe696954a0b9330ec45af9a68f993c69347b458797fcd0806eb011532.scope: Deactivated successfully.
Nov 25 11:06:31 np0005535469 podman[171760]: 2025-11-25 16:06:31.906097579 +0000 UTC m=+0.035229504 container create 74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:06:31 np0005535469 systemd[1]: Started libpod-conmon-74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977.scope.
Nov 25 11:06:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:06:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df76198d4f52e30b5a68ea4afa05e8d0a399c7889d452ae730b7d93aedf48df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df76198d4f52e30b5a68ea4afa05e8d0a399c7889d452ae730b7d93aedf48df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df76198d4f52e30b5a68ea4afa05e8d0a399c7889d452ae730b7d93aedf48df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df76198d4f52e30b5a68ea4afa05e8d0a399c7889d452ae730b7d93aedf48df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:31 np0005535469 podman[171760]: 2025-11-25 16:06:31.985527351 +0000 UTC m=+0.114659306 container init 74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:06:31 np0005535469 podman[171760]: 2025-11-25 16:06:31.890887238 +0000 UTC m=+0.020019183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:06:31 np0005535469 podman[171760]: 2025-11-25 16:06:31.992228737 +0000 UTC m=+0.121360662 container start 74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 11:06:31 np0005535469 podman[171760]: 2025-11-25 16:06:31.995432845 +0000 UTC m=+0.124564850 container attach 74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:06:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]: {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:    "0": [
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:        {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "devices": [
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "/dev/loop3"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            ],
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_name": "ceph_lv0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_size": "21470642176",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "name": "ceph_lv0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "tags": {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cluster_name": "ceph",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.crush_device_class": "",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.encrypted": "0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osd_id": "0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.type": "block",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.vdo": "0"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            },
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "type": "block",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "vg_name": "ceph_vg0"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:        }
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:    ],
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:    "1": [
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:        {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "devices": [
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "/dev/loop4"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            ],
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_name": "ceph_lv1",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_size": "21470642176",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "name": "ceph_lv1",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "tags": {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cluster_name": "ceph",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.crush_device_class": "",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.encrypted": "0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osd_id": "1",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.type": "block",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.vdo": "0"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            },
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "type": "block",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "vg_name": "ceph_vg1"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:        }
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:    ],
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:    "2": [
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:        {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "devices": [
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "/dev/loop5"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            ],
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_name": "ceph_lv2",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_size": "21470642176",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "name": "ceph_lv2",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "tags": {
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.cluster_name": "ceph",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.crush_device_class": "",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.encrypted": "0",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osd_id": "2",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.type": "block",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:                "ceph.vdo": "0"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            },
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "type": "block",
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:            "vg_name": "ceph_vg2"
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:        }
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]:    ]
Nov 25 11:06:32 np0005535469 quizzical_bell[171777]: }
Nov 25 11:06:32 np0005535469 systemd[1]: libpod-74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977.scope: Deactivated successfully.
Nov 25 11:06:32 np0005535469 podman[171760]: 2025-11-25 16:06:32.767519787 +0000 UTC m=+0.896651772 container died 74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:06:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0df76198d4f52e30b5a68ea4afa05e8d0a399c7889d452ae730b7d93aedf48df-merged.mount: Deactivated successfully.
Nov 25 11:06:32 np0005535469 podman[171760]: 2025-11-25 16:06:32.924961804 +0000 UTC m=+1.054093729 container remove 74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:06:32 np0005535469 systemd[1]: libpod-conmon-74c12cc138382d57b67abbcfeff6d684e9dc2643e6d2d98f7003a016397cd977.scope: Deactivated successfully.
Nov 25 11:06:33 np0005535469 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 11:06:33 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 11:06:33 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.530856979 +0000 UTC m=+0.044559782 container create 894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 25 11:06:33 np0005535469 systemd[1]: Started libpod-conmon-894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687.scope.
Nov 25 11:06:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.514838516 +0000 UTC m=+0.028541339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.612300937 +0000 UTC m=+0.126003760 container init 894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.618704863 +0000 UTC m=+0.132407666 container start 894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:06:33 np0005535469 systemd[1]: libpod-894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687.scope: Deactivated successfully.
Nov 25 11:06:33 np0005535469 gifted_ramanujan[171965]: 167 167
Nov 25 11:06:33 np0005535469 conmon[171965]: conmon 894cacc9760725e725b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687.scope/container/memory.events
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.625160891 +0000 UTC m=+0.138863714 container attach 894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.625489351 +0000 UTC m=+0.139192154 container died 894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:06:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0413c50b2b98402fedba391c69814e5496431c3ebe1d0005d4c9853ade49eea9-merged.mount: Deactivated successfully.
Nov 25 11:06:33 np0005535469 podman[171949]: 2025-11-25 16:06:33.664463107 +0000 UTC m=+0.178165910 container remove 894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:06:33 np0005535469 systemd[1]: libpod-conmon-894cacc9760725e725b5dfd0da635ce78d5f4e3127951bb42eb719648df0f687.scope: Deactivated successfully.
Nov 25 11:06:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:33 np0005535469 podman[171988]: 2025-11-25 16:06:33.805609453 +0000 UTC m=+0.038767972 container create 77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:06:33 np0005535469 systemd[1]: Started libpod-conmon-77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82.scope.
Nov 25 11:06:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:06:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93898349b506ac7b275800ec8456bc1ccd3c932d1c9ba88b15d43eea8e2b28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93898349b506ac7b275800ec8456bc1ccd3c932d1c9ba88b15d43eea8e2b28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93898349b506ac7b275800ec8456bc1ccd3c932d1c9ba88b15d43eea8e2b28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93898349b506ac7b275800ec8456bc1ccd3c932d1c9ba88b15d43eea8e2b28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:06:33 np0005535469 podman[171988]: 2025-11-25 16:06:33.787084671 +0000 UTC m=+0.020243220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:06:33 np0005535469 podman[171988]: 2025-11-25 16:06:33.891312958 +0000 UTC m=+0.124471507 container init 77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_diffie, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:06:33 np0005535469 podman[171988]: 2025-11-25 16:06:33.897354455 +0000 UTC m=+0.130512974 container start 77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:06:33 np0005535469 podman[171988]: 2025-11-25 16:06:33.900365298 +0000 UTC m=+0.133523827 container attach 77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_diffie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:06:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:34 np0005535469 cool_diffie[172006]: {
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "osd_id": 1,
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "type": "bluestore"
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:    },
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "osd_id": 2,
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "type": "bluestore"
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:    },
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "osd_id": 0,
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:        "type": "bluestore"
Nov 25 11:06:34 np0005535469 cool_diffie[172006]:    }
Nov 25 11:06:34 np0005535469 cool_diffie[172006]: }
Nov 25 11:06:34 np0005535469 systemd[1]: libpod-77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82.scope: Deactivated successfully.
Nov 25 11:06:34 np0005535469 podman[171988]: 2025-11-25 16:06:34.832922761 +0000 UTC m=+1.066081290 container died 77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:06:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cc93898349b506ac7b275800ec8456bc1ccd3c932d1c9ba88b15d43eea8e2b28-merged.mount: Deactivated successfully.
Nov 25 11:06:34 np0005535469 podman[171988]: 2025-11-25 16:06:34.902532501 +0000 UTC m=+1.135691030 container remove 77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_diffie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:06:34 np0005535469 systemd[1]: libpod-conmon-77135441972ba6808127015ca1dacf191ee2d098414c8c7acd919800437d2f82.scope: Deactivated successfully.
Nov 25 11:06:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:06:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:06:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c102fa9b-fc62-4f5a-a28d-6403aed62d0f does not exist
Nov 25 11:06:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1e68adce-285d-46d1-aeef-4ecdcc76cac1 does not exist
Nov 25 11:06:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:06:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:39 np0005535469 podman[172101]: 2025-11-25 16:06:39.707653722 +0000 UTC m=+0.124571820 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 11:06:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:06:39
Nov 25 11:06:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:06:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:06:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.mgr', '.rgw.root']
Nov 25 11:06:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:06:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:44 np0005535469 podman[172130]: 2025-11-25 16:06:44.659515732 +0000 UTC m=+0.075050342 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 11:06:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:06:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:06:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:06:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:06:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:07:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:10 np0005535469 podman[187906]: 2025-11-25 16:07:10.684426762 +0000 UTC m=+0.102412982 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 11:07:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:07:13.570 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:07:13.570 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:07:13.571 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 25 11:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:13.706459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 25 11:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086833706588, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 251, "total_data_size": 3520009, "memory_usage": 3568984, "flush_reason": "Manual Compaction"}
Nov 25 11:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 25 11:07:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086834508194, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3455511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9834, "largest_seqno": 11872, "table_properties": {"data_size": 3446198, "index_size": 5935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17778, "raw_average_key_size": 19, "raw_value_size": 3427822, "raw_average_value_size": 3750, "num_data_blocks": 269, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086593, "oldest_key_time": 1764086593, "file_creation_time": 1764086833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 801755 microseconds, and 14576 cpu microseconds.
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.508238) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3455511 bytes OK
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.508262) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.808176) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.808228) EVENT_LOG_v1 {"time_micros": 1764086834808216, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.808255) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3511509, prev total WAL file size 3524811, number of live WAL files 2.
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.851036) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3374KB)], [26(6568KB)]
Nov 25 11:07:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086834851085, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10181463, "oldest_snapshot_seqno": -1}
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3802 keys, 8279719 bytes, temperature: kUnknown
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086835072424, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8279719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8250262, "index_size": 18851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91814, "raw_average_key_size": 24, "raw_value_size": 8177573, "raw_average_value_size": 2150, "num_data_blocks": 816, "num_entries": 3802, "num_filter_entries": 3802, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764086834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.072707) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8279719 bytes
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.154975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.0 rd, 37.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.4 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 4316, records dropped: 514 output_compression: NoCompression
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.155026) EVENT_LOG_v1 {"time_micros": 1764086835155004, "job": 10, "event": "compaction_finished", "compaction_time_micros": 221429, "compaction_time_cpu_micros": 19994, "output_level": 6, "num_output_files": 1, "total_output_size": 8279719, "num_input_records": 4316, "num_output_records": 3802, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086835155692, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086835157081, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:14.850954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.157181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.157186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.157188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.157189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:07:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:07:15.157191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:07:15 np0005535469 podman[188965]: 2025-11-25 16:07:15.621310672 +0000 UTC m=+0.047062861 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:07:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:28 np0005535469 kernel: SELinux:  Converting 2769 SID table entries...
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability open_perms=1
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability always_check_network=0
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 11:07:28 np0005535469 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 11:07:29 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 11:07:29 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 11:07:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:29 np0005535469 dbus-broker-launch[749]: Noticed file-system modification, trigger reload.
Nov 25 11:07:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:07:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c518c3ec-3a2f-4c3b-9aee-87bfcd756535 does not exist
Nov 25 11:07:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 53b2f546-3270-421a-a86b-b2b13bc91763 does not exist
Nov 25 11:07:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7c2dab89-2990-4d7c-9772-a7584d31552a does not exist
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:07:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:07:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:36 np0005535469 podman[189520]: 2025-11-25 16:07:36.43736374 +0000 UTC m=+0.019368601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:07:36 np0005535469 podman[189520]: 2025-11-25 16:07:36.630455936 +0000 UTC m=+0.212460787 container create a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:07:36 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:07:36 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:07:36 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:07:36 np0005535469 systemd[1]: Started libpod-conmon-a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd.scope.
Nov 25 11:07:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:07:36 np0005535469 podman[189520]: 2025-11-25 16:07:36.925152682 +0000 UTC m=+0.507157543 container init a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:07:36 np0005535469 podman[189520]: 2025-11-25 16:07:36.931676794 +0000 UTC m=+0.513681635 container start a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:07:36 np0005535469 great_feynman[189628]: 167 167
Nov 25 11:07:36 np0005535469 systemd[1]: libpod-a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd.scope: Deactivated successfully.
Nov 25 11:07:37 np0005535469 podman[189520]: 2025-11-25 16:07:37.052007805 +0000 UTC m=+0.634012656 container attach a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:07:37 np0005535469 podman[189520]: 2025-11-25 16:07:37.053247959 +0000 UTC m=+0.635252840 container died a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:07:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a51978420ed896adc8bf2bb93a0034f2a966e3c457c1ba71ddaaeba2c5d78032-merged.mount: Deactivated successfully.
Nov 25 11:07:37 np0005535469 podman[189520]: 2025-11-25 16:07:37.634354991 +0000 UTC m=+1.216359872 container remove a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:07:37 np0005535469 systemd[1]: libpod-conmon-a7d6360ec388c767d35af2bb96a88a12e42c24e12c35bd7bc6754417de1d4acd.scope: Deactivated successfully.
Nov 25 11:07:37 np0005535469 podman[190177]: 2025-11-25 16:07:37.79449357 +0000 UTC m=+0.020901084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:07:37 np0005535469 podman[190177]: 2025-11-25 16:07:37.932166323 +0000 UTC m=+0.158573827 container create 742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lamarr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 11:07:37 np0005535469 systemd[1]: Started libpod-conmon-742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77.scope.
Nov 25 11:07:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:07:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d11b5018ffede068bcd04dc84e85ca1281c2f14b2b45b6e4a8790989f19160/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d11b5018ffede068bcd04dc84e85ca1281c2f14b2b45b6e4a8790989f19160/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d11b5018ffede068bcd04dc84e85ca1281c2f14b2b45b6e4a8790989f19160/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d11b5018ffede068bcd04dc84e85ca1281c2f14b2b45b6e4a8790989f19160/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d11b5018ffede068bcd04dc84e85ca1281c2f14b2b45b6e4a8790989f19160/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:38 np0005535469 podman[190177]: 2025-11-25 16:07:38.022780176 +0000 UTC m=+0.249187700 container init 742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:07:38 np0005535469 podman[190177]: 2025-11-25 16:07:38.034534463 +0000 UTC m=+0.260941967 container start 742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:07:38 np0005535469 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 11:07:38 np0005535469 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 11:07:38 np0005535469 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 11:07:38 np0005535469 systemd[1]: sshd.service: Consumed 2.791s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 11:07:38 np0005535469 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 11:07:38 np0005535469 systemd[1]: Stopping sshd-keygen.target...
Nov 25 11:07:38 np0005535469 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 11:07:38 np0005535469 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 11:07:38 np0005535469 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 11:07:38 np0005535469 systemd[1]: Reached target sshd-keygen.target.
Nov 25 11:07:38 np0005535469 podman[190177]: 2025-11-25 16:07:38.042121725 +0000 UTC m=+0.268529249 container attach 742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:38 np0005535469 systemd[1]: Starting OpenSSH server daemon...
Nov 25 11:07:38 np0005535469 systemd[1]: Started OpenSSH server daemon.
Nov 25 11:07:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:39 np0005535469 keen_lamarr[190194]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:07:39 np0005535469 keen_lamarr[190194]: --> relative data size: 1.0
Nov 25 11:07:39 np0005535469 keen_lamarr[190194]: --> All data devices are unavailable
Nov 25 11:07:39 np0005535469 podman[190177]: 2025-11-25 16:07:39.076058475 +0000 UTC m=+1.302465979 container died 742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:07:39 np0005535469 systemd[1]: libpod-742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77.scope: Deactivated successfully.
Nov 25 11:07:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-78d11b5018ffede068bcd04dc84e85ca1281c2f14b2b45b6e4a8790989f19160-merged.mount: Deactivated successfully.
Nov 25 11:07:39 np0005535469 podman[190177]: 2025-11-25 16:07:39.422403359 +0000 UTC m=+1.648810863 container remove 742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:07:39 np0005535469 systemd[1]: libpod-conmon-742250b84c5ccfdedad2339510884faf53a98255c4c3d31eb794339e83677f77.scope: Deactivated successfully.
Nov 25 11:07:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:39 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 11:07:39 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 11:07:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:07:39
Nov 25 11:07:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:07:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:07:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 11:07:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:07:39 np0005535469 podman[190583]: 2025-11-25 16:07:39.986009993 +0000 UTC m=+0.039335427 container create 2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:07:40 np0005535469 systemd[1]: Started libpod-conmon-2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff.scope.
Nov 25 11:07:40 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:40 np0005535469 podman[190583]: 2025-11-25 16:07:39.969234106 +0000 UTC m=+0.022559570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:07:40 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:40 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:07:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:07:40 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 11:07:40 np0005535469 podman[190583]: 2025-11-25 16:07:40.365351635 +0000 UTC m=+0.418677089 container init 2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:07:40 np0005535469 podman[190583]: 2025-11-25 16:07:40.375859948 +0000 UTC m=+0.429185382 container start 2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:07:40 np0005535469 podman[190583]: 2025-11-25 16:07:40.37986708 +0000 UTC m=+0.433192544 container attach 2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:07:40 np0005535469 sad_chebyshev[190626]: 167 167
Nov 25 11:07:40 np0005535469 systemd[1]: libpod-2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff.scope: Deactivated successfully.
Nov 25 11:07:40 np0005535469 podman[190583]: 2025-11-25 16:07:40.385661832 +0000 UTC m=+0.438987266 container died 2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:07:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2965de097924a8d652388c00a6a3020439b07c828348c80efd24a7100ee7b690-merged.mount: Deactivated successfully.
Nov 25 11:07:40 np0005535469 podman[190583]: 2025-11-25 16:07:40.42761266 +0000 UTC m=+0.480938094 container remove 2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chebyshev, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:07:40 np0005535469 systemd[1]: libpod-conmon-2d7b09d467723bd85a90f124f9d6e0f24e2c6142645c02f87b33459bc78b14ff.scope: Deactivated successfully.
Nov 25 11:07:40 np0005535469 podman[191047]: 2025-11-25 16:07:40.569159381 +0000 UTC m=+0.021403547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:07:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:42 np0005535469 podman[191047]: 2025-11-25 16:07:42.480960566 +0000 UTC m=+1.933204702 container create a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:07:42 np0005535469 systemd[1]: Started libpod-conmon-a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716.scope.
Nov 25 11:07:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:07:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83f89e097c37310ccbc261198886643ccf16301750927b5aa061bfb28ddc6b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83f89e097c37310ccbc261198886643ccf16301750927b5aa061bfb28ddc6b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83f89e097c37310ccbc261198886643ccf16301750927b5aa061bfb28ddc6b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83f89e097c37310ccbc261198886643ccf16301750927b5aa061bfb28ddc6b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:42 np0005535469 podman[191047]: 2025-11-25 16:07:42.5698516 +0000 UTC m=+2.022095736 container init a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:07:42 np0005535469 podman[191047]: 2025-11-25 16:07:42.577927355 +0000 UTC m=+2.030171491 container start a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:42 np0005535469 podman[191047]: 2025-11-25 16:07:42.583107739 +0000 UTC m=+2.035351875 container attach a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_thompson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 11:07:42 np0005535469 podman[192407]: 2025-11-25 16:07:42.654663412 +0000 UTC m=+1.065317145 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]: {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:    "0": [
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:        {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "devices": [
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "/dev/loop3"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            ],
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_name": "ceph_lv0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_size": "21470642176",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "name": "ceph_lv0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "tags": {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cluster_name": "ceph",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.crush_device_class": "",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.encrypted": "0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osd_id": "0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.type": "block",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.vdo": "0"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            },
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "type": "block",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "vg_name": "ceph_vg0"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:        }
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:    ],
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:    "1": [
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:        {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "devices": [
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "/dev/loop4"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            ],
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_name": "ceph_lv1",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_size": "21470642176",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "name": "ceph_lv1",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "tags": {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cluster_name": "ceph",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.crush_device_class": "",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.encrypted": "0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osd_id": "1",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.type": "block",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.vdo": "0"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            },
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "type": "block",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "vg_name": "ceph_vg1"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:        }
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:    ],
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:    "2": [
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:        {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "devices": [
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "/dev/loop5"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            ],
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_name": "ceph_lv2",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_size": "21470642176",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "name": "ceph_lv2",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "tags": {
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.cluster_name": "ceph",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.crush_device_class": "",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.encrypted": "0",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osd_id": "2",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.type": "block",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:                "ceph.vdo": "0"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            },
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "type": "block",
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:            "vg_name": "ceph_vg2"
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:        }
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]:    ]
Nov 25 11:07:43 np0005535469 reverent_thompson[193480]: }
Nov 25 11:07:43 np0005535469 systemd[1]: libpod-a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716.scope: Deactivated successfully.
Nov 25 11:07:43 np0005535469 podman[191047]: 2025-11-25 16:07:43.363773508 +0000 UTC m=+2.816017644 container died a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_thompson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 11:07:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c83f89e097c37310ccbc261198886643ccf16301750927b5aa061bfb28ddc6b2-merged.mount: Deactivated successfully.
Nov 25 11:07:43 np0005535469 podman[191047]: 2025-11-25 16:07:43.41376864 +0000 UTC m=+2.866012776 container remove a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:07:43 np0005535469 systemd[1]: libpod-conmon-a34054669524175df636ae0b280470281324c30452a03f1ca5d1ff2276689716.scope: Deactivated successfully.
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:44.002361019 +0000 UTC m=+0.043752309 container create 186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:44 np0005535469 systemd[1]: Started libpod-conmon-186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941.scope.
Nov 25 11:07:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:43.983218476 +0000 UTC m=+0.024609786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:44.08642553 +0000 UTC m=+0.127816840 container init 186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:44.093702732 +0000 UTC m=+0.135094022 container start 186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:44.097680604 +0000 UTC m=+0.139071924 container attach 186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:07:44 np0005535469 dazzling_brahmagupta[195443]: 167 167
Nov 25 11:07:44 np0005535469 systemd[1]: libpod-186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941.scope: Deactivated successfully.
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:44.099699989 +0000 UTC m=+0.141091279 container died 186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:07:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9ddd6796324c4e2dd93c7d9fdc91ebe250c2aa27f2dc41af0abd669c5f788504-merged.mount: Deactivated successfully.
Nov 25 11:07:44 np0005535469 podman[195298]: 2025-11-25 16:07:44.136652359 +0000 UTC m=+0.178043649 container remove 186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:07:44 np0005535469 systemd[1]: libpod-conmon-186bd5861bcaa608de765e379adb27ea15b7eee0df8a7e73cd273177fe4f9941.scope: Deactivated successfully.
Nov 25 11:07:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:44 np0005535469 podman[195713]: 2025-11-25 16:07:44.311070935 +0000 UTC m=+0.057501292 container create 53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_blackburn, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:07:44 np0005535469 systemd[1]: Started libpod-conmon-53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f.scope.
Nov 25 11:07:44 np0005535469 podman[195713]: 2025-11-25 16:07:44.277149981 +0000 UTC m=+0.023580358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:07:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:07:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4bcb36fd72829907006005a89432b35f756c40d04868c8878a6f75557fae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4bcb36fd72829907006005a89432b35f756c40d04868c8878a6f75557fae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4bcb36fd72829907006005a89432b35f756c40d04868c8878a6f75557fae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4bcb36fd72829907006005a89432b35f756c40d04868c8878a6f75557fae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:07:44 np0005535469 podman[195713]: 2025-11-25 16:07:44.437828735 +0000 UTC m=+0.184259112 container init 53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:07:44 np0005535469 podman[195713]: 2025-11-25 16:07:44.44627913 +0000 UTC m=+0.192709487 container start 53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:07:44 np0005535469 python3.9[195594]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:07:44 np0005535469 podman[195713]: 2025-11-25 16:07:44.488555427 +0000 UTC m=+0.234985804 container attach 53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:07:44 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:44 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:44 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]: {
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "osd_id": 1,
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "type": "bluestore"
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:    },
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "osd_id": 2,
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "type": "bluestore"
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:    },
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "osd_id": 0,
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:        "type": "bluestore"
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]:    }
Nov 25 11:07:45 np0005535469 kind_blackburn[195844]: }
Nov 25 11:07:45 np0005535469 systemd[1]: libpod-53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f.scope: Deactivated successfully.
Nov 25 11:07:45 np0005535469 podman[195713]: 2025-11-25 16:07:45.399571475 +0000 UTC m=+1.146001852 container died 53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_blackburn, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:07:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a88b4bcb36fd72829907006005a89432b35f756c40d04868c8878a6f75557fae-merged.mount: Deactivated successfully.
Nov 25 11:07:45 np0005535469 podman[195713]: 2025-11-25 16:07:45.459532654 +0000 UTC m=+1.205963021 container remove 53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:07:45 np0005535469 systemd[1]: libpod-conmon-53daffbaa21af37efe841b07bb4855757f957695b612697243fddddf9c06635f.scope: Deactivated successfully.
Nov 25 11:07:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:07:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:07:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:07:45 np0005535469 python3.9[196954]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:07:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:07:45 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dcb19ba3-4dcf-4de1-b1ef-9f4fc01941cb does not exist
Nov 25 11:07:45 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 064c6c01-8843-4e88-bb6a-c7453a4ccbf6 does not exist
Nov 25 11:07:45 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:45 np0005535469 podman[197370]: 2025-11-25 16:07:45.73159103 +0000 UTC m=+0.071364418 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:45 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:45 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:07:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:07:46 np0005535469 python3.9[198359]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:07:46 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:46 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:46 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:47 np0005535469 python3.9[199540]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:07:47 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:47 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:47 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:48 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 11:07:48 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 11:07:48 np0005535469 systemd[1]: man-db-cache-update.service: Consumed 9.874s CPU time.
Nov 25 11:07:48 np0005535469 systemd[1]: run-r519be0c60eed4bf8911f78626c985f30.service: Deactivated successfully.
Nov 25 11:07:48 np0005535469 python3.9[200343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:48 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:49 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:49 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:49 np0005535469 python3.9[200534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:50 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:50 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:50 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:50 np0005535469 auditd[702]: Audit daemon rotating log files
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:07:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:07:51 np0005535469 python3.9[200723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:51 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:51 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:51 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:52 np0005535469 python3.9[200912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:52 np0005535469 python3.9[201067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:52 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:53 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:53 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:54 np0005535469 python3.9[201256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 11:07:54 np0005535469 systemd[1]: Reloading.
Nov 25 11:07:54 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:07:54 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:07:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:54 np0005535469 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 11:07:54 np0005535469 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 11:07:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:07:55 np0005535469 python3.9[201449]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:56 np0005535469 python3.9[201604]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:57 np0005535469 python3.9[201759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:57 np0005535469 python3.9[201914]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:07:58 np0005535469 python3.9[202069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:59 np0005535469 python3.9[202224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:07:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:00 np0005535469 python3.9[202379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:01 np0005535469 python3.9[202534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:01 np0005535469 python3.9[202689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:02 np0005535469 python3.9[202844]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:03 np0005535469 python3.9[202999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:04 np0005535469 python3.9[203154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:04 np0005535469 python3.9[203309]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:05 np0005535469 python3.9[203464]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 11:08:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:06 np0005535469 python3.9[203619]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:08:07 np0005535469 python3.9[203771]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:08:07 np0005535469 python3.9[203923]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:08:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:08 np0005535469 python3.9[204075]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:08:08 np0005535469 python3.9[204227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:08:09 np0005535469 python3.9[204379]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:08:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:08:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:10 np0005535469 python3.9[204531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:11 np0005535469 python3.9[204656]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086889.7651892-554-25422644512137/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:11 np0005535469 python3.9[204808]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:12 np0005535469 python3.9[204933]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086891.3404455-554-274232884897890/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:12 np0005535469 podman[205057]: 2025-11-25 16:08:12.878056331 +0000 UTC m=+0.104204263 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:08:13 np0005535469 python3.9[205101]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:08:13.571 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:08:13.572 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:08:13.572 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:13 np0005535469 python3.9[205235]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086892.5153756-554-164349029394637/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:14 np0005535469 python3.9[205387]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:14 np0005535469 python3.9[205512]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086893.776879-554-28004178035059/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:15 np0005535469 python3.9[205664]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:15 np0005535469 python3.9[205789]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086895.0264823-554-44690946739123/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:16 np0005535469 podman[205790]: 2025-11-25 16:08:16.078555872 +0000 UTC m=+0.056416811 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 11:08:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:16 np0005535469 python3.9[205960]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:17 np0005535469 python3.9[206085]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086896.177483-554-33476972791500/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:17 np0005535469 python3.9[206237]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:18 np0005535469 python3.9[206360]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086897.4056704-554-237219899569538/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:19 np0005535469 python3.9[206512]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:19 np0005535469 python3.9[206637]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764086898.5308943-554-34868076430645/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:20 np0005535469 python3.9[206789]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 11:08:20 np0005535469 python3.9[206942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:21 np0005535469 python3.9[207094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:22 np0005535469 python3.9[207246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:22 np0005535469 python3.9[207398]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:23 np0005535469 python3.9[207550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:24 np0005535469 python3.9[207702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:24 np0005535469 python3.9[207854]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:25 np0005535469 python3.9[208006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:26 np0005535469 python3.9[208158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:27 np0005535469 python3.9[208310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:27 np0005535469 python3.9[208462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:28 np0005535469 python3.9[208614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:29 np0005535469 python3.9[208766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:29 np0005535469 python3.9[208918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:30 np0005535469 python3.9[209070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:30 np0005535469 python3.9[209193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086909.8750446-775-65282144200515/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:31 np0005535469 python3.9[209345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:32 np0005535469 python3.9[209468]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086910.9967015-775-111090079915982/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:32 np0005535469 python3.9[209620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:33 np0005535469 python3.9[209743]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086912.2266-775-156002020588211/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:33 np0005535469 python3.9[209895]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:34 np0005535469 python3.9[210018]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086913.3705776-775-163784125281440/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:35 np0005535469 python3.9[210170]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:35 np0005535469 python3.9[210293]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086914.6396759-775-33587983397817/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:36 np0005535469 python3.9[210445]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:36 np0005535469 python3.9[210568]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086915.8090875-775-158894526579171/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:37 np0005535469 python3.9[210720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:38 np0005535469 python3.9[210843]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086917.0376344-775-230703295636636/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:38 np0005535469 python3.9[210995]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:39 np0005535469 python3.9[211118]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086918.1986234-775-214490709394790/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:39 np0005535469 python3.9[211270]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:08:39
Nov 25 11:08:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:08:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:08:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Nov 25 11:08:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:40 np0005535469 python3.9[211393]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086919.3395858-775-280693631321842/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:08:40 np0005535469 python3.9[211545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:41 np0005535469 python3.9[211668]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086920.4465492-775-59143182607003/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:42 np0005535469 python3.9[211820]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:42 np0005535469 python3.9[211943]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086921.577912-775-115225756525353/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:43 np0005535469 podman[212067]: 2025-11-25 16:08:43.129754497 +0000 UTC m=+0.103475655 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:08:43 np0005535469 python3.9[212115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:43 np0005535469 python3.9[212244]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086922.763177-775-56415073214468/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:44 np0005535469 python3.9[212396]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:44 np0005535469 python3.9[212519]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086923.9798696-775-167455782702315/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:45 np0005535469 python3.9[212671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:08:45 np0005535469 python3.9[212794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086925.0283418-775-245706680493760/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:46 np0005535469 podman[212843]: 2025-11-25 16:08:46.168554789 +0000 UTC m=+0.046574323 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:08:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:46 np0005535469 python3.9[213063]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:08:46 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e5cbb4ee-4c19-466e-820d-be051780a62e does not exist
Nov 25 11:08:46 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 17b1f5d7-1bfb-498e-99a1-483c4d17c0d2 does not exist
Nov 25 11:08:46 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 697e83c8-c717-4b7a-b953-e1503dc4f067 does not exist
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:08:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.373713342 +0000 UTC m=+0.045750509 container create 419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:08:47 np0005535469 systemd[1]: Started libpod-conmon-419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f.scope.
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.353736917 +0000 UTC m=+0.025774104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:08:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.469769243 +0000 UTC m=+0.141806420 container init 419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.477742201 +0000 UTC m=+0.149779358 container start 419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 11:08:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:08:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:08:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:08:47 np0005535469 great_jennings[213402]: 167 167
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.487665763 +0000 UTC m=+0.159702920 container attach 419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:08:47 np0005535469 systemd[1]: libpod-419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f.scope: Deactivated successfully.
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.488911886 +0000 UTC m=+0.160949043 container died 419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:08:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-11e37bdccc122e67d927465c561f581e950319ad585290a459854f5fae004121-merged.mount: Deactivated successfully.
Nov 25 11:08:47 np0005535469 podman[213357]: 2025-11-25 16:08:47.531079797 +0000 UTC m=+0.203116954 container remove 419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 11:08:47 np0005535469 systemd[1]: libpod-conmon-419cff2b6ea0b2052df92129222057a5cee404dc6e70bfedc48121c1e43fa76f.scope: Deactivated successfully.
Nov 25 11:08:47 np0005535469 python3.9[213404]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 11:08:47 np0005535469 podman[213428]: 2025-11-25 16:08:47.699522084 +0000 UTC m=+0.042160191 container create 466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:08:47 np0005535469 systemd[1]: Started libpod-conmon-466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037.scope.
Nov 25 11:08:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:08:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040c882c0c3d03e8fc3081556a19b3762261809399c1c3754fd3f2de4654a5b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040c882c0c3d03e8fc3081556a19b3762261809399c1c3754fd3f2de4654a5b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:47 np0005535469 podman[213428]: 2025-11-25 16:08:47.680181137 +0000 UTC m=+0.022819264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:08:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040c882c0c3d03e8fc3081556a19b3762261809399c1c3754fd3f2de4654a5b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040c882c0c3d03e8fc3081556a19b3762261809399c1c3754fd3f2de4654a5b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040c882c0c3d03e8fc3081556a19b3762261809399c1c3754fd3f2de4654a5b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:47 np0005535469 podman[213428]: 2025-11-25 16:08:47.80375519 +0000 UTC m=+0.146393327 container init 466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:08:47 np0005535469 podman[213428]: 2025-11-25 16:08:47.810419292 +0000 UTC m=+0.153057389 container start 466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:08:47 np0005535469 podman[213428]: 2025-11-25 16:08:47.815364537 +0000 UTC m=+0.158002694 container attach 466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:08:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:48 np0005535469 romantic_sanderson[213443]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:08:48 np0005535469 romantic_sanderson[213443]: --> relative data size: 1.0
Nov 25 11:08:48 np0005535469 romantic_sanderson[213443]: --> All data devices are unavailable
Nov 25 11:08:48 np0005535469 systemd[1]: libpod-466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037.scope: Deactivated successfully.
Nov 25 11:08:48 np0005535469 podman[213428]: 2025-11-25 16:08:48.798546082 +0000 UTC m=+1.141184189 container died 466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:08:48 np0005535469 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 11:08:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-040c882c0c3d03e8fc3081556a19b3762261809399c1c3754fd3f2de4654a5b1-merged.mount: Deactivated successfully.
Nov 25 11:08:48 np0005535469 podman[213428]: 2025-11-25 16:08:48.885154806 +0000 UTC m=+1.227792923 container remove 466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:08:48 np0005535469 systemd[1]: libpod-conmon-466484087b02a9584cc91381531685defcaa4e090de65682b85114591c75d037.scope: Deactivated successfully.
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.4997393 +0000 UTC m=+0.039889820 container create 8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:08:49 np0005535469 python3.9[213767]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:49 np0005535469 systemd[1]: Started libpod-conmon-8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2.scope.
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.483466156 +0000 UTC m=+0.023616706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:08:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.59681924 +0000 UTC m=+0.136969780 container init 8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.603910383 +0000 UTC m=+0.144060903 container start 8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.607244484 +0000 UTC m=+0.147395024 container attach 8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:08:49 np0005535469 infallible_keller[213798]: 167 167
Nov 25 11:08:49 np0005535469 systemd[1]: libpod-8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2.scope: Deactivated successfully.
Nov 25 11:08:49 np0005535469 conmon[213798]: conmon 8d6ab1464f708db6265d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2.scope/container/memory.events
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.612126717 +0000 UTC m=+0.152277247 container died 8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:08:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1747c829a9d554df62d39342802e3d260a49227d24244941b02fb93bade689c1-merged.mount: Deactivated successfully.
Nov 25 11:08:49 np0005535469 podman[213782]: 2025-11-25 16:08:49.648742377 +0000 UTC m=+0.188892897 container remove 8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:08:49 np0005535469 systemd[1]: libpod-conmon-8d6ab1464f708db6265d7d3eef32512fad432e423739745282f9417ea387b6d2.scope: Deactivated successfully.
Nov 25 11:08:49 np0005535469 podman[213887]: 2025-11-25 16:08:49.804874708 +0000 UTC m=+0.041478193 container create 16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rosalind, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:08:49 np0005535469 systemd[1]: Started libpod-conmon-16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8.scope.
Nov 25 11:08:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:08:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c1c0dcc39a6e0e5774580b82628bf7339caaca62301970610eff1335822ec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c1c0dcc39a6e0e5774580b82628bf7339caaca62301970610eff1335822ec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c1c0dcc39a6e0e5774580b82628bf7339caaca62301970610eff1335822ec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c1c0dcc39a6e0e5774580b82628bf7339caaca62301970610eff1335822ec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:49 np0005535469 podman[213887]: 2025-11-25 16:08:49.789828228 +0000 UTC m=+0.026431743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:08:49 np0005535469 podman[213887]: 2025-11-25 16:08:49.889508928 +0000 UTC m=+0.126112443 container init 16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:08:49 np0005535469 podman[213887]: 2025-11-25 16:08:49.899242744 +0000 UTC m=+0.135846259 container start 16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:08:49 np0005535469 podman[213887]: 2025-11-25 16:08:49.906892962 +0000 UTC m=+0.143496477 container attach 16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:08:50 np0005535469 python3.9[213995]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]: {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:    "0": [
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:        {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "devices": [
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "/dev/loop3"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            ],
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_name": "ceph_lv0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_size": "21470642176",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "name": "ceph_lv0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "tags": {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cluster_name": "ceph",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.crush_device_class": "",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.encrypted": "0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osd_id": "0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.type": "block",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.vdo": "0"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            },
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "type": "block",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "vg_name": "ceph_vg0"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:        }
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:    ],
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:    "1": [
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:        {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "devices": [
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "/dev/loop4"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            ],
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_name": "ceph_lv1",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_size": "21470642176",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "name": "ceph_lv1",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "tags": {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cluster_name": "ceph",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.crush_device_class": "",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.encrypted": "0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osd_id": "1",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.type": "block",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.vdo": "0"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            },
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "type": "block",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "vg_name": "ceph_vg1"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:        }
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:    ],
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:    "2": [
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:        {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "devices": [
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "/dev/loop5"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            ],
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_name": "ceph_lv2",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_size": "21470642176",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "name": "ceph_lv2",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "tags": {
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.cluster_name": "ceph",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.crush_device_class": "",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.encrypted": "0",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osd_id": "2",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.type": "block",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:                "ceph.vdo": "0"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            },
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "type": "block",
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:            "vg_name": "ceph_vg2"
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:        }
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]:    ]
Nov 25 11:08:50 np0005535469 laughing_rosalind[213938]: }
Nov 25 11:08:50 np0005535469 systemd[1]: libpod-16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8.scope: Deactivated successfully.
Nov 25 11:08:50 np0005535469 podman[213887]: 2025-11-25 16:08:50.668739396 +0000 UTC m=+0.905342891 container died 16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rosalind, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:08:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-79c1c0dcc39a6e0e5774580b82628bf7339caaca62301970610eff1335822ec7-merged.mount: Deactivated successfully.
Nov 25 11:08:50 np0005535469 python3.9[214148]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:50 np0005535469 podman[213887]: 2025-11-25 16:08:50.72051904 +0000 UTC m=+0.957122535 container remove 16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rosalind, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:08:50 np0005535469 systemd[1]: libpod-conmon-16f9b02648d0c0524df2fc66cf4e5f87afa4d4b4bc3bf249762cc8da15d404d8.scope: Deactivated successfully.
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:08:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.300671825 +0000 UTC m=+0.041347820 container create 59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:08:51 np0005535469 python3.9[214424]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:51 np0005535469 systemd[1]: Started libpod-conmon-59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e.scope.
Nov 25 11:08:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.28438915 +0000 UTC m=+0.025065175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.382238061 +0000 UTC m=+0.122914076 container init 59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.388974094 +0000 UTC m=+0.129650089 container start 59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:08:51 np0005535469 sleepy_keldysh[214474]: 167 167
Nov 25 11:08:51 np0005535469 systemd[1]: libpod-59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e.scope: Deactivated successfully.
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.395794871 +0000 UTC m=+0.136470886 container attach 59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.396741957 +0000 UTC m=+0.137417962 container died 59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:08:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-769895ba0126c37c479bebc442485b699ed36a24421168461d4b1a5f1dc121a0-merged.mount: Deactivated successfully.
Nov 25 11:08:51 np0005535469 podman[214458]: 2025-11-25 16:08:51.442060303 +0000 UTC m=+0.182736308 container remove 59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:08:51 np0005535469 systemd[1]: libpod-conmon-59d8361a55777505a88c2fc9b3205a576dac971030fd5a096795780547faa49e.scope: Deactivated successfully.
Nov 25 11:08:51 np0005535469 podman[214576]: 2025-11-25 16:08:51.58922339 +0000 UTC m=+0.037988108 container create 4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:08:51 np0005535469 systemd[1]: Started libpod-conmon-4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88.scope.
Nov 25 11:08:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:08:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bdabefb3b44981d3ad384c2287e891117f74d1bcf668f0fa5c4471bf0010fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bdabefb3b44981d3ad384c2287e891117f74d1bcf668f0fa5c4471bf0010fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bdabefb3b44981d3ad384c2287e891117f74d1bcf668f0fa5c4471bf0010fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02bdabefb3b44981d3ad384c2287e891117f74d1bcf668f0fa5c4471bf0010fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:08:51 np0005535469 podman[214576]: 2025-11-25 16:08:51.572104253 +0000 UTC m=+0.020868991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:08:51 np0005535469 podman[214576]: 2025-11-25 16:08:51.674672332 +0000 UTC m=+0.123437060 container init 4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:08:51 np0005535469 podman[214576]: 2025-11-25 16:08:51.684007528 +0000 UTC m=+0.132772246 container start 4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:08:51 np0005535469 podman[214576]: 2025-11-25 16:08:51.686827715 +0000 UTC m=+0.135592473 container attach 4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:08:51 np0005535469 python3.9[214671]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:52 np0005535469 python3.9[214823]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]: {
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "osd_id": 1,
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "type": "bluestore"
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:    },
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "osd_id": 2,
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "type": "bluestore"
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:    },
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "osd_id": 0,
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:        "type": "bluestore"
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]:    }
Nov 25 11:08:52 np0005535469 interesting_nightingale[214637]: }
Nov 25 11:08:52 np0005535469 systemd[1]: libpod-4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88.scope: Deactivated successfully.
Nov 25 11:08:52 np0005535469 podman[214576]: 2025-11-25 16:08:52.713830265 +0000 UTC m=+1.162594993 container died 4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:08:52 np0005535469 systemd[1]: libpod-4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88.scope: Consumed 1.032s CPU time.
Nov 25 11:08:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-02bdabefb3b44981d3ad384c2287e891117f74d1bcf668f0fa5c4471bf0010fe-merged.mount: Deactivated successfully.
Nov 25 11:08:52 np0005535469 podman[214576]: 2025-11-25 16:08:52.777712939 +0000 UTC m=+1.226477657 container remove 4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:08:52 np0005535469 systemd[1]: libpod-conmon-4fea901353a30653ba3a04a39fa561fcd5664be407fb89571ec6aef54e87af88.scope: Deactivated successfully.
Nov 25 11:08:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:08:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:08:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:08:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:08:52 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 99e151c0-c50d-4d0f-9885-0c4415d8cfd1 does not exist
Nov 25 11:08:52 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ca5bce99-a81b-4e7e-b38d-a0fc7fe62ff7 does not exist
Nov 25 11:08:53 np0005535469 python3.9[215067]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:08:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:08:53 np0005535469 python3.9[215219]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:54 np0005535469 python3.9[215371]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:08:55 np0005535469 python3.9[215523]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:08:55 np0005535469 python3.9[215675]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:08:55 np0005535469 systemd[1]: Reloading.
Nov 25 11:08:55 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:08:55 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:08:56 np0005535469 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 11:08:56 np0005535469 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 11:08:56 np0005535469 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 11:08:56 np0005535469 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 11:08:56 np0005535469 systemd[1]: Starting libvirt logging daemon...
Nov 25 11:08:56 np0005535469 systemd[1]: Started libvirt logging daemon.
Nov 25 11:08:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:56 np0005535469 python3.9[215867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:08:56 np0005535469 systemd[1]: Reloading.
Nov 25 11:08:57 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:08:57 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:08:57 np0005535469 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 11:08:57 np0005535469 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 11:08:57 np0005535469 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 11:08:57 np0005535469 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 11:08:57 np0005535469 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 11:08:57 np0005535469 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 11:08:57 np0005535469 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 11:08:57 np0005535469 systemd[1]: Started libvirt nodedev daemon.
Nov 25 11:08:58 np0005535469 python3.9[216083]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:08:58 np0005535469 systemd[1]: Reloading.
Nov 25 11:08:58 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:08:58 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:08:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:08:58 np0005535469 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 11:08:58 np0005535469 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 11:08:58 np0005535469 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 11:08:58 np0005535469 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 11:08:58 np0005535469 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 11:08:58 np0005535469 systemd[1]: Starting libvirt proxy daemon...
Nov 25 11:08:58 np0005535469 systemd[1]: Started libvirt proxy daemon.
Nov 25 11:08:58 np0005535469 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 11:08:58 np0005535469 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 11:08:58 np0005535469 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 11:08:59 np0005535469 python3.9[216301]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:08:59 np0005535469 systemd[1]: Reloading.
Nov 25 11:08:59 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:08:59 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:08:59 np0005535469 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 11:08:59 np0005535469 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 11:08:59 np0005535469 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 11:08:59 np0005535469 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 11:08:59 np0005535469 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 11:08:59 np0005535469 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 11:08:59 np0005535469 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 11:08:59 np0005535469 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 11:08:59 np0005535469 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 11:08:59 np0005535469 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 11:08:59 np0005535469 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 11:08:59 np0005535469 systemd[1]: Started libvirt QEMU daemon.
Nov 25 11:08:59 np0005535469 setroubleshoot[216119]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 62e26a07-2c31-41e8-8159-5d15e8d8e99f
Nov 25 11:08:59 np0005535469 setroubleshoot[216119]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Nov 25 11:08:59 np0005535469 setroubleshoot[216119]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 62e26a07-2c31-41e8-8159-5d15e8d8e99f
Nov 25 11:08:59 np0005535469 setroubleshoot[216119]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Nov 25 11:08:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:00 np0005535469 python3.9[216519]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:09:00 np0005535469 systemd[1]: Reloading.
Nov 25 11:09:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:00 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:09:00 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:09:00 np0005535469 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 11:09:00 np0005535469 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 11:09:00 np0005535469 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 11:09:00 np0005535469 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 11:09:00 np0005535469 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 11:09:00 np0005535469 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 11:09:00 np0005535469 systemd[1]: Starting libvirt secret daemon...
Nov 25 11:09:00 np0005535469 systemd[1]: Started libvirt secret daemon.
Nov 25 11:09:01 np0005535469 python3.9[216732]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:02 np0005535469 python3.9[216884]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 11:09:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:02 np0005535469 python3.9[217036]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:03 np0005535469 python3.9[217190]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 11:09:04 np0005535469 python3.9[217340]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:04 np0005535469 python3.9[217461]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086943.7284944-1133-217084907217725/.source.xml follow=False _original_basename=secret.xml.j2 checksum=d78e40eb0aba2a925654547c1d2cb784414f4c3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:05 np0005535469 python3.9[217613]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine d82baeae-c742-50a4-b8f6-b5257c8a2c92#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:06 np0005535469 python3.9[217775]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:08 np0005535469 python3.9[218238]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:08 np0005535469 python3.9[218390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:09 np0005535469 python3.9[218513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086948.4616637-1188-79588340008631/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:09 np0005535469 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 11:09:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:09 np0005535469 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:09:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:10 np0005535469 python3.9[218665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:10 np0005535469 python3.9[218817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:11 np0005535469 python3.9[218895]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:12 np0005535469 python3.9[219047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:12 np0005535469 python3.9[219125]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.cb8fpyvb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:13 np0005535469 python3.9[219277]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:13 np0005535469 podman[219327]: 2025-11-25 16:09:13.477679974 +0000 UTC m=+0.101735687 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:09:13.573 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:09:13.574 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:09:13.574 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:09:13 np0005535469 python3.9[219373]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:14 np0005535469 python3.9[219534]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:15 np0005535469 python3[219687]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 11:09:15 np0005535469 python3.9[219839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:16 np0005535469 python3.9[219917]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:16 np0005535469 podman[219994]: 2025-11-25 16:09:16.629854561 +0000 UTC m=+0.051896178 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 11:09:16 np0005535469 python3.9[220088]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:17 np0005535469 python3.9[220166]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:18 np0005535469 python3.9[220318]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:18 np0005535469 python3.9[220396]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:19 np0005535469 python3.9[220548]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:19 np0005535469 python3.9[220626]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:20 np0005535469 python3.9[220778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:21 np0005535469 python3.9[220903]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764086960.1389465-1313-32640212150668/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:22 np0005535469 python3.9[221055]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:23 np0005535469 python3.9[221207]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:24 np0005535469 python3.9[221362]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:24 np0005535469 python3.9[221514]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:25 np0005535469 python3.9[221667]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:09:26 np0005535469 python3.9[221821]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:27 np0005535469 python3.9[221976]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:27 np0005535469 python3.9[222128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:28 np0005535469 python3.9[222251]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086967.3683693-1385-281132725730842/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:29 np0005535469 python3.9[222403]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:29 np0005535469 python3.9[222526]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086968.7358873-1400-252876243719595/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:30 np0005535469 python3.9[222678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:31 np0005535469 python3.9[222801]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086969.9999378-1415-42829904528844/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:31 np0005535469 python3.9[222953]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:09:31 np0005535469 systemd[1]: Reloading.
Nov 25 11:09:31 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:09:31 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:09:32 np0005535469 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 11:09:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:33 np0005535469 python3.9[223144]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 11:09:33 np0005535469 systemd[1]: Reloading.
Nov 25 11:09:33 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:09:33 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:09:33 np0005535469 systemd[1]: Reloading.
Nov 25 11:09:33 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:09:33 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:09:34 np0005535469 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 11:09:34 np0005535469 systemd[1]: session-49.scope: Consumed 3min 21.312s CPU time.
Nov 25 11:09:34 np0005535469 systemd-logind[791]: Session 49 logged out. Waiting for processes to exit.
Nov 25 11:09:34 np0005535469 systemd-logind[791]: Removed session 49.
Nov 25 11:09:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:39 np0005535469 systemd-logind[791]: New session 50 of user zuul.
Nov 25 11:09:39 np0005535469 systemd[1]: Started Session 50 of User zuul.
Nov 25 11:09:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:09:39
Nov 25 11:09:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:09:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:09:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Nov 25 11:09:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:09:40 np0005535469 python3.9[223392]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:09:41 np0005535469 python3.9[223546]: ansible-ansible.builtin.service_facts Invoked
Nov 25 11:09:41 np0005535469 network[223563]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 11:09:41 np0005535469 network[223564]: 'network-scripts' will be removed from distribution in near future.
Nov 25 11:09:41 np0005535469 network[223565]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 11:09:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:43 np0005535469 podman[223628]: 2025-11-25 16:09:43.621500241 +0000 UTC m=+0.093494163 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:45 np0005535469 python3.9[223863]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 11:09:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:46 np0005535469 python3.9[223947]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:09:47 np0005535469 podman[223949]: 2025-11-25 16:09:47.620076648 +0000 UTC m=+0.044018303 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:09:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:09:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:09:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:52 np0005535469 python3.9[224121]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:09:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 668e812e-ec75-4b6e-af15-8da3fbc3d702 does not exist
Nov 25 11:09:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 77d6de01-0420-429a-bcdc-8f794c71cf1a does not exist
Nov 25 11:09:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cd8391bd-4f22-40b2-994e-526116fd6e15 does not exist
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:09:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:09:53 np0005535469 python3.9[224394]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.181809114 +0000 UTC m=+0.052058571 container create 31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 25 11:09:54 np0005535469 systemd[1]: Started libpod-conmon-31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad.scope.
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.151126017 +0000 UTC m=+0.021375494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:09:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.296256298 +0000 UTC m=+0.166505775 container init 31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.30328646 +0000 UTC m=+0.173535917 container start 31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:54 np0005535469 stupefied_meninsky[224712]: 167 167
Nov 25 11:09:54 np0005535469 systemd[1]: libpod-31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad.scope: Deactivated successfully.
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.31830265 +0000 UTC m=+0.188552127 container attach 31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.318740772 +0000 UTC m=+0.188990229 container died 31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:09:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-71d5474e4f3099b76fa126d521d628b77e174a7a59e33e2a1e0191532e0ac825-merged.mount: Deactivated successfully.
Nov 25 11:09:54 np0005535469 podman[224645]: 2025-11-25 16:09:54.377012262 +0000 UTC m=+0.247261719 container remove 31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_meninsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:54 np0005535469 systemd[1]: libpod-conmon-31f238572a5b0ce3dd950081a2f26ab4cdd6365fcb1b68a1ac8a973d0fe218ad.scope: Deactivated successfully.
Nov 25 11:09:54 np0005535469 python3.9[224715]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:09:54 np0005535469 podman[224736]: 2025-11-25 16:09:54.511930185 +0000 UTC m=+0.022481245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:09:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:09:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:09:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:09:54 np0005535469 podman[224736]: 2025-11-25 16:09:54.720978191 +0000 UTC m=+0.231529221 container create 8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:09:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:54 np0005535469 systemd[1]: Started libpod-conmon-8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171.scope.
Nov 25 11:09:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:09:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6587685e4bb52e600cdbc1b5447bc77fd533f6b805af31305612000232a5515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6587685e4bb52e600cdbc1b5447bc77fd533f6b805af31305612000232a5515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6587685e4bb52e600cdbc1b5447bc77fd533f6b805af31305612000232a5515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6587685e4bb52e600cdbc1b5447bc77fd533f6b805af31305612000232a5515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6587685e4bb52e600cdbc1b5447bc77fd533f6b805af31305612000232a5515/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:55 np0005535469 podman[224736]: 2025-11-25 16:09:55.019740726 +0000 UTC m=+0.530291766 container init 8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:09:55 np0005535469 podman[224736]: 2025-11-25 16:09:55.033028957 +0000 UTC m=+0.543579987 container start 8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:55 np0005535469 python3.9[224901]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:09:55 np0005535469 podman[224736]: 2025-11-25 16:09:55.05878003 +0000 UTC m=+0.569331050 container attach 8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:09:55 np0005535469 python3.9[225061]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:09:56 np0005535469 objective_roentgen[224904]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:09:56 np0005535469 objective_roentgen[224904]: --> relative data size: 1.0
Nov 25 11:09:56 np0005535469 objective_roentgen[224904]: --> All data devices are unavailable
Nov 25 11:09:56 np0005535469 systemd[1]: libpod-8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171.scope: Deactivated successfully.
Nov 25 11:09:56 np0005535469 podman[224736]: 2025-11-25 16:09:56.088276329 +0000 UTC m=+1.598827359 container died 8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:09:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c6587685e4bb52e600cdbc1b5447bc77fd533f6b805af31305612000232a5515-merged.mount: Deactivated successfully.
Nov 25 11:09:56 np0005535469 podman[224736]: 2025-11-25 16:09:56.170883923 +0000 UTC m=+1.681434953 container remove 8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:09:56 np0005535469 systemd[1]: libpod-conmon-8da7dbea0ef3b46f6a5ffa1ae0b453f27027491888e0d3a7581af0744a576171.scope: Deactivated successfully.
Nov 25 11:09:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:56 np0005535469 python3.9[225227]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764086995.2128015-95-178435162543356/.source.iscsi _original_basename=.5s6hlmx2 follow=False checksum=9e2444b2e10d234e3797455c5272d1df0507338d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:56 np0005535469 podman[225439]: 2025-11-25 16:09:56.776318759 +0000 UTC m=+0.042947764 container create 7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:09:56 np0005535469 systemd[1]: Started libpod-conmon-7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084.scope.
Nov 25 11:09:56 np0005535469 podman[225439]: 2025-11-25 16:09:56.756513087 +0000 UTC m=+0.023142112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:09:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:09:56 np0005535469 podman[225439]: 2025-11-25 16:09:56.869878142 +0000 UTC m=+0.136507167 container init 7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:09:56 np0005535469 podman[225439]: 2025-11-25 16:09:56.881649733 +0000 UTC m=+0.148278738 container start 7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:09:56 np0005535469 podman[225439]: 2025-11-25 16:09:56.884712967 +0000 UTC m=+0.151342002 container attach 7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_leavitt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:09:56 np0005535469 pedantic_leavitt[225456]: 167 167
Nov 25 11:09:56 np0005535469 systemd[1]: libpod-7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084.scope: Deactivated successfully.
Nov 25 11:09:56 np0005535469 podman[225439]: 2025-11-25 16:09:56.887118782 +0000 UTC m=+0.153747787 container died 7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:09:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bf8beb7ec8bb7074c7b2f8694336ec2312d50f1c476f9312bb27bae8063013de-merged.mount: Deactivated successfully.
Nov 25 11:09:57 np0005535469 podman[225439]: 2025-11-25 16:09:57.027882244 +0000 UTC m=+0.294511249 container remove 7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:09:57 np0005535469 systemd[1]: libpod-conmon-7d70c4989dc11bfb54ec84717b9d5222094ec5b0549d6f35ea87cacc5c444084.scope: Deactivated successfully.
Nov 25 11:09:57 np0005535469 python3.9[225548]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:57 np0005535469 podman[225557]: 2025-11-25 16:09:57.197877824 +0000 UTC m=+0.041012871 container create 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:57 np0005535469 systemd[1]: Started libpod-conmon-075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37.scope.
Nov 25 11:09:57 np0005535469 podman[225557]: 2025-11-25 16:09:57.179172323 +0000 UTC m=+0.022307390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:09:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:09:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9802f0b552e10a5e0872efe96186064f83fff4b402fe68d6f30c350a0c4e7bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9802f0b552e10a5e0872efe96186064f83fff4b402fe68d6f30c350a0c4e7bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9802f0b552e10a5e0872efe96186064f83fff4b402fe68d6f30c350a0c4e7bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9802f0b552e10a5e0872efe96186064f83fff4b402fe68d6f30c350a0c4e7bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:57 np0005535469 podman[225557]: 2025-11-25 16:09:57.289747262 +0000 UTC m=+0.132882329 container init 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:09:57 np0005535469 podman[225557]: 2025-11-25 16:09:57.297705359 +0000 UTC m=+0.140840406 container start 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:09:57 np0005535469 podman[225557]: 2025-11-25 16:09:57.301554644 +0000 UTC m=+0.144689691 container attach 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:09:58 np0005535469 bold_carver[225581]: {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:    "0": [
Nov 25 11:09:58 np0005535469 bold_carver[225581]:        {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "devices": [
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "/dev/loop3"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            ],
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_name": "ceph_lv0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_size": "21470642176",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "name": "ceph_lv0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "tags": {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cluster_name": "ceph",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.crush_device_class": "",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.encrypted": "0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osd_id": "0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.type": "block",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.vdo": "0"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            },
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "type": "block",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "vg_name": "ceph_vg0"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:        }
Nov 25 11:09:58 np0005535469 bold_carver[225581]:    ],
Nov 25 11:09:58 np0005535469 bold_carver[225581]:    "1": [
Nov 25 11:09:58 np0005535469 bold_carver[225581]:        {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "devices": [
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "/dev/loop4"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            ],
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_name": "ceph_lv1",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_size": "21470642176",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "name": "ceph_lv1",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "tags": {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cluster_name": "ceph",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.crush_device_class": "",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.encrypted": "0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osd_id": "1",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.type": "block",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.vdo": "0"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            },
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "type": "block",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "vg_name": "ceph_vg1"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:        }
Nov 25 11:09:58 np0005535469 bold_carver[225581]:    ],
Nov 25 11:09:58 np0005535469 bold_carver[225581]:    "2": [
Nov 25 11:09:58 np0005535469 bold_carver[225581]:        {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "devices": [
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "/dev/loop5"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            ],
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_name": "ceph_lv2",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_size": "21470642176",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "name": "ceph_lv2",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "tags": {
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.cluster_name": "ceph",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.crush_device_class": "",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.encrypted": "0",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osd_id": "2",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.type": "block",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:                "ceph.vdo": "0"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            },
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "type": "block",
Nov 25 11:09:58 np0005535469 bold_carver[225581]:            "vg_name": "ceph_vg2"
Nov 25 11:09:58 np0005535469 bold_carver[225581]:        }
Nov 25 11:09:58 np0005535469 bold_carver[225581]:    ]
Nov 25 11:09:58 np0005535469 bold_carver[225581]: }
Nov 25 11:09:58 np0005535469 podman[225557]: 2025-11-25 16:09:58.049398856 +0000 UTC m=+0.892533913 container died 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:09:58 np0005535469 systemd[1]: libpod-075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37.scope: Deactivated successfully.
Nov 25 11:09:58 np0005535469 python3.9[225730]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:09:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c9802f0b552e10a5e0872efe96186064f83fff4b402fe68d6f30c350a0c4e7bc-merged.mount: Deactivated successfully.
Nov 25 11:09:58 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:09:58 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:09:58 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:09:58 np0005535469 podman[225557]: 2025-11-25 16:09:58.116379424 +0000 UTC m=+0.959514491 container remove 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:09:58 np0005535469 systemd[1]: libpod-conmon-075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37.scope: Deactivated successfully.
Nov 25 11:09:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.697463644 +0000 UTC m=+0.037744331 container create 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:09:58 np0005535469 systemd[1]: Started libpod-conmon-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope.
Nov 25 11:09:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.774573669 +0000 UTC m=+0.114854376 container init 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.681720854 +0000 UTC m=+0.022001561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.783668347 +0000 UTC m=+0.123949044 container start 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.787370227 +0000 UTC m=+0.127650914 container attach 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:09:58 np0005535469 competent_beaver[226014]: 167 167
Nov 25 11:09:58 np0005535469 systemd[1]: libpod-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope: Deactivated successfully.
Nov 25 11:09:58 np0005535469 conmon[226014]: conmon 293314731c903ffdbde6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope/container/memory.events
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.790323818 +0000 UTC m=+0.130604505 container died 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:09:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-abe6caf93d954f6449c7f160b494fac4113788f10a5fb30efdfce59efee8a789-merged.mount: Deactivated successfully.
Nov 25 11:09:58 np0005535469 podman[225967]: 2025-11-25 16:09:58.829598511 +0000 UTC m=+0.169879198 container remove 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:09:58 np0005535469 systemd[1]: libpod-conmon-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope: Deactivated successfully.
Nov 25 11:09:58 np0005535469 podman[226081]: 2025-11-25 16:09:58.996511616 +0000 UTC m=+0.041093753 container create e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:09:59 np0005535469 systemd[1]: Started libpod-conmon-e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606.scope.
Nov 25 11:09:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:09:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:09:59 np0005535469 podman[226081]: 2025-11-25 16:09:59.072152931 +0000 UTC m=+0.116735058 container init e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:09:59 np0005535469 podman[226081]: 2025-11-25 16:09:58.978446533 +0000 UTC m=+0.023028690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:09:59 np0005535469 podman[226081]: 2025-11-25 16:09:59.080349214 +0000 UTC m=+0.124931341 container start e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:09:59 np0005535469 podman[226081]: 2025-11-25 16:09:59.083594303 +0000 UTC m=+0.128176450 container attach e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:09:59 np0005535469 python3.9[226075]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:09:59 np0005535469 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.889494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999889580, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 253, "total_data_size": 2458656, "memory_usage": 2491128, "flush_reason": "Manual Compaction"}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999903479, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1400692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11873, "largest_seqno": 13393, "table_properties": {"data_size": 1395509, "index_size": 2451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13064, "raw_average_key_size": 20, "raw_value_size": 1384147, "raw_average_value_size": 2129, "num_data_blocks": 113, "num_entries": 650, "num_filter_entries": 650, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086834, "oldest_key_time": 1764086834, "file_creation_time": 1764086999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13996 microseconds, and 7412 cpu microseconds.
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.903516) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1400692 bytes OK
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.903533) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.905690) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.905706) EVENT_LOG_v1 {"time_micros": 1764086999905701, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.905725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2452028, prev total WAL file size 2452028, number of live WAL files 2.
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.906503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1367KB)], [29(8085KB)]
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999906565, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9680411, "oldest_snapshot_seqno": -1}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4012 keys, 7366962 bytes, temperature: kUnknown
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999962904, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7366962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7338550, "index_size": 17294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 96353, "raw_average_key_size": 24, "raw_value_size": 7264537, "raw_average_value_size": 1810, "num_data_blocks": 752, "num_entries": 4012, "num_filter_entries": 4012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764086999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.963146) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7366962 bytes
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.964376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.4 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.3) OK, records in: 4452, records dropped: 440 output_compression: NoCompression
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.964392) EVENT_LOG_v1 {"time_micros": 1764086999964383, "job": 12, "event": "compaction_finished", "compaction_time_micros": 56150, "compaction_time_cpu_micros": 16020, "output_level": 6, "num_output_files": 1, "total_output_size": 7366962, "num_input_records": 4452, "num_output_records": 4012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999964673, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999965885, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.906412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:09:59 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]: {
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "osd_id": 1,
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "type": "bluestore"
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:    },
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "osd_id": 2,
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "type": "bluestore"
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:    },
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "osd_id": 0,
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:        "type": "bluestore"
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]:    }
Nov 25 11:09:59 np0005535469 goofy_ellis[226098]: }
Nov 25 11:10:00 np0005535469 python3.9[226260]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:10:00 np0005535469 systemd[1]: libpod-e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606.scope: Deactivated successfully.
Nov 25 11:10:00 np0005535469 podman[226081]: 2025-11-25 16:10:00.019255221 +0000 UTC m=+1.063837348 container died e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:10:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62-merged.mount: Deactivated successfully.
Nov 25 11:10:00 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:00 np0005535469 podman[226081]: 2025-11-25 16:10:00.072298769 +0000 UTC m=+1.116880896 container remove e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:10:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:10:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:10:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:10:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:10:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fbbd77ce-3655-42aa-9e92-1c4d81f4814c does not exist
Nov 25 11:10:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9b45cf01-092a-4150-9248-f84004e1f464 does not exist
Nov 25 11:10:00 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:00 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:00 np0005535469 systemd[1]: libpod-conmon-e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606.scope: Deactivated successfully.
Nov 25 11:10:00 np0005535469 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 11:10:00 np0005535469 systemd[1]: Starting Open-iSCSI...
Nov 25 11:10:00 np0005535469 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 11:10:00 np0005535469 systemd[1]: Started Open-iSCSI.
Nov 25 11:10:00 np0005535469 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 11:10:00 np0005535469 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 11:10:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:10:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:10:01 np0005535469 python3.9[226546]: ansible-ansible.builtin.service_facts Invoked
Nov 25 11:10:01 np0005535469 network[226563]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 11:10:01 np0005535469 network[226564]: 'network-scripts' will be removed from distribution in near future.
Nov 25 11:10:01 np0005535469 network[226565]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 11:10:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:06 np0005535469 python3.9[226837]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 11:10:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:06 np0005535469 python3.9[226989]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 11:10:07 np0005535469 python3.9[227145]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:08 np0005535469 python3.9[227268]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087007.126995-172-55631441861680/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:09 np0005535469 python3.9[227420]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:10:10 np0005535469 python3.9[227572]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:10:10 np0005535469 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 11:10:10 np0005535469 systemd[1]: Stopped Load Kernel Modules.
Nov 25 11:10:10 np0005535469 systemd[1]: Stopping Load Kernel Modules...
Nov 25 11:10:10 np0005535469 systemd[1]: Starting Load Kernel Modules...
Nov 25 11:10:10 np0005535469 systemd[1]: Finished Load Kernel Modules.
Nov 25 11:10:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:10 np0005535469 python3.9[227728]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:11 np0005535469 python3.9[227880]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:10:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:12 np0005535469 python3.9[228032]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:10:13 np0005535469 python3.9[228184]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:10:13.574 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:10:13.576 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:10:13.576 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:10:13 np0005535469 python3.9[228307]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087012.6032295-230-148085352226994/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:14 np0005535469 podman[228431]: 2025-11-25 16:10:14.269283962 +0000 UTC m=+0.106877728 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:10:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:14 np0005535469 python3.9[228475]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:10:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:15 np0005535469 python3.9[228638]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:15 np0005535469 python3.9[228790]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:16 np0005535469 python3.9[228942]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:17 np0005535469 python3.9[229094]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:17 np0005535469 podman[229246]: 2025-11-25 16:10:17.737775541 +0000 UTC m=+0.057376657 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:10:17 np0005535469 python3.9[229247]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:18 np0005535469 python3.9[229416]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:18 np0005535469 python3.9[229568]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:19 np0005535469 python3.9[229720]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:10:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:20 np0005535469 python3.9[229874]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:21 np0005535469 python3.9[230026]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:21 np0005535469 python3.9[230178]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:22 np0005535469 python3.9[230256]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:22 np0005535469 python3.9[230408]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:23 np0005535469 python3.9[230486]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:23 np0005535469 python3.9[230638]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:24 np0005535469 python3.9[230790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:25 np0005535469 python3.9[230868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:25 np0005535469 python3.9[231020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:26 np0005535469 python3.9[231098]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:27 np0005535469 python3.9[231250]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:10:27 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:27 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:27 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:28 np0005535469 python3.9[231438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:28 np0005535469 python3.9[231516]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:29 np0005535469 python3.9[231668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:29 np0005535469 python3.9[231746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:30 np0005535469 python3.9[231899]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:10:30 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:30 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:30 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:31 np0005535469 systemd[1]: Starting Create netns directory...
Nov 25 11:10:31 np0005535469 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 11:10:31 np0005535469 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 11:10:31 np0005535469 systemd[1]: Finished Create netns directory.
Nov 25 11:10:32 np0005535469 python3.9[232092]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:32 np0005535469 python3.9[232244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:33 np0005535469 python3.9[232367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087032.238688-437-229986239869302/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:34 np0005535469 python3.9[232519]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:10:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:34 np0005535469 python3.9[232671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:35 np0005535469 python3.9[232794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087034.302232-462-52544286662836/.source.json _original_basename=.gy4z6kjq follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:35 np0005535469 python3.9[232946]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:37 np0005535469 python3.9[233373]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 11:10:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:38 np0005535469 python3.9[233525]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 11:10:39 np0005535469 python3.9[233677]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 11:10:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:10:39
Nov 25 11:10:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:10:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:10:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Nov 25 11:10:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:10:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:41 np0005535469 python3[233855]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 11:10:42 np0005535469 podman[233869]: 2025-11-25 16:10:42.25749629 +0000 UTC m=+1.027557252 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 11:10:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:42 np0005535469 podman[233926]: 2025-11-25 16:10:42.403273251 +0000 UTC m=+0.050521404 container create 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 25 11:10:42 np0005535469 podman[233926]: 2025-11-25 16:10:42.376374145 +0000 UTC m=+0.023622338 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 11:10:42 np0005535469 python3[233855]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 11:10:43 np0005535469 python3.9[234117]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:10:43 np0005535469 python3.9[234271]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:44 np0005535469 python3.9[234347]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:10:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:44 np0005535469 podman[234400]: 2025-11-25 16:10:44.704493888 +0000 UTC m=+0.120430669 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:10:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:45 np0005535469 python3.9[234524]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764087044.4149652-550-124698999493805/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:45 np0005535469 python3.9[234600]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:10:45 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:45 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:45 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:46 np0005535469 python3.9[234712]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:10:46 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:46 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:46 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:46 np0005535469 systemd[1]: Starting multipathd container...
Nov 25 11:10:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:10:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 11:10:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 11:10:46 np0005535469 systemd[1]: Started /usr/bin/podman healthcheck run 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.
Nov 25 11:10:46 np0005535469 podman[234752]: 2025-11-25 16:10:46.966327644 +0000 UTC m=+0.102210528 container init 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 11:10:46 np0005535469 multipathd[234768]: + sudo -E kolla_set_configs
Nov 25 11:10:46 np0005535469 podman[234752]: 2025-11-25 16:10:46.989020556 +0000 UTC m=+0.124903410 container start 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd)
Nov 25 11:10:46 np0005535469 podman[234752]: multipathd
Nov 25 11:10:46 np0005535469 systemd[1]: Started multipathd container.
Nov 25 11:10:47 np0005535469 multipathd[234768]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 11:10:47 np0005535469 multipathd[234768]: INFO:__main__:Validating config file
Nov 25 11:10:47 np0005535469 multipathd[234768]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 11:10:47 np0005535469 multipathd[234768]: INFO:__main__:Writing out command to execute
Nov 25 11:10:47 np0005535469 multipathd[234768]: ++ cat /run_command
Nov 25 11:10:47 np0005535469 multipathd[234768]: + CMD='/usr/sbin/multipathd -d'
Nov 25 11:10:47 np0005535469 multipathd[234768]: + ARGS=
Nov 25 11:10:47 np0005535469 multipathd[234768]: + sudo kolla_copy_cacerts
Nov 25 11:10:47 np0005535469 podman[234775]: 2025-11-25 16:10:47.064388429 +0000 UTC m=+0.059158906 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 11:10:47 np0005535469 systemd[1]: 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-4d17e05f0b7691ba.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 11:10:47 np0005535469 systemd[1]: 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-4d17e05f0b7691ba.service: Failed with result 'exit-code'.
Nov 25 11:10:47 np0005535469 multipathd[234768]: + [[ ! -n '' ]]
Nov 25 11:10:47 np0005535469 multipathd[234768]: + . kolla_extend_start
Nov 25 11:10:47 np0005535469 multipathd[234768]: Running command: '/usr/sbin/multipathd -d'
Nov 25 11:10:47 np0005535469 multipathd[234768]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 11:10:47 np0005535469 multipathd[234768]: + umask 0022
Nov 25 11:10:47 np0005535469 multipathd[234768]: + exec /usr/sbin/multipathd -d
Nov 25 11:10:47 np0005535469 multipathd[234768]: 3504.738920 | --------start up--------
Nov 25 11:10:47 np0005535469 multipathd[234768]: 3504.738941 | read /etc/multipath.conf
Nov 25 11:10:47 np0005535469 multipathd[234768]: 3504.744411 | path checkers start up
Nov 25 11:10:47 np0005535469 python3.9[234957]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:10:48 np0005535469 podman[235083]: 2025-11-25 16:10:48.035590619 +0000 UTC m=+0.055007054 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 11:10:48 np0005535469 python3.9[235130]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:10:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:49 np0005535469 python3.9[235295]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:10:49 np0005535469 systemd[1]: Stopping multipathd container...
Nov 25 11:10:49 np0005535469 multipathd[234768]: 3506.878211 | exit (signal)
Nov 25 11:10:49 np0005535469 multipathd[234768]: 3506.878267 | --------shut down-------
Nov 25 11:10:49 np0005535469 systemd[1]: libpod-917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.scope: Deactivated successfully.
Nov 25 11:10:49 np0005535469 podman[235299]: 2025-11-25 16:10:49.263604136 +0000 UTC m=+0.163639925 container died 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 11:10:49 np0005535469 systemd[1]: 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-4d17e05f0b7691ba.timer: Deactivated successfully.
Nov 25 11:10:49 np0005535469 systemd[1]: Stopped /usr/bin/podman healthcheck run 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.
Nov 25 11:10:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-userdata-shm.mount: Deactivated successfully.
Nov 25 11:10:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006-merged.mount: Deactivated successfully.
Nov 25 11:10:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:49 np0005535469 podman[235299]: 2025-11-25 16:10:49.97572893 +0000 UTC m=+0.875764719 container cleanup 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 11:10:49 np0005535469 podman[235299]: multipathd
Nov 25 11:10:50 np0005535469 podman[235327]: multipathd
Nov 25 11:10:50 np0005535469 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 11:10:50 np0005535469 systemd[1]: Stopped multipathd container.
Nov 25 11:10:50 np0005535469 systemd[1]: Starting multipathd container...
Nov 25 11:10:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:10:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 11:10:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 11:10:50 np0005535469 systemd[1]: Started /usr/bin/podman healthcheck run 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.
Nov 25 11:10:50 np0005535469 podman[235340]: 2025-11-25 16:10:50.192397922 +0000 UTC m=+0.123280075 container init 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:10:50 np0005535469 multipathd[235356]: + sudo -E kolla_set_configs
Nov 25 11:10:50 np0005535469 podman[235340]: 2025-11-25 16:10:50.219226346 +0000 UTC m=+0.150108499 container start 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:10:50 np0005535469 multipathd[235356]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 11:10:50 np0005535469 multipathd[235356]: INFO:__main__:Validating config file
Nov 25 11:10:50 np0005535469 multipathd[235356]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 11:10:50 np0005535469 multipathd[235356]: INFO:__main__:Writing out command to execute
Nov 25 11:10:50 np0005535469 multipathd[235356]: ++ cat /run_command
Nov 25 11:10:50 np0005535469 multipathd[235356]: + CMD='/usr/sbin/multipathd -d'
Nov 25 11:10:50 np0005535469 multipathd[235356]: + ARGS=
Nov 25 11:10:50 np0005535469 multipathd[235356]: + sudo kolla_copy_cacerts
Nov 25 11:10:50 np0005535469 podman[235340]: multipathd
Nov 25 11:10:50 np0005535469 multipathd[235356]: + [[ ! -n '' ]]
Nov 25 11:10:50 np0005535469 multipathd[235356]: + . kolla_extend_start
Nov 25 11:10:50 np0005535469 multipathd[235356]: Running command: '/usr/sbin/multipathd -d'
Nov 25 11:10:50 np0005535469 multipathd[235356]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 11:10:50 np0005535469 multipathd[235356]: + umask 0022
Nov 25 11:10:50 np0005535469 multipathd[235356]: + exec /usr/sbin/multipathd -d
Nov 25 11:10:50 np0005535469 systemd[1]: Started multipathd container.
Nov 25 11:10:50 np0005535469 multipathd[235356]: 3507.940134 | --------start up--------
Nov 25 11:10:50 np0005535469 multipathd[235356]: 3507.940152 | read /etc/multipath.conf
Nov 25 11:10:50 np0005535469 multipathd[235356]: 3507.945432 | path checkers start up
Nov 25 11:10:50 np0005535469 podman[235363]: 2025-11-25 16:10:50.340700321 +0000 UTC m=+0.110983603 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:10:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:10:50 np0005535469 python3.9[235546]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:51 np0005535469 python3.9[235698]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 11:10:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:52 np0005535469 python3.9[235850]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 11:10:52 np0005535469 kernel: Key type psk registered
Nov 25 11:10:53 np0005535469 python3.9[236011]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:10:53 np0005535469 python3.9[236134]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087052.6311238-630-142437976849660/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:54 np0005535469 python3.9[236286]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:10:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:10:55 np0005535469 python3.9[236438]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:10:55 np0005535469 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 11:10:55 np0005535469 systemd[1]: Stopped Load Kernel Modules.
Nov 25 11:10:55 np0005535469 systemd[1]: Stopping Load Kernel Modules...
Nov 25 11:10:55 np0005535469 systemd[1]: Starting Load Kernel Modules...
Nov 25 11:10:55 np0005535469 systemd[1]: Finished Load Kernel Modules.
Nov 25 11:10:55 np0005535469 python3.9[236594]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 11:10:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:57 np0005535469 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 11:10:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:10:58 np0005535469 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 11:10:58 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:58 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:58 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:59 np0005535469 systemd[1]: Reloading.
Nov 25 11:10:59 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:10:59 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:10:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:05 np0005535469 systemd-logind[791]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 11:11:05 np0005535469 systemd-logind[791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 11:11:05 np0005535469 lvm[236807]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 11:11:05 np0005535469 lvm[236808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 11:11:05 np0005535469 lvm[236808]: VG ceph_vg2 finished
Nov 25 11:11:05 np0005535469 lvm[236807]: VG ceph_vg1 finished
Nov 25 11:11:05 np0005535469 lvm[236806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 11:11:05 np0005535469 lvm[236806]: VG ceph_vg0 finished
Nov 25 11:11:05 np0005535469 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 11:11:05 np0005535469 systemd[1]: Starting man-db-cache-update.service...
Nov 25 11:11:05 np0005535469 systemd[1]: Reloading.
Nov 25 11:11:05 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:11:05 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:11:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f817085-ecee-4285-bb52-8b093f634a35 does not exist
Nov 25 11:11:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev aacf5c1e-ae45-41e8-8b8b-48437e0bd015 does not exist
Nov 25 11:11:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev eeb1ee12-4bfd-43e6-bc12-00f74711af2c does not exist
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:11:06 np0005535469 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 11:11:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:11:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:11:06 np0005535469 podman[237689]: 2025-11-25 16:11:06.610074103 +0000 UTC m=+0.023946477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:11:06 np0005535469 podman[237689]: 2025-11-25 16:11:06.750288244 +0000 UTC m=+0.164160598 container create a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:11:06 np0005535469 systemd[1]: Started libpod-conmon-a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54.scope.
Nov 25 11:11:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:11:07 np0005535469 podman[237689]: 2025-11-25 16:11:07.009128904 +0000 UTC m=+0.423001298 container init a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:11:07 np0005535469 podman[237689]: 2025-11-25 16:11:07.019417682 +0000 UTC m=+0.433290036 container start a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:11:07 np0005535469 intelligent_snyder[238080]: 167 167
Nov 25 11:11:07 np0005535469 systemd[1]: libpod-a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54.scope: Deactivated successfully.
Nov 25 11:11:07 np0005535469 podman[237689]: 2025-11-25 16:11:07.086939292 +0000 UTC m=+0.500811666 container attach a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:11:07 np0005535469 podman[237689]: 2025-11-25 16:11:07.087675922 +0000 UTC m=+0.501548286 container died a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:11:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0253add0847978ea0206d19779a58f354e9fcd7cf9a3f7a450fee5ac5798a93a-merged.mount: Deactivated successfully.
Nov 25 11:11:07 np0005535469 podman[237689]: 2025-11-25 16:11:07.390723875 +0000 UTC m=+0.804596229 container remove a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:11:07 np0005535469 systemd[1]: libpod-conmon-a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54.scope: Deactivated successfully.
Nov 25 11:11:07 np0005535469 podman[238362]: 2025-11-25 16:11:07.519795216 +0000 UTC m=+0.021017918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:11:07 np0005535469 podman[238362]: 2025-11-25 16:11:07.627610123 +0000 UTC m=+0.128832805 container create d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:11:07 np0005535469 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 11:11:07 np0005535469 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 11:11:07 np0005535469 systemd[1]: Started libpod-conmon-d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae.scope.
Nov 25 11:11:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:11:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:07 np0005535469 python3.9[238372]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:11:07 np0005535469 podman[238362]: 2025-11-25 16:11:07.826227789 +0000 UTC m=+0.327450471 container init d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:11:07 np0005535469 podman[238362]: 2025-11-25 16:11:07.835784746 +0000 UTC m=+0.337007438 container start d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:11:07 np0005535469 systemd[1]: Stopping Open-iSCSI...
Nov 25 11:11:07 np0005535469 iscsid[226360]: iscsid shutting down.
Nov 25 11:11:07 np0005535469 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 11:11:07 np0005535469 systemd[1]: Stopped Open-iSCSI.
Nov 25 11:11:07 np0005535469 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 11:11:07 np0005535469 systemd[1]: Starting Open-iSCSI...
Nov 25 11:11:07 np0005535469 systemd[1]: Started Open-iSCSI.
Nov 25 11:11:07 np0005535469 podman[238362]: 2025-11-25 16:11:07.8778304 +0000 UTC m=+0.379053112 container attach d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 11:11:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:08 np0005535469 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 11:11:08 np0005535469 systemd[1]: Finished man-db-cache-update.service.
Nov 25 11:11:08 np0005535469 systemd[1]: man-db-cache-update.service: Consumed 1.501s CPU time.
Nov 25 11:11:08 np0005535469 systemd[1]: run-r2fae04956c864fab901613558534bf94.service: Deactivated successfully.
Nov 25 11:11:08 np0005535469 python3.9[238541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 11:11:08 np0005535469 reverent_wescoff[238383]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:11:08 np0005535469 reverent_wescoff[238383]: --> relative data size: 1.0
Nov 25 11:11:08 np0005535469 reverent_wescoff[238383]: --> All data devices are unavailable
Nov 25 11:11:08 np0005535469 systemd[1]: libpod-d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae.scope: Deactivated successfully.
Nov 25 11:11:08 np0005535469 podman[238362]: 2025-11-25 16:11:08.878564358 +0000 UTC m=+1.379787050 container died d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:11:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d-merged.mount: Deactivated successfully.
Nov 25 11:11:09 np0005535469 podman[238362]: 2025-11-25 16:11:09.014832372 +0000 UTC m=+1.516055054 container remove d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:11:09 np0005535469 systemd[1]: libpod-conmon-d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae.scope: Deactivated successfully.
Nov 25 11:11:09 np0005535469 python3.9[238834]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.600924837 +0000 UTC m=+0.039652670 container create a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:11:09 np0005535469 systemd[1]: Started libpod-conmon-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope.
Nov 25 11:11:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.582983414 +0000 UTC m=+0.021711267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.686454964 +0000 UTC m=+0.125182807 container init a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.695160849 +0000 UTC m=+0.133888662 container start a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 11:11:09 np0005535469 eager_wu[238913]: 167 167
Nov 25 11:11:09 np0005535469 systemd[1]: libpod-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope: Deactivated successfully.
Nov 25 11:11:09 np0005535469 conmon[238913]: conmon a9a81b144a65487fdf6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope/container/memory.events
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.711849799 +0000 UTC m=+0.150577612 container attach a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.712606339 +0000 UTC m=+0.151334242 container died a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:11:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4f0663495e1a81ce5737f5bd5534eebe8fdd8b9ed089ff3d9caa04f20f7e8338-merged.mount: Deactivated successfully.
Nov 25 11:11:09 np0005535469 podman[238873]: 2025-11-25 16:11:09.753730929 +0000 UTC m=+0.192458752 container remove a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:11:09 np0005535469 systemd[1]: libpod-conmon-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope: Deactivated successfully.
Nov 25 11:11:09 np0005535469 podman[238941]: 2025-11-25 16:11:09.910539817 +0000 UTC m=+0.050731429 container create 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:11:09 np0005535469 systemd[1]: Started libpod-conmon-92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3.scope.
Nov 25 11:11:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:11:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:09 np0005535469 podman[238941]: 2025-11-25 16:11:09.888359899 +0000 UTC m=+0.028551531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:11:10 np0005535469 podman[238941]: 2025-11-25 16:11:10.008713825 +0000 UTC m=+0.148905457 container init 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:11:10 np0005535469 podman[238941]: 2025-11-25 16:11:10.01521166 +0000 UTC m=+0.155403272 container start 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:11:10 np0005535469 podman[238941]: 2025-11-25 16:11:10.051751465 +0000 UTC m=+0.191943097 container attach 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:11:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:10 np0005535469 python3.9[239090]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:11:10 np0005535469 systemd[1]: Reloading.
Nov 25 11:11:10 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:11:10 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:11:10 np0005535469 jolly_carson[238989]: {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:    "0": [
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:        {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "devices": [
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "/dev/loop3"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            ],
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_name": "ceph_lv0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_size": "21470642176",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "name": "ceph_lv0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "tags": {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cluster_name": "ceph",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.crush_device_class": "",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.encrypted": "0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osd_id": "0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.type": "block",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.vdo": "0"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            },
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "type": "block",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "vg_name": "ceph_vg0"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:        }
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:    ],
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:    "1": [
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:        {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "devices": [
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "/dev/loop4"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            ],
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_name": "ceph_lv1",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_size": "21470642176",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "name": "ceph_lv1",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "tags": {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cluster_name": "ceph",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.crush_device_class": "",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.encrypted": "0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osd_id": "1",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.type": "block",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.vdo": "0"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            },
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "type": "block",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "vg_name": "ceph_vg1"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:        }
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:    ],
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:    "2": [
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:        {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "devices": [
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "/dev/loop5"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            ],
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_name": "ceph_lv2",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_size": "21470642176",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "name": "ceph_lv2",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "tags": {
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.cluster_name": "ceph",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.crush_device_class": "",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.encrypted": "0",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osd_id": "2",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.type": "block",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:                "ceph.vdo": "0"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            },
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "type": "block",
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:            "vg_name": "ceph_vg2"
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:        }
Nov 25 11:11:10 np0005535469 jolly_carson[238989]:    ]
Nov 25 11:11:10 np0005535469 jolly_carson[238989]: }
Nov 25 11:11:10 np0005535469 podman[238941]: 2025-11-25 16:11:10.823010074 +0000 UTC m=+0.963201696 container died 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:11:10 np0005535469 systemd[1]: libpod-92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3.scope: Deactivated successfully.
Nov 25 11:11:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057-merged.mount: Deactivated successfully.
Nov 25 11:11:11 np0005535469 podman[238941]: 2025-11-25 16:11:11.189590821 +0000 UTC m=+1.329782433 container remove 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:11:11 np0005535469 systemd[1]: libpod-conmon-92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3.scope: Deactivated successfully.
Nov 25 11:11:11 np0005535469 python3.9[239311]: ansible-ansible.builtin.service_facts Invoked
Nov 25 11:11:11 np0005535469 network[239411]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 11:11:11 np0005535469 network[239412]: 'network-scripts' will be removed from distribution in near future.
Nov 25 11:11:11 np0005535469 network[239413]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 11:11:11 np0005535469 podman[239459]: 2025-11-25 16:11:11.748473953 +0000 UTC m=+0.041367607 container create c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:11:11 np0005535469 podman[239459]: 2025-11-25 16:11:11.727546689 +0000 UTC m=+0.020440363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:11:12 np0005535469 systemd[1]: Started libpod-conmon-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope.
Nov 25 11:11:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:11:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:12 np0005535469 podman[239459]: 2025-11-25 16:11:12.379728646 +0000 UTC m=+0.672622320 container init c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:11:12 np0005535469 podman[239459]: 2025-11-25 16:11:12.38651637 +0000 UTC m=+0.679410024 container start c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:11:12 np0005535469 podman[239459]: 2025-11-25 16:11:12.389330085 +0000 UTC m=+0.682223769 container attach c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 11:11:12 np0005535469 systemd[1]: libpod-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope: Deactivated successfully.
Nov 25 11:11:12 np0005535469 compassionate_edison[239477]: 167 167
Nov 25 11:11:12 np0005535469 conmon[239477]: conmon c9a94decb1acb0a4c673 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope/container/memory.events
Nov 25 11:11:12 np0005535469 podman[239459]: 2025-11-25 16:11:12.392957583 +0000 UTC m=+0.685851237 container died c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:11:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-41c331054c6a3e6f590608a04b983851ca81139fcb04c6aced9dbe8cc43b1b7c-merged.mount: Deactivated successfully.
Nov 25 11:11:12 np0005535469 podman[239459]: 2025-11-25 16:11:12.47145155 +0000 UTC m=+0.764345204 container remove c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:11:12 np0005535469 systemd[1]: libpod-conmon-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope: Deactivated successfully.
Nov 25 11:11:12 np0005535469 podman[239515]: 2025-11-25 16:11:12.664304191 +0000 UTC m=+0.050518284 container create 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:11:12 np0005535469 systemd[1]: Started libpod-conmon-2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb.scope.
Nov 25 11:11:12 np0005535469 podman[239515]: 2025-11-25 16:11:12.636328516 +0000 UTC m=+0.022542639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:11:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:11:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:11:12 np0005535469 podman[239515]: 2025-11-25 16:11:12.778004227 +0000 UTC m=+0.164218320 container init 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:11:12 np0005535469 podman[239515]: 2025-11-25 16:11:12.785100428 +0000 UTC m=+0.171314521 container start 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:11:12 np0005535469 podman[239515]: 2025-11-25 16:11:12.824191042 +0000 UTC m=+0.210405135 container attach 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:11:13.576 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:11:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:11:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:11:13 np0005535469 friendly_benz[239538]: {
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "osd_id": 1,
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "type": "bluestore"
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:    },
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "osd_id": 2,
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "type": "bluestore"
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:    },
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "osd_id": 0,
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:        "type": "bluestore"
Nov 25 11:11:13 np0005535469 friendly_benz[239538]:    }
Nov 25 11:11:13 np0005535469 friendly_benz[239538]: }
Nov 25 11:11:13 np0005535469 systemd[1]: libpod-2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb.scope: Deactivated successfully.
Nov 25 11:11:13 np0005535469 podman[239515]: 2025-11-25 16:11:13.711038289 +0000 UTC m=+1.097252382 container died 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:11:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4-merged.mount: Deactivated successfully.
Nov 25 11:11:14 np0005535469 podman[239515]: 2025-11-25 16:11:14.078138098 +0000 UTC m=+1.464352191 container remove 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:11:14 np0005535469 systemd[1]: libpod-conmon-2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb.scope: Deactivated successfully.
Nov 25 11:11:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:11:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:11:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:11:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:11:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9cdfae03-cca0-4ae2-87bc-b6499a8a0ec5 does not exist
Nov 25 11:11:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9b832b82-ff17-4a48-a3a6-ada0773ce301 does not exist
Nov 25 11:11:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:14 np0005535469 podman[239854]: 2025-11-25 16:11:14.952473297 +0000 UTC m=+0.080263286 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:11:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:11:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:11:15 np0005535469 python3.9[239901]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:15 np0005535469 python3.9[240063]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:16 np0005535469 python3.9[240216]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:17 np0005535469 python3.9[240369]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:18 np0005535469 python3.9[240522]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:18 np0005535469 podman[240524]: 2025-11-25 16:11:18.142610348 +0000 UTC m=+0.068623382 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:11:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:18 np0005535469 python3.9[240694]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:19 np0005535469 python3.9[240847]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:20 np0005535469 python3.9[241000]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:11:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:20 np0005535469 podman[241102]: 2025-11-25 16:11:20.656427508 +0000 UTC m=+0.079043573 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:11:20 np0005535469 python3.9[241173]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:21 np0005535469 python3.9[241325]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:22 np0005535469 python3.9[241477]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:22 np0005535469 python3.9[241629]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:23 np0005535469 python3.9[241781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:23 np0005535469 python3.9[241933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:24 np0005535469 python3.9[242085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:25 np0005535469 python3.9[242237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:26 np0005535469 python3.9[242389]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:26 np0005535469 python3.9[242541]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:27 np0005535469 python3.9[242693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:27 np0005535469 python3.9[242845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:28 np0005535469 python3.9[242997]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:29 np0005535469 python3.9[243149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:30 np0005535469 python3.9[243301]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:30 np0005535469 python3.9[243453]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:11:31 np0005535469 python3.9[243605]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:32 np0005535469 python3.9[243757]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 11:11:33 np0005535469 python3.9[243909]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:11:33 np0005535469 systemd[1]: Reloading.
Nov 25 11:11:33 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:11:33 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:11:34 np0005535469 python3.9[244096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:34 np0005535469 python3.9[244249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:35 np0005535469 python3.9[244402]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:36 np0005535469 python3.9[244555]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:36 np0005535469 python3.9[244708]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:11:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 3173 writes, 14K keys, 3173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3173 writes, 3173 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1273 writes, 5543 keys, 1273 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s#012Interval WAL: 1274 writes, 1274 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.99              0.05         6    0.165       0      0       0.0       0.0#012  L6      1/0    7.03 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     99.3     82.2      0.42              0.09         5    0.085     20K   2214       0.0       0.0#012 Sum      1/0    7.03 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     29.8     34.8      1.41              0.14        11    0.128     20K   2214       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     23.3     23.9      1.15              0.08         6    0.191     12K   1468       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     99.3     82.2      0.42              0.09         5    0.085     20K   2214       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.1      0.94              0.05         5    0.188       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 1.4 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 308.00 MB usage: 1.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(88,1.21 MB,0.391789%) FilterBlock(12,63.23 KB,0.0200495%) IndexBlock(12,126.14 KB,0.0399949%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 11:11:37 np0005535469 python3.9[244861]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:38 np0005535469 python3.9[245014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:38 np0005535469 python3.9[245167]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 11:11:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:11:39
Nov 25 11:11:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:11:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:11:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images']
Nov 25 11:11:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:11:40 np0005535469 python3.9[245320]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:11:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:41 np0005535469 python3.9[245472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:41 np0005535469 python3.9[245624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:42 np0005535469 python3.9[245776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:43 np0005535469 python3.9[245928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:44 np0005535469 python3.9[246080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:44 np0005535469 python3.9[246232]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:45 np0005535469 podman[246356]: 2025-11-25 16:11:45.319579718 +0000 UTC m=+0.090710668 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:11:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:45 np0005535469 python3.9[246403]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:46 np0005535469 python3.9[246562]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:46 np0005535469 python3.9[246714]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:48 np0005535469 podman[246739]: 2025-11-25 16:11:48.638092389 +0000 UTC m=+0.050158593 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:11:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:11:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:11:51 np0005535469 podman[246758]: 2025-11-25 16:11:51.692861338 +0000 UTC m=+0.100396868 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 11:11:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:53 np0005535469 python3.9[246907]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 11:11:53 np0005535469 python3.9[247060]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 11:11:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:54 np0005535469 python3.9[247218]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 11:11:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:11:55 np0005535469 systemd-logind[791]: New session 51 of user zuul.
Nov 25 11:11:55 np0005535469 systemd[1]: Started Session 51 of User zuul.
Nov 25 11:11:56 np0005535469 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 11:11:56 np0005535469 systemd-logind[791]: Session 51 logged out. Waiting for processes to exit.
Nov 25 11:11:56 np0005535469 systemd-logind[791]: Removed session 51.
Nov 25 11:11:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:56 np0005535469 python3.9[247404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:11:57 np0005535469 python3.9[247525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087116.1825483-1249-150606599717485/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:57 np0005535469 python3.9[247675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:11:58 np0005535469 python3.9[247751]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:11:58 np0005535469 python3.9[247901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:11:59 np0005535469 python3.9[248022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087118.377047-1249-89315532051845/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:11:59 np0005535469 python3.9[248172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:12:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:00 np0005535469 python3.9[248293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087119.5167482-1249-233657873537978/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:12:01 np0005535469 python3.9[248443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:12:01 np0005535469 python3.9[248564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087120.6722069-1249-104652293965098/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:12:02 np0005535469 python3.9[248714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:12:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:02 np0005535469 python3.9[248835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087121.8277698-1249-43994199884388/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:12:03 np0005535469 python3.9[248987]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:12:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:04 np0005535469 python3.9[249139]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:12:05 np0005535469 python3.9[249291]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:05 np0005535469 python3.9[249443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:12:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:06 np0005535469 python3.9[249566]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764087125.4237113-1356-157217534128131/.source _original_basename=.6h8mq2xx follow=False checksum=01c0e30e0516984f93c650504b23ca47535007dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 11:12:07 np0005535469 python3.9[249718]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:07 np0005535469 python3.9[249870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:12:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:08 np0005535469 python3.9[249991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087127.5072236-1382-42388292095463/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:12:09 np0005535469 python3.9[250141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 11:12:09 np0005535469 python3.9[250262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087128.6540806-1397-219940988914914/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:12:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:10 np0005535469 python3.9[250414]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 11:12:11 np0005535469 python3.9[250566]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 11:12:12 np0005535469 python3[250718]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 11:12:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:12:13.577 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:12:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:12:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:12:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:12:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1acfdcd6-e1ac-496b-bfb7-6c88178d902b does not exist
Nov 25 11:12:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5ae80b77-32d3-4614-b173-003b2c92dd2f does not exist
Nov 25 11:12:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d2a91fdc-f571-4453-a228-ca06abb7b400 does not exist
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:12:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:12:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:18 np0005535469 podman[250925]: 2025-11-25 16:12:18.26027166 +0000 UTC m=+2.874613042 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:12:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:22 np0005535469 podman[251055]: 2025-11-25 16:12:22.178952345 +0000 UTC m=+2.595929535 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:12:22 np0005535469 podman[251068]: 2025-11-25 16:12:22.23178805 +0000 UTC m=+0.072009952 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 11:12:22 np0005535469 podman[250732]: 2025-11-25 16:12:22.254556274 +0000 UTC m=+9.848608780 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.346808122 +0000 UTC m=+0.042519958 container create 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:12:22 np0005535469 systemd[1]: Started libpod-conmon-8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f.scope.
Nov 25 11:12:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:22 np0005535469 podman[251155]: 2025-11-25 16:12:22.414472096 +0000 UTC m=+0.055592929 container create 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init)
Nov 25 11:12:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:22 np0005535469 podman[251155]: 2025-11-25 16:12:22.386349778 +0000 UTC m=+0.027470671 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 11:12:22 np0005535469 python3[250718]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.32932164 +0000 UTC m=+0.025033496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.431677201 +0000 UTC m=+0.127389097 container init 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.440233892 +0000 UTC m=+0.135945728 container start 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.443821808 +0000 UTC m=+0.139533654 container attach 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:12:22 np0005535469 hungry_lichterman[251167]: 167 167
Nov 25 11:12:22 np0005535469 systemd[1]: libpod-8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f.scope: Deactivated successfully.
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.450432856 +0000 UTC m=+0.146144692 container died 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:12:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-288bcc85ba6a52f4a763c9ec57ba834355384d04aeef569d67f59b8a5176b29b-merged.mount: Deactivated successfully.
Nov 25 11:12:22 np0005535469 podman[251123]: 2025-11-25 16:12:22.532767467 +0000 UTC m=+0.228479303 container remove 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:12:22 np0005535469 systemd[1]: libpod-conmon-8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f.scope: Deactivated successfully.
Nov 25 11:12:22 np0005535469 podman[251238]: 2025-11-25 16:12:22.75980678 +0000 UTC m=+0.105334362 container create dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:12:22 np0005535469 podman[251238]: 2025-11-25 16:12:22.674950291 +0000 UTC m=+0.020477903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:12:22 np0005535469 systemd[1]: Started libpod-conmon-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope.
Nov 25 11:12:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:22 np0005535469 podman[251238]: 2025-11-25 16:12:22.854826972 +0000 UTC m=+0.200354574 container init dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:12:22 np0005535469 podman[251238]: 2025-11-25 16:12:22.863051304 +0000 UTC m=+0.208578896 container start dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:12:22 np0005535469 podman[251238]: 2025-11-25 16:12:22.866535738 +0000 UTC m=+0.212063310 container attach dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:12:23 np0005535469 python3.9[251393]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:23 np0005535469 angry_heisenberg[251289]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:12:23 np0005535469 angry_heisenberg[251289]: --> relative data size: 1.0
Nov 25 11:12:23 np0005535469 angry_heisenberg[251289]: --> All data devices are unavailable
Nov 25 11:12:23 np0005535469 systemd[1]: libpod-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope: Deactivated successfully.
Nov 25 11:12:23 np0005535469 podman[251238]: 2025-11-25 16:12:23.958112804 +0000 UTC m=+1.303640386 container died dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:12:23 np0005535469 systemd[1]: libpod-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope: Consumed 1.020s CPU time.
Nov 25 11:12:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1-merged.mount: Deactivated successfully.
Nov 25 11:12:24 np0005535469 podman[251238]: 2025-11-25 16:12:24.011257477 +0000 UTC m=+1.356785059 container remove dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:12:24 np0005535469 systemd[1]: libpod-conmon-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope: Deactivated successfully.
Nov 25 11:12:24 np0005535469 python3.9[251591]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 11:12:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.562729269 +0000 UTC m=+0.041749967 container create 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:12:24 np0005535469 systemd[1]: Started libpod-conmon-349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5.scope.
Nov 25 11:12:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.542495514 +0000 UTC m=+0.021516242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.641791752 +0000 UTC m=+0.120812480 container init 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.648443381 +0000 UTC m=+0.127464089 container start 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.651785191 +0000 UTC m=+0.130805899 container attach 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:12:24 np0005535469 zealous_nobel[251820]: 167 167
Nov 25 11:12:24 np0005535469 systemd[1]: libpod-349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5.scope: Deactivated successfully.
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.654663059 +0000 UTC m=+0.133683767 container died 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:12:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3b07b6c2d9e8d982758bde88981358e3ef9716715afc03e791b7ff0dd9738182-merged.mount: Deactivated successfully.
Nov 25 11:12:24 np0005535469 podman[251770]: 2025-11-25 16:12:24.685966183 +0000 UTC m=+0.164986891 container remove 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 11:12:24 np0005535469 systemd[1]: libpod-conmon-349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5.scope: Deactivated successfully.
Nov 25 11:12:24 np0005535469 podman[251915]: 2025-11-25 16:12:24.844368995 +0000 UTC m=+0.041809649 container create 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:12:24 np0005535469 systemd[1]: Started libpod-conmon-31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b.scope.
Nov 25 11:12:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:24 np0005535469 podman[251915]: 2025-11-25 16:12:24.82712121 +0000 UTC m=+0.024561884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:12:24 np0005535469 podman[251915]: 2025-11-25 16:12:24.924549166 +0000 UTC m=+0.121989840 container init 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:12:24 np0005535469 podman[251915]: 2025-11-25 16:12:24.930746944 +0000 UTC m=+0.128187598 container start 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 11:12:24 np0005535469 podman[251915]: 2025-11-25 16:12:24.933950121 +0000 UTC m=+0.131390775 container attach 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:12:24 np0005535469 python3.9[251910]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 11:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]: {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:    "0": [
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:        {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "devices": [
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "/dev/loop3"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            ],
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_name": "ceph_lv0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_size": "21470642176",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "name": "ceph_lv0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "tags": {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cluster_name": "ceph",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.crush_device_class": "",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.encrypted": "0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osd_id": "0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.type": "block",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.vdo": "0"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            },
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "type": "block",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "vg_name": "ceph_vg0"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:        }
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:    ],
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:    "1": [
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:        {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "devices": [
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "/dev/loop4"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            ],
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_name": "ceph_lv1",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_size": "21470642176",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "name": "ceph_lv1",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "tags": {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cluster_name": "ceph",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.crush_device_class": "",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.encrypted": "0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osd_id": "1",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.type": "block",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.vdo": "0"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            },
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "type": "block",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "vg_name": "ceph_vg1"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:        }
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:    ],
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:    "2": [
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:        {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "devices": [
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "/dev/loop5"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            ],
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_name": "ceph_lv2",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_size": "21470642176",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "name": "ceph_lv2",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "tags": {
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.cluster_name": "ceph",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.crush_device_class": "",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.encrypted": "0",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osd_id": "2",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.type": "block",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:                "ceph.vdo": "0"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            },
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "type": "block",
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:            "vg_name": "ceph_vg2"
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:        }
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]:    ]
Nov 25 11:12:25 np0005535469 bold_blackwell[251932]: }
Nov 25 11:12:25 np0005535469 systemd[1]: libpod-31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b.scope: Deactivated successfully.
Nov 25 11:12:25 np0005535469 podman[251915]: 2025-11-25 16:12:25.819752488 +0000 UTC m=+1.017193172 container died 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:12:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f-merged.mount: Deactivated successfully.
Nov 25 11:12:25 np0005535469 python3[252088]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 11:12:25 np0005535469 podman[251915]: 2025-11-25 16:12:25.901916484 +0000 UTC m=+1.099357138 container remove 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:12:25 np0005535469 systemd[1]: libpod-conmon-31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b.scope: Deactivated successfully.
Nov 25 11:12:26 np0005535469 podman[252163]: 2025-11-25 16:12:26.048344163 +0000 UTC m=+0.044685426 container create 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:12:26 np0005535469 podman[252163]: 2025-11-25 16:12:26.026618417 +0000 UTC m=+0.022959700 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 11:12:26 np0005535469 python3[252088]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 25 11:12:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.463503889 +0000 UTC m=+0.042918949 container create 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:12:26 np0005535469 systemd[1]: Started libpod-conmon-64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388.scope.
Nov 25 11:12:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.535325505 +0000 UTC m=+0.114740575 container init 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.445820532 +0000 UTC m=+0.025235612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.541682357 +0000 UTC m=+0.121097417 container start 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:12:26 np0005535469 elastic_feistel[252455]: 167 167
Nov 25 11:12:26 np0005535469 systemd[1]: libpod-64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388.scope: Deactivated successfully.
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.585626732 +0000 UTC m=+0.165041822 container attach 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.586440484 +0000 UTC m=+0.165855544 container died 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:12:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e607a99af8139fda1589e500f910cfe39671e112179191b52fad7a41f88212c9-merged.mount: Deactivated successfully.
Nov 25 11:12:26 np0005535469 podman[252395]: 2025-11-25 16:12:26.725252577 +0000 UTC m=+0.304667637 container remove 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:12:26 np0005535469 systemd[1]: libpod-conmon-64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388.scope: Deactivated successfully.
Nov 25 11:12:26 np0005535469 python3.9[252502]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:26 np0005535469 podman[252536]: 2025-11-25 16:12:26.884262325 +0000 UTC m=+0.039015213 container create 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:12:26 np0005535469 systemd[1]: Started libpod-conmon-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope.
Nov 25 11:12:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:26 np0005535469 podman[252536]: 2025-11-25 16:12:26.961248702 +0000 UTC m=+0.116001610 container init 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:12:26 np0005535469 podman[252536]: 2025-11-25 16:12:26.868770498 +0000 UTC m=+0.023523406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:12:26 np0005535469 podman[252536]: 2025-11-25 16:12:26.969418711 +0000 UTC m=+0.124171599 container start 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:12:27 np0005535469 podman[252536]: 2025-11-25 16:12:27.00312003 +0000 UTC m=+0.157872918 container attach 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:12:27 np0005535469 python3.9[252685]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:12:28 np0005535469 funny_shamir[252553]: {
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "osd_id": 1,
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "type": "bluestore"
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:    },
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "osd_id": 2,
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "type": "bluestore"
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:    },
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "osd_id": 0,
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:        "type": "bluestore"
Nov 25 11:12:28 np0005535469 funny_shamir[252553]:    }
Nov 25 11:12:28 np0005535469 funny_shamir[252553]: }
Nov 25 11:12:28 np0005535469 systemd[1]: libpod-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope: Deactivated successfully.
Nov 25 11:12:28 np0005535469 podman[252536]: 2025-11-25 16:12:28.031076532 +0000 UTC m=+1.185829420 container died 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:12:28 np0005535469 systemd[1]: libpod-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope: Consumed 1.065s CPU time.
Nov 25 11:12:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc-merged.mount: Deactivated successfully.
Nov 25 11:12:28 np0005535469 podman[252536]: 2025-11-25 16:12:28.090722381 +0000 UTC m=+1.245475269 container remove 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:12:28 np0005535469 systemd[1]: libpod-conmon-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope: Deactivated successfully.
Nov 25 11:12:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:12:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:12:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:12:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:12:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 95e38b67-6b4e-45fc-bd71-bbec646b7ccd does not exist
Nov 25 11:12:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 62f5cb10-3cd6-4bb3-aa87-bfa700226505 does not exist
Nov 25 11:12:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:28 np0005535469 python3.9[252878]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764087147.711346-1489-154206535093666/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 11:12:29 np0005535469 python3.9[253004]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 11:12:29 np0005535469 systemd[1]: Reloading.
Nov 25 11:12:29 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:12:29 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:12:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:12:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:12:29 np0005535469 python3.9[253114]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 11:12:30 np0005535469 systemd[1]: Reloading.
Nov 25 11:12:30 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 11:12:30 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 11:12:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:30 np0005535469 systemd[1]: Starting nova_compute container...
Nov 25 11:12:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:30 np0005535469 podman[253154]: 2025-11-25 16:12:30.489629683 +0000 UTC m=+0.091509459 container init 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:12:30 np0005535469 podman[253154]: 2025-11-25 16:12:30.497874825 +0000 UTC m=+0.099754601 container start 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 25 11:12:30 np0005535469 podman[253154]: nova_compute
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + sudo -E kolla_set_configs
Nov 25 11:12:30 np0005535469 systemd[1]: Started nova_compute container.
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Validating config file
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying service configuration files
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Deleting /etc/ceph
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Creating directory /etc/ceph
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Writing out command to execute
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:30 np0005535469 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 11:12:30 np0005535469 nova_compute[253170]: ++ cat /run_command
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + CMD=nova-compute
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + ARGS=
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + sudo kolla_copy_cacerts
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + [[ ! -n '' ]]
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + . kolla_extend_start
Nov 25 11:12:30 np0005535469 nova_compute[253170]: Running command: 'nova-compute'
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + umask 0022
Nov 25 11:12:30 np0005535469 nova_compute[253170]: + exec nova-compute
Nov 25 11:12:31 np0005535469 python3.9[253331]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:32 np0005535469 python3.9[253482]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:32 np0005535469 python3.9[253632]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.089 253174 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.089 253174 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.089 253174 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.090 253174 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.242 253174 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.258 253174 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.258 253174 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 11:12:33 np0005535469 nova_compute[253170]: 2025-11-25 16:12:33.921 253174 INFO nova.virt.driver [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 11:12:33 np0005535469 python3.9[253788]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 11:12:34 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:12:34 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.073 253174 INFO nova.compute.provider_config [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.089 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.089 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.090 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.090 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.090 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 WARNING oslo_config.cfg [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 11:12:34 np0005535469 nova_compute[253170]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 11:12:34 np0005535469 nova_compute[253170]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 11:12:34 np0005535469 nova_compute[253170]: and ``live_migration_inbound_addr`` respectively.
Nov 25 11:12:34 np0005535469 nova_compute[253170]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_secret_uuid        = d82baeae-c742-50a4-b8f6-b5257c8a2c92 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.229 253174 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.242 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.242 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.242 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.243 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 11:12:34 np0005535469 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 11:12:34 np0005535469 systemd[1]: Started libvirt QEMU daemon.
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.324 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff5f3564910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.327 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff5f3564910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.328 253174 INFO nova.virt.libvirt.driver [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.342 253174 WARNING nova.virt.libvirt.driver [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.343 253174 DEBUG nova.virt.libvirt.volume.mount [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 25 11:12:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:34 np0005535469 python3.9[254015]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 11:12:34 np0005535469 systemd[1]: Stopping nova_compute container...
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.910 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.910 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:12:34 np0005535469 nova_compute[253170]: 2025-11-25 16:12:34.910 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:35 np0005535469 virtqemud[253880]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 11:12:35 np0005535469 virtqemud[253880]: hostname: compute-0
Nov 25 11:12:35 np0005535469 virtqemud[253880]: End of file while reading data: Input/output error
Nov 25 11:12:35 np0005535469 systemd[1]: libpod-38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508.scope: Deactivated successfully.
Nov 25 11:12:35 np0005535469 systemd[1]: libpod-38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508.scope: Consumed 3.098s CPU time.
Nov 25 11:12:35 np0005535469 podman[254027]: 2025-11-25 16:12:35.533302463 +0000 UTC m=+0.657304886 container died 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:12:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508-userdata-shm.mount: Deactivated successfully.
Nov 25 11:12:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a-merged.mount: Deactivated successfully.
Nov 25 11:12:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:37 np0005535469 podman[254027]: 2025-11-25 16:12:37.036038904 +0000 UTC m=+2.160041327 container cleanup 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:12:37 np0005535469 podman[254027]: nova_compute
Nov 25 11:12:37 np0005535469 podman[254064]: nova_compute
Nov 25 11:12:37 np0005535469 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 11:12:37 np0005535469 systemd[1]: Stopped nova_compute container.
Nov 25 11:12:37 np0005535469 systemd[1]: Starting nova_compute container...
Nov 25 11:12:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:37 np0005535469 podman[254076]: 2025-11-25 16:12:37.21039194 +0000 UTC m=+0.092324553 container init 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 11:12:37 np0005535469 podman[254076]: 2025-11-25 16:12:37.215906189 +0000 UTC m=+0.097838772 container start 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute)
Nov 25 11:12:37 np0005535469 podman[254076]: nova_compute
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + sudo -E kolla_set_configs
Nov 25 11:12:37 np0005535469 systemd[1]: Started nova_compute container.
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Validating config file
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying service configuration files
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /etc/ceph
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Creating directory /etc/ceph
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Writing out command to execute
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:37 np0005535469 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 11:12:37 np0005535469 nova_compute[254092]: ++ cat /run_command
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + CMD=nova-compute
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + ARGS=
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + sudo kolla_copy_cacerts
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + [[ ! -n '' ]]
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + . kolla_extend_start
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 11:12:37 np0005535469 nova_compute[254092]: Running command: 'nova-compute'
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + umask 0022
Nov 25 11:12:37 np0005535469 nova_compute[254092]: + exec nova-compute
Nov 25 11:12:37 np0005535469 python3.9[254255]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 11:12:38 np0005535469 systemd[1]: Started libpod-conmon-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0.scope.
Nov 25 11:12:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 11:12:38 np0005535469 podman[254280]: 2025-11-25 16:12:38.165905556 +0000 UTC m=+0.102117777 container init 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3)
Nov 25 11:12:38 np0005535469 podman[254280]: 2025-11-25 16:12:38.172951907 +0000 UTC m=+0.109164128 container start 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:12:38 np0005535469 python3.9[254255]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 11:12:38 np0005535469 nova_compute_init[254301]: INFO:nova_statedir:Nova statedir ownership complete
Nov 25 11:12:38 np0005535469 systemd[1]: libpod-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0.scope: Deactivated successfully.
Nov 25 11:12:38 np0005535469 podman[254314]: 2025-11-25 16:12:38.269709809 +0000 UTC m=+0.031343437 container died 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:12:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0-userdata-shm.mount: Deactivated successfully.
Nov 25 11:12:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677-merged.mount: Deactivated successfully.
Nov 25 11:12:38 np0005535469 podman[254314]: 2025-11-25 16:12:38.301284852 +0000 UTC m=+0.062918440 container cleanup 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:12:38 np0005535469 systemd[1]: libpod-conmon-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0.scope: Deactivated successfully.
Nov 25 11:12:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:38 np0005535469 systemd-logind[791]: Session 50 logged out. Waiting for processes to exit.
Nov 25 11:12:38 np0005535469 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 11:12:38 np0005535469 systemd[1]: session-50.scope: Consumed 2min 13.311s CPU time.
Nov 25 11:12:38 np0005535469 systemd-logind[791]: Removed session 50.
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.327 254096 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.328 254096 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.328 254096 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.328 254096 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.466 254096 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.486 254096 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:12:39 np0005535469 nova_compute[254092]: 2025-11-25 16:12:39.486 254096 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 11:12:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:12:39
Nov 25 11:12:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:12:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:12:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes']
Nov 25 11:12:39 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.094 254096 INFO nova.virt.driver [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.239 254096 INFO nova.compute.provider_config [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.255 254096 DEBUG oslo_concurrency.lockutils [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.255 254096 DEBUG oslo_concurrency.lockutils [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.255 254096 DEBUG oslo_concurrency.lockutils [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.269 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.269 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.269 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 WARNING oslo_config.cfg [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 11:12:40 np0005535469 nova_compute[254092]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 11:12:40 np0005535469 nova_compute[254092]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 11:12:40 np0005535469 nova_compute[254092]: and ``live_migration_inbound_addr`` respectively.
Nov 25 11:12:40 np0005535469 nova_compute[254092]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_secret_uuid        = d82baeae-c742-50a4-b8f6-b5257c8a2c92 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.354 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.355 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.355 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.355 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.397 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.397 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.398 254096 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 11:12:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.416 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.416 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.416 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.417 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.427 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f649a344610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.429 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f649a344610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.430 254096 INFO nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.439 254096 INFO nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <host>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <uuid>3ad80417-8456-49f6-9219-21501d8909bb</uuid>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <arch>x86_64</arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model>EPYC-Rome-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <vendor>AMD</vendor>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <microcode version='16777317'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <signature family='23' model='49' stepping='0'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='x2apic'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='tsc-deadline'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='osxsave'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='hypervisor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='tsc_adjust'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='spec-ctrl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='stibp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='arch-capabilities'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='cmp_legacy'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='topoext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='virt-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='lbrv'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='tsc-scale'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='vmcb-clean'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='pause-filter'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='pfthreshold'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='svme-addr-chk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='rdctl-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='skip-l1dfl-vmentry'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='mds-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature name='pschange-mc-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <pages unit='KiB' size='4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <pages unit='KiB' size='2048'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <pages unit='KiB' size='1048576'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <power_management>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <suspend_mem/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </power_management>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <iommu support='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <migration_features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <live/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <uri_transports>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <uri_transport>tcp</uri_transport>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <uri_transport>rdma</uri_transport>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </uri_transports>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </migration_features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <topology>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <cells num='1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <cell id='0'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          <memory unit='KiB'>7864320</memory>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          <pages unit='KiB' size='2048'>0</pages>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          <distances>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <sibling id='0' value='10'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          </distances>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          <cpus num='8'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:          </cpus>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        </cell>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </cells>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </topology>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <cache>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </cache>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <secmodel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model>selinux</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <doi>0</doi>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </secmodel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <secmodel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model>dac</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <doi>0</doi>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </secmodel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </host>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <guest>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <os_type>hvm</os_type>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <arch name='i686'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <wordsize>32</wordsize>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <domain type='qemu'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <domain type='kvm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <pae/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <nonpae/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <acpi default='on' toggle='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <apic default='on' toggle='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <cpuselection/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <deviceboot/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <disksnapshot default='on' toggle='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <externalSnapshot/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </guest>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <guest>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <os_type>hvm</os_type>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <arch name='x86_64'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <wordsize>64</wordsize>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <domain type='qemu'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <domain type='kvm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <acpi default='on' toggle='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <apic default='on' toggle='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <cpuselection/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <deviceboot/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <disksnapshot default='on' toggle='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <externalSnapshot/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </guest>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 
Nov 25 11:12:40 np0005535469 nova_compute[254092]: </capabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.446 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.447 254096 WARNING nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.447 254096 DEBUG nova.virt.libvirt.volume.mount [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.469 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 11:12:40 np0005535469 nova_compute[254092]: <domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <domain>kvm</domain>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <arch>i686</arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <vcpu max='4096'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <iothreads supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <os supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='firmware'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <loader supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>rom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pflash</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='readonly'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>yes</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='secure'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </loader>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-passthrough' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='hostPassthroughMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='maximum' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='maximumMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-model' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <vendor>AMD</vendor>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='x2apic'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='hypervisor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='stibp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='overflow-recov'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='succor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lbrv'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-scale'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='flushbyasid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pause-filter'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pfthreshold'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='disable' name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='custom' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Dhyana-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-128'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-256'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-512'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v6'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v7'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <memoryBacking supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='sourceType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>anonymous</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>memfd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </memoryBacking>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <disk supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='diskDevice'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>disk</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cdrom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>floppy</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>lun</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>fdc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>sata</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <graphics supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vnc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egl-headless</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <video supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='modelType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vga</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cirrus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>none</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>bochs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ramfb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hostdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='mode'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>subsystem</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='startupPolicy'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>mandatory</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>requisite</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>optional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='subsysType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pci</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='capsType'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='pciBackend'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hostdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <rng supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>random</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <filesystem supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='driverType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>path</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>handle</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtiofs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </filesystem>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <tpm supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-tis</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-crb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emulator</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>external</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendVersion'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>2.0</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </tpm>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <redirdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </redirdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <channel supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </channel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <crypto supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </crypto>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <interface supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>passt</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <panic supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>isa</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>hyperv</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </panic>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <console supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>null</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dev</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pipe</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stdio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>udp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tcp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu-vdagent</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <gic supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <vmcoreinfo supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <genid supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backingStoreInput supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backup supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <async-teardown supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <ps2 supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sev supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sgx supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hyperv supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='features'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>relaxed</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vapic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>spinlocks</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vpindex</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>runtime</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>synic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stimer</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reset</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vendor_id</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>frequencies</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reenlightenment</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tlbflush</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ipi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>avic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emsr_bitmap</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>xmm_input</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <spinlocks>4095</spinlocks>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <stimer_direct>on</stimer_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hyperv>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <launchSecurity supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='sectype'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tdx</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </launchSecurity>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: </domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.474 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 11:12:40 np0005535469 nova_compute[254092]: <domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <domain>kvm</domain>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <arch>i686</arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <vcpu max='240'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <iothreads supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <os supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='firmware'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <loader supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>rom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pflash</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='readonly'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>yes</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='secure'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </loader>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-passthrough' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='hostPassthroughMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='maximum' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='maximumMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-model' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <vendor>AMD</vendor>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='x2apic'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='hypervisor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='stibp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='overflow-recov'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='succor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lbrv'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-scale'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='flushbyasid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pause-filter'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pfthreshold'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='disable' name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='custom' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Dhyana-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-128'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-256'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-512'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v6'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v7'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <memoryBacking supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='sourceType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>anonymous</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>memfd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </memoryBacking>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <disk supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='diskDevice'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>disk</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cdrom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>floppy</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>lun</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ide</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>fdc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>sata</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <graphics supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vnc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egl-headless</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <video supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='modelType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vga</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cirrus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>none</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>bochs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ramfb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hostdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='mode'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>subsystem</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='startupPolicy'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>mandatory</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>requisite</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>optional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='subsysType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pci</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='capsType'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='pciBackend'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hostdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <rng supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>random</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <filesystem supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='driverType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>path</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>handle</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtiofs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </filesystem>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <tpm supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-tis</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-crb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emulator</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>external</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendVersion'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>2.0</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </tpm>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <redirdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </redirdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <channel supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </channel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <crypto supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </crypto>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <interface supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>passt</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <panic supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>isa</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>hyperv</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </panic>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <console supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>null</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dev</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pipe</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stdio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>udp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tcp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu-vdagent</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <gic supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <vmcoreinfo supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <genid supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backingStoreInput supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backup supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <async-teardown supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <ps2 supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sev supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sgx supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hyperv supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='features'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>relaxed</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vapic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>spinlocks</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vpindex</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>runtime</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>synic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stimer</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reset</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vendor_id</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>frequencies</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reenlightenment</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tlbflush</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ipi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>avic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emsr_bitmap</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>xmm_input</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <spinlocks>4095</spinlocks>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <stimer_direct>on</stimer_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hyperv>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <launchSecurity supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='sectype'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tdx</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </launchSecurity>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: </domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.501 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.504 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 11:12:40 np0005535469 nova_compute[254092]: <domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <domain>kvm</domain>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <arch>x86_64</arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <vcpu max='4096'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <iothreads supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <os supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='firmware'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>efi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <loader supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>rom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pflash</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='readonly'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>yes</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='secure'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>yes</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </loader>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-passthrough' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='hostPassthroughMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='maximum' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='maximumMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-model' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <vendor>AMD</vendor>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='x2apic'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='hypervisor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='stibp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='overflow-recov'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='succor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lbrv'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-scale'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='flushbyasid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pause-filter'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pfthreshold'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='disable' name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='custom' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Dhyana-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-128'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-256'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-512'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v6'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v7'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <memoryBacking supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='sourceType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>anonymous</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>memfd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </memoryBacking>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <disk supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='diskDevice'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>disk</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cdrom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>floppy</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>lun</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>fdc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>sata</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <graphics supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vnc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egl-headless</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <video supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='modelType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vga</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cirrus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>none</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>bochs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ramfb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hostdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='mode'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>subsystem</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='startupPolicy'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>mandatory</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>requisite</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>optional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='subsysType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pci</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='capsType'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='pciBackend'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hostdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <rng supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>random</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <filesystem supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='driverType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>path</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>handle</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtiofs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </filesystem>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <tpm supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-tis</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-crb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emulator</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>external</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendVersion'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>2.0</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </tpm>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <redirdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </redirdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <channel supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </channel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <crypto supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </crypto>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <interface supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>passt</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <panic supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>isa</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>hyperv</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </panic>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <console supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>null</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dev</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pipe</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stdio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>udp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tcp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu-vdagent</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <gic supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <vmcoreinfo supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <genid supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backingStoreInput supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backup supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <async-teardown supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <ps2 supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sev supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sgx supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hyperv supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='features'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>relaxed</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vapic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>spinlocks</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vpindex</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>runtime</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>synic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stimer</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reset</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vendor_id</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>frequencies</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reenlightenment</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tlbflush</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ipi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>avic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emsr_bitmap</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>xmm_input</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <spinlocks>4095</spinlocks>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <stimer_direct>on</stimer_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hyperv>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <launchSecurity supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='sectype'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tdx</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </launchSecurity>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: </domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.581 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 11:12:40 np0005535469 nova_compute[254092]: <domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <domain>kvm</domain>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <arch>x86_64</arch>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <vcpu max='240'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <iothreads supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <os supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='firmware'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <loader supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>rom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pflash</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='readonly'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>yes</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='secure'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>no</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </loader>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-passthrough' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='hostPassthroughMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='maximum' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='maximumMigratable'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>on</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>off</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='host-model' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <vendor>AMD</vendor>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='x2apic'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='hypervisor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='stibp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='overflow-recov'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='succor'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='amd-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lbrv'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='tsc-scale'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='flushbyasid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pause-filter'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='pfthreshold'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='svme-addr-chk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <feature policy='disable' name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <mode name='custom' supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Broadwell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Cooperlake-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Denverton-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Dhyana-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='auto-ibrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Milan-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amd-psfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='no-nested-data-bp'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='null-sel-clr-base'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='stibp-always-on'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-Rome-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='EPYC-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='GraniteRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-128'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-256'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx10-512'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='prefetchiti'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Haswell-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v6'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Icelake-Server-v7'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='IvyBridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='KnightsMill-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4fmaps'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-4vnniw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512er'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512pf'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G4-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Opteron_G5-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fma4'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tbm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xop'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SapphireRapids-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='amx-tile'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-bf16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-fp16'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512-vpopcntdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bitalg'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vbmi2'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrc'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fzrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='la57'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='taa-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='tsx-ldtrk'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xfd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='SierraForest-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ifma'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-ne-convert'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx-vnni-int8'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='bus-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cmpccxadd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fbsdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='fsrs'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ibrs-all'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mcdt-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pbrsb-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='psdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='sbdr-ssdp-no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='serialize'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vaes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='vpclmulqdq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Client-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='hle'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='rtm'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Skylake-Server-v5'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512bw'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512cd'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512dq'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512f'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='avx512vl'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='invpcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pcid'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='pku'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='mpx'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v2'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v3'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='core-capability'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='split-lock-detect'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='Snowridge-v4'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='cldemote'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='erms'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='gfni'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdir64b'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='movdiri'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='xsaves'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='athlon-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='core2duo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='coreduo-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='n270-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='ss'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <blockers model='phenom-v1'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnow'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <feature name='3dnowext'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </blockers>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </mode>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <memoryBacking supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <enum name='sourceType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>anonymous</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <value>memfd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </memoryBacking>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <disk supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='diskDevice'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>disk</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cdrom</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>floppy</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>lun</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ide</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>fdc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>sata</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <graphics supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vnc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egl-headless</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <video supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='modelType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vga</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>cirrus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>none</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>bochs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ramfb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hostdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='mode'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>subsystem</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='startupPolicy'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>mandatory</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>requisite</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>optional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='subsysType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pci</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>scsi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='capsType'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='pciBackend'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hostdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <rng supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtio-non-transitional</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>random</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>egd</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <filesystem supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='driverType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>path</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>handle</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>virtiofs</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </filesystem>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <tpm supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-tis</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tpm-crb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emulator</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>external</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendVersion'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>2.0</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </tpm>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <redirdev supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='bus'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>usb</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </redirdev>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <channel supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </channel>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <crypto supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendModel'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>builtin</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </crypto>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <interface supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='backendType'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>default</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>passt</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <panic supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='model'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>isa</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>hyperv</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </panic>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <console supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='type'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>null</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vc</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pty</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dev</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>file</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>pipe</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stdio</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>udp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tcp</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>unix</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>qemu-vdagent</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>dbus</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <gic supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <vmcoreinfo supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <genid supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backingStoreInput supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <backup supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <async-teardown supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <ps2 supported='yes'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sev supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <sgx supported='no'/>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <hyperv supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='features'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>relaxed</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vapic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>spinlocks</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vpindex</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>runtime</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>synic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>stimer</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reset</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>vendor_id</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>frequencies</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>reenlightenment</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tlbflush</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>ipi</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>avic</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>emsr_bitmap</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>xmm_input</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <spinlocks>4095</spinlocks>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <stimer_direct>on</stimer_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </defaults>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </hyperv>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    <launchSecurity supported='yes'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      <enum name='sectype'>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:        <value>tdx</value>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:      </enum>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:    </launchSecurity>
Nov 25 11:12:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: </domainCapabilities>
Nov 25 11:12:40 np0005535469 nova_compute[254092]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.667 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.667 254096 INFO nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Secure Boot support detected#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.670 254096 INFO nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.678 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.716 254096 INFO nova.virt.node [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Determined node identity 4f066da7-306c-41d7-8522-9a9189cacc49 from /var/lib/nova/compute_id#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.733 254096 WARNING nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Compute nodes ['4f066da7-306c-41d7-8522-9a9189cacc49'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.767 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.802 254096 WARNING nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.802 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.802 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.803 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.803 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:12:40 np0005535469 nova_compute[254092]: 2025-11-25 16:12:40.803 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:12:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:12:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880014724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.200 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:12:41 np0005535469 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 11:12:41 np0005535469 systemd[1]: Started libvirt nodedev daemon.
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.482 254096 WARNING nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.483 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.483 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.484 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.496 254096 WARNING nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] No compute node record for compute-0.ctlplane.example.com:4f066da7-306c-41d7-8522-9a9189cacc49: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 4f066da7-306c-41d7-8522-9a9189cacc49 could not be found.#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.517 254096 INFO nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 4f066da7-306c-41d7-8522-9a9189cacc49#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.585 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:12:41 np0005535469 nova_compute[254092]: 2025-11-25 16:12:41.585 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:12:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:42 np0005535469 nova_compute[254092]: 2025-11-25 16:12:42.414 254096 INFO nova.scheduler.client.report [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [req-a439584f-a634-409a-9edc-28be634c7cef] Created resource provider record via placement API for resource provider with UUID 4f066da7-306c-41d7-8522-9a9189cacc49 and name compute-0.ctlplane.example.com.#033[00m
Nov 25 11:12:42 np0005535469 nova_compute[254092]: 2025-11-25 16:12:42.775 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:12:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:12:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305549128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.223 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.228 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 11:12:43 np0005535469 nova_compute[254092]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.229 254096 INFO nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.229 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.230 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.308 254096 DEBUG nova.scheduler.client.report [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updated inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.308 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.308 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.395 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.430 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.431 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.431 254096 DEBUG nova.service [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.475 254096 DEBUG nova.service [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 25 11:12:43 np0005535469 nova_compute[254092]: 2025-11-25 16:12:43.476 254096 DEBUG nova.servicegroup.drivers.db [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 25 11:12:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.354413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170354481, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 507, "total_data_size": 2507206, "memory_usage": 2540232, "flush_reason": "Manual Compaction"}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170407326, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2462496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13394, "largest_seqno": 15165, "table_properties": {"data_size": 2454720, "index_size": 4206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 17980, "raw_average_key_size": 18, "raw_value_size": 2437367, "raw_average_value_size": 2487, "num_data_blocks": 192, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087000, "oldest_key_time": 1764087000, "file_creation_time": 1764087170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 52949 microseconds, and 6710 cpu microseconds.
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.407374) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2462496 bytes OK
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.407393) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413148) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413199) EVENT_LOG_v1 {"time_micros": 1764087170413188, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413220) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2498567, prev total WAL file size 2498567, number of live WAL files 2.
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.414032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2404KB)], [32(7194KB)]
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170414079, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9829458, "oldest_snapshot_seqno": -1}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3965 keys, 7796220 bytes, temperature: kUnknown
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170478447, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7796220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7767428, "index_size": 17802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 97356, "raw_average_key_size": 24, "raw_value_size": 7693324, "raw_average_value_size": 1940, "num_data_blocks": 756, "num_entries": 3965, "num_filter_entries": 3965, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.478735) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7796220 bytes
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.479849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 121.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4992, records dropped: 1027 output_compression: NoCompression
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.479864) EVENT_LOG_v1 {"time_micros": 1764087170479857, "job": 14, "event": "compaction_finished", "compaction_time_micros": 64434, "compaction_time_cpu_micros": 22725, "output_level": 6, "num_output_files": 1, "total_output_size": 7796220, "num_input_records": 4992, "num_output_records": 3965, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170480279, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170481432, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:12:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:12:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:12:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:52 np0005535469 podman[254462]: 2025-11-25 16:12:52.641457445 +0000 UTC m=+0.061726407 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:12:52 np0005535469 podman[254463]: 2025-11-25 16:12:52.661542318 +0000 UTC m=+0.081909683 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:12:52 np0005535469 podman[254464]: 2025-11-25 16:12:52.669884673 +0000 UTC m=+0.089128777 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:12:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:12:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:12:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327896286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327896286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651973424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:13:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651973424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:13:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:13:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106676261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:13:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:13:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106676261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:13:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:13:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:13:13.579 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:13:13.579 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:13:13.579 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:13:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:23 np0005535469 podman[254530]: 2025-11-25 16:13:23.634496071 +0000 UTC m=+0.048451638 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 11:13:23 np0005535469 podman[254529]: 2025-11-25 16:13:23.659483456 +0000 UTC m=+0.077176475 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:13:23 np0005535469 podman[254531]: 2025-11-25 16:13:23.66373018 +0000 UTC m=+0.075554431 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:13:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:13:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:13:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9ae456da-3f1e-444c-b28f-abba3dcf71c1 does not exist
Nov 25 11:13:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b3d3ec1a-03d8-4200-b489-c4dbc8eb79e4 does not exist
Nov 25 11:13:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev bf74416e-8e2e-460c-a4b8-c4be4ee165df does not exist
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:13:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.332441746 +0000 UTC m=+0.050564686 container create 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:13:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:30 np0005535469 systemd[1]: Started libpod-conmon-3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56.scope.
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.308534711 +0000 UTC m=+0.026657671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:13:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:13:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.426066334 +0000 UTC m=+0.144189304 container init 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.436416513 +0000 UTC m=+0.154539453 container start 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.440186495 +0000 UTC m=+0.158309465 container attach 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:13:30 np0005535469 practical_merkle[254998]: 167 167
Nov 25 11:13:30 np0005535469 systemd[1]: libpod-3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56.scope: Deactivated successfully.
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.445003145 +0000 UTC m=+0.163126085 container died 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 25 11:13:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7d88346ed80f00c4e2a1b43c7bbaa410f199ca2a18b247574ef9e72989b9731d-merged.mount: Deactivated successfully.
Nov 25 11:13:30 np0005535469 podman[254982]: 2025-11-25 16:13:30.499163967 +0000 UTC m=+0.217286907 container remove 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:13:30 np0005535469 systemd[1]: libpod-conmon-3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56.scope: Deactivated successfully.
Nov 25 11:13:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:13:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:13:30 np0005535469 podman[255021]: 2025-11-25 16:13:30.659954118 +0000 UTC m=+0.042338514 container create 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:13:30 np0005535469 systemd[1]: Started libpod-conmon-4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c.scope.
Nov 25 11:13:30 np0005535469 podman[255021]: 2025-11-25 16:13:30.640254327 +0000 UTC m=+0.022638743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:13:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:13:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:30 np0005535469 podman[255021]: 2025-11-25 16:13:30.771707766 +0000 UTC m=+0.154092172 container init 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:13:30 np0005535469 podman[255021]: 2025-11-25 16:13:30.782315131 +0000 UTC m=+0.164699527 container start 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:13:30 np0005535469 podman[255021]: 2025-11-25 16:13:30.789237339 +0000 UTC m=+0.171621735 container attach 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:13:31 np0005535469 elated_chandrasekhar[255037]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:13:31 np0005535469 elated_chandrasekhar[255037]: --> relative data size: 1.0
Nov 25 11:13:31 np0005535469 elated_chandrasekhar[255037]: --> All data devices are unavailable
Nov 25 11:13:31 np0005535469 systemd[1]: libpod-4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c.scope: Deactivated successfully.
Nov 25 11:13:31 np0005535469 podman[255066]: 2025-11-25 16:13:31.822768441 +0000 UTC m=+0.020467263 container died 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:13:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e-merged.mount: Deactivated successfully.
Nov 25 11:13:31 np0005535469 podman[255066]: 2025-11-25 16:13:31.874586299 +0000 UTC m=+0.072285101 container remove 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 11:13:31 np0005535469 systemd[1]: libpod-conmon-4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c.scope: Deactivated successfully.
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.404173116 +0000 UTC m=+0.039019173 container create 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:13:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:32 np0005535469 systemd[1]: Started libpod-conmon-0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82.scope.
Nov 25 11:13:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.468572595 +0000 UTC m=+0.103418682 container init 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.474431714 +0000 UTC m=+0.109277771 container start 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:13:32 np0005535469 amazing_agnesi[255240]: 167 167
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.478158414 +0000 UTC m=+0.113004531 container attach 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 11:13:32 np0005535469 systemd[1]: libpod-0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82.scope: Deactivated successfully.
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.47913563 +0000 UTC m=+0.113981687 container died 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.389067559 +0000 UTC m=+0.023913616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:13:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-74d78250b15acd04d23e7ef844ab5347e921fa48ba875b60d59b2a2f5679bfcc-merged.mount: Deactivated successfully.
Nov 25 11:13:32 np0005535469 podman[255224]: 2025-11-25 16:13:32.5087395 +0000 UTC m=+0.143585557 container remove 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:13:32 np0005535469 systemd[1]: libpod-conmon-0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82.scope: Deactivated successfully.
Nov 25 11:13:32 np0005535469 podman[255262]: 2025-11-25 16:13:32.669476799 +0000 UTC m=+0.055474059 container create 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:13:32 np0005535469 systemd[1]: Started libpod-conmon-5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621.scope.
Nov 25 11:13:32 np0005535469 podman[255262]: 2025-11-25 16:13:32.636314634 +0000 UTC m=+0.022311924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:13:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:13:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:32 np0005535469 podman[255262]: 2025-11-25 16:13:32.82397456 +0000 UTC m=+0.209971840 container init 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 11:13:32 np0005535469 podman[255262]: 2025-11-25 16:13:32.831673918 +0000 UTC m=+0.217671218 container start 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 11:13:32 np0005535469 podman[255262]: 2025-11-25 16:13:32.836504509 +0000 UTC m=+0.222501849 container attach 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]: {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:    "0": [
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:        {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "devices": [
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "/dev/loop3"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            ],
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_name": "ceph_lv0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_size": "21470642176",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "name": "ceph_lv0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "tags": {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cluster_name": "ceph",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.crush_device_class": "",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.encrypted": "0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osd_id": "0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.type": "block",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.vdo": "0"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            },
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "type": "block",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "vg_name": "ceph_vg0"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:        }
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:    ],
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:    "1": [
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:        {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "devices": [
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "/dev/loop4"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            ],
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_name": "ceph_lv1",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_size": "21470642176",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "name": "ceph_lv1",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "tags": {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cluster_name": "ceph",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.crush_device_class": "",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.encrypted": "0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osd_id": "1",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.type": "block",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.vdo": "0"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            },
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "type": "block",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "vg_name": "ceph_vg1"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:        }
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:    ],
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:    "2": [
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:        {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "devices": [
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "/dev/loop5"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            ],
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_name": "ceph_lv2",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_size": "21470642176",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "name": "ceph_lv2",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "tags": {
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.cluster_name": "ceph",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.crush_device_class": "",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.encrypted": "0",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osd_id": "2",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.type": "block",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:                "ceph.vdo": "0"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            },
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "type": "block",
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:            "vg_name": "ceph_vg2"
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:        }
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]:    ]
Nov 25 11:13:33 np0005535469 competent_leavitt[255279]: }
Nov 25 11:13:33 np0005535469 systemd[1]: libpod-5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621.scope: Deactivated successfully.
Nov 25 11:13:33 np0005535469 podman[255262]: 2025-11-25 16:13:33.636117395 +0000 UTC m=+1.022114655 container died 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:13:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c-merged.mount: Deactivated successfully.
Nov 25 11:13:33 np0005535469 podman[255262]: 2025-11-25 16:13:33.696398453 +0000 UTC m=+1.082395753 container remove 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:13:33 np0005535469 systemd[1]: libpod-conmon-5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621.scope: Deactivated successfully.
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.316270528 +0000 UTC m=+0.055734716 container create a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:13:34 np0005535469 systemd[1]: Started libpod-conmon-a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae.scope.
Nov 25 11:13:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.29040405 +0000 UTC m=+0.029868298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.391283373 +0000 UTC m=+0.130747531 container init a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.397138771 +0000 UTC m=+0.136602919 container start a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.399934477 +0000 UTC m=+0.139398665 container attach a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:13:34 np0005535469 pedantic_pascal[255458]: 167 167
Nov 25 11:13:34 np0005535469 systemd[1]: libpod-a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae.scope: Deactivated successfully.
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.403364329 +0000 UTC m=+0.142828487 container died a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:13:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-aaff202f9479d468926b2db9c4f4902c3cba11e794dcfdf71ad74d4a2259fee1-merged.mount: Deactivated successfully.
Nov 25 11:13:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:34 np0005535469 podman[255442]: 2025-11-25 16:13:34.433512553 +0000 UTC m=+0.172976711 container remove a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:13:34 np0005535469 systemd[1]: libpod-conmon-a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae.scope: Deactivated successfully.
Nov 25 11:13:34 np0005535469 nova_compute[254092]: 2025-11-25 16:13:34.477 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:34 np0005535469 nova_compute[254092]: 2025-11-25 16:13:34.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:34 np0005535469 podman[255481]: 2025-11-25 16:13:34.591873348 +0000 UTC m=+0.037376880 container create 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 11:13:34 np0005535469 systemd[1]: Started libpod-conmon-6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8.scope.
Nov 25 11:13:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:13:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:13:34 np0005535469 podman[255481]: 2025-11-25 16:13:34.664129389 +0000 UTC m=+0.109632921 container init 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:13:34 np0005535469 podman[255481]: 2025-11-25 16:13:34.670874921 +0000 UTC m=+0.116378463 container start 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 11:13:34 np0005535469 podman[255481]: 2025-11-25 16:13:34.576609276 +0000 UTC m=+0.022112828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:13:34 np0005535469 podman[255481]: 2025-11-25 16:13:34.751392614 +0000 UTC m=+0.196896176 container attach 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:13:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]: {
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "osd_id": 1,
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "type": "bluestore"
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:    },
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "osd_id": 2,
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "type": "bluestore"
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:    },
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "osd_id": 0,
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:        "type": "bluestore"
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]:    }
Nov 25 11:13:35 np0005535469 sleepy_cori[255498]: }
Nov 25 11:13:35 np0005535469 systemd[1]: libpod-6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8.scope: Deactivated successfully.
Nov 25 11:13:35 np0005535469 podman[255531]: 2025-11-25 16:13:35.636725216 +0000 UTC m=+0.021658355 container died 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:13:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5-merged.mount: Deactivated successfully.
Nov 25 11:13:35 np0005535469 podman[255531]: 2025-11-25 16:13:35.693788937 +0000 UTC m=+0.078722046 container remove 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:13:35 np0005535469 systemd[1]: libpod-conmon-6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8.scope: Deactivated successfully.
Nov 25 11:13:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:13:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:13:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 54690f2b-7ecf-487d-8d86-1f7b432247cf does not exist
Nov 25 11:13:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ffcf5784-979d-44cf-a754-097a2e0067d3 does not exist
Nov 25 11:13:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:36 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:36 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:13:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.535 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:13:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:13:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478020630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:13:39 np0005535469 nova_compute[254092]: 2025-11-25 16:13:39.963 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:13:39
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log']
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.154 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.156 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.156 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.156 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.262 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:13:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:13:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:13:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5544 writes, 23K keys, 5544 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5544 writes, 828 syncs, 6.70 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 11:13:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:13:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037459494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.742 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.748 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.764 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.766 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:13:40 np0005535469 nova_compute[254092]: 2025-11-25 16:13:40.766 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:13:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 11:13:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16247239' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 11:13:48 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 11:13:48 np0005535469 ceph-mgr[75280]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 11:13:48 np0005535469 ceph-mgr[75280]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 11:13:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:13:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1201.2 total, 600.0 interval
Cumulative writes: 6612 writes, 27K keys, 6612 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6612 writes, 1143 syncs, 5.78 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 25 11:13:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:13:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:13:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:54 np0005535469 podman[255640]: 2025-11-25 16:13:54.652606945 +0000 UTC m=+0.073052153 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 25 11:13:54 np0005535469 podman[255641]: 2025-11-25 16:13:54.666392437 +0000 UTC m=+0.082843237 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 11:13:54 np0005535469 podman[255642]: 2025-11-25 16:13:54.713540531 +0000 UTC m=+0.122419707 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 11:13:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:13:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:13:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:14:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1201.5 total, 600.0 interval
Cumulative writes: 5678 writes, 24K keys, 5678 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5678 writes, 844 syncs, 6.73 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.5 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 25 11:14:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:14:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 11:14:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/204269038' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 11:14:13 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 11:14:13 np0005535469 ceph-mgr[75280]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 11:14:13 np0005535469 ceph-mgr[75280]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 11:14:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:14:13.580 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:14:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:14:13.580 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:14:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:14:13.581 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:14:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 11:14:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:25 np0005535469 podman[255705]: 2025-11-25 16:14:25.639850513 +0000 UTC m=+0.056807614 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:14:25 np0005535469 podman[255704]: 2025-11-25 16:14:25.68495393 +0000 UTC m=+0.101360097 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:14:25 np0005535469 podman[255706]: 2025-11-25 16:14:25.685316 +0000 UTC m=+0.100373231 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:14:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:14:36 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1f5c59a4-f631-4012-a929-302685491e83 does not exist
Nov 25 11:14:36 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 04fddb78-95a1-4a10-b485-713d0cdb6911 does not exist
Nov 25 11:14:36 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a501fadc-fbcb-4853-86c8-bbe49be75768 does not exist
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:14:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:14:37 np0005535469 podman[256044]: 2025-11-25 16:14:37.331243875 +0000 UTC m=+0.020172685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:14:37 np0005535469 podman[256044]: 2025-11-25 16:14:37.517617767 +0000 UTC m=+0.206546557 container create 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:14:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:14:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:14:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:14:37 np0005535469 systemd[1]: Started libpod-conmon-45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc.scope.
Nov 25 11:14:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:14:37 np0005535469 podman[256044]: 2025-11-25 16:14:37.820288699 +0000 UTC m=+0.509217529 container init 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:14:37 np0005535469 podman[256044]: 2025-11-25 16:14:37.82628147 +0000 UTC m=+0.515210280 container start 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:14:37 np0005535469 keen_noyce[256060]: 167 167
Nov 25 11:14:37 np0005535469 systemd[1]: libpod-45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc.scope: Deactivated successfully.
Nov 25 11:14:37 np0005535469 podman[256044]: 2025-11-25 16:14:37.953281298 +0000 UTC m=+0.642210128 container attach 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:14:37 np0005535469 podman[256044]: 2025-11-25 16:14:37.954568944 +0000 UTC m=+0.643497734 container died 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:14:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cecc8d6e0434490aa99fb5ac0c2c5dff76faa9395d8a1c1f5e95a4faf40a81f6-merged.mount: Deactivated successfully.
Nov 25 11:14:38 np0005535469 podman[256044]: 2025-11-25 16:14:38.102125617 +0000 UTC m=+0.791054447 container remove 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:14:38 np0005535469 systemd[1]: libpod-conmon-45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc.scope: Deactivated successfully.
Nov 25 11:14:38 np0005535469 podman[256084]: 2025-11-25 16:14:38.275088337 +0000 UTC m=+0.045932931 container create 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:14:38 np0005535469 systemd[1]: Started libpod-conmon-9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621.scope.
Nov 25 11:14:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:14:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:38 np0005535469 podman[256084]: 2025-11-25 16:14:38.326063153 +0000 UTC m=+0.096907767 container init 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:14:38 np0005535469 podman[256084]: 2025-11-25 16:14:38.333630067 +0000 UTC m=+0.104474671 container start 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:14:38 np0005535469 podman[256084]: 2025-11-25 16:14:38.337023609 +0000 UTC m=+0.107868223 container attach 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:14:38 np0005535469 podman[256084]: 2025-11-25 16:14:38.256433753 +0000 UTC m=+0.027278397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:14:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:39 np0005535469 hopeful_hypatia[256102]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:14:39 np0005535469 hopeful_hypatia[256102]: --> relative data size: 1.0
Nov 25 11:14:39 np0005535469 hopeful_hypatia[256102]: --> All data devices are unavailable
Nov 25 11:14:39 np0005535469 systemd[1]: libpod-9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621.scope: Deactivated successfully.
Nov 25 11:14:39 np0005535469 podman[256084]: 2025-11-25 16:14:39.38201239 +0000 UTC m=+1.152857034 container died 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:14:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9-merged.mount: Deactivated successfully.
Nov 25 11:14:39 np0005535469 podman[256084]: 2025-11-25 16:14:39.432173335 +0000 UTC m=+1.203017929 container remove 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:14:39 np0005535469 systemd[1]: libpod-conmon-9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621.scope: Deactivated successfully.
Nov 25 11:14:39 np0005535469 podman[256287]: 2025-11-25 16:14:39.962862472 +0000 UTC m=+0.038870500 container create f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:14:40 np0005535469 systemd[1]: Started libpod-conmon-f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51.scope.
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:14:40
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'images', '.rgw.root']
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:14:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:14:40 np0005535469 podman[256287]: 2025-11-25 16:14:40.026101859 +0000 UTC m=+0.102109897 container init f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:14:40 np0005535469 podman[256287]: 2025-11-25 16:14:40.031810663 +0000 UTC m=+0.107818671 container start f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 11:14:40 np0005535469 wonderful_bhabha[256303]: 167 167
Nov 25 11:14:40 np0005535469 systemd[1]: libpod-f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51.scope: Deactivated successfully.
Nov 25 11:14:40 np0005535469 podman[256287]: 2025-11-25 16:14:40.035915054 +0000 UTC m=+0.111923122 container attach f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:14:40 np0005535469 podman[256287]: 2025-11-25 16:14:40.03650915 +0000 UTC m=+0.112517188 container died f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:14:40 np0005535469 podman[256287]: 2025-11-25 16:14:39.948254927 +0000 UTC m=+0.024262955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:14:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f85442d45cf649ca5f022e9314ae62cb0d96b1eaa062696c0f399e14d6568d20-merged.mount: Deactivated successfully.
Nov 25 11:14:40 np0005535469 podman[256287]: 2025-11-25 16:14:40.067563449 +0000 UTC m=+0.143571477 container remove f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:14:40 np0005535469 systemd[1]: libpod-conmon-f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51.scope: Deactivated successfully.
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:14:40 np0005535469 podman[256326]: 2025-11-25 16:14:40.223692764 +0000 UTC m=+0.034886233 container create 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:14:40 np0005535469 systemd[1]: Started libpod-conmon-383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d.scope.
Nov 25 11:14:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:14:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:40 np0005535469 podman[256326]: 2025-11-25 16:14:40.300463556 +0000 UTC m=+0.111657045 container init 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:14:40 np0005535469 podman[256326]: 2025-11-25 16:14:40.210199289 +0000 UTC m=+0.021392778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:14:40 np0005535469 podman[256326]: 2025-11-25 16:14:40.311256948 +0000 UTC m=+0.122450417 container start 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 11:14:40 np0005535469 podman[256326]: 2025-11-25 16:14:40.313997792 +0000 UTC m=+0.125191281 container attach 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:14:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.759 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.774 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.775 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.775 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.785 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.786 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.786 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.786 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.805 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:14:40 np0005535469 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]: {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:    "0": [
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:        {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "devices": [
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "/dev/loop3"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            ],
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_name": "ceph_lv0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_size": "21470642176",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "name": "ceph_lv0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "tags": {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cluster_name": "ceph",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.crush_device_class": "",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.encrypted": "0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osd_id": "0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.type": "block",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.vdo": "0"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            },
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "type": "block",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "vg_name": "ceph_vg0"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:        }
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:    ],
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:    "1": [
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:        {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "devices": [
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "/dev/loop4"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            ],
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_name": "ceph_lv1",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_size": "21470642176",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "name": "ceph_lv1",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "tags": {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cluster_name": "ceph",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.crush_device_class": "",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.encrypted": "0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osd_id": "1",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.type": "block",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.vdo": "0"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            },
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "type": "block",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "vg_name": "ceph_vg1"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:        }
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:    ],
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:    "2": [
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:        {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "devices": [
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "/dev/loop5"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            ],
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_name": "ceph_lv2",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_size": "21470642176",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "name": "ceph_lv2",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "tags": {
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.cluster_name": "ceph",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.crush_device_class": "",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.encrypted": "0",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osd_id": "2",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.type": "block",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:                "ceph.vdo": "0"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            },
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "type": "block",
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:            "vg_name": "ceph_vg2"
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:        }
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]:    ]
Nov 25 11:14:41 np0005535469 competent_ptolemy[256341]: }
Nov 25 11:14:41 np0005535469 systemd[1]: libpod-383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d.scope: Deactivated successfully.
Nov 25 11:14:41 np0005535469 podman[256326]: 2025-11-25 16:14:41.06958134 +0000 UTC m=+0.880774809 container died 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:14:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc-merged.mount: Deactivated successfully.
Nov 25 11:14:41 np0005535469 podman[256326]: 2025-11-25 16:14:41.126487906 +0000 UTC m=+0.937681375 container remove 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:14:41 np0005535469 systemd[1]: libpod-conmon-383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d.scope: Deactivated successfully.
Nov 25 11:14:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:14:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652670263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.226 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.370 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.371 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.371 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.371 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.439 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.454 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.743404014 +0000 UTC m=+0.046606367 container create 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:14:41 np0005535469 systemd[1]: Started libpod-conmon-6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77.scope.
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.725440224 +0000 UTC m=+0.028642557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:14:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.83672801 +0000 UTC m=+0.139930413 container init 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.845015721 +0000 UTC m=+0.148218034 container start 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:14:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:14:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728102570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.849317756 +0000 UTC m=+0.152520149 container attach 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:14:41 np0005535469 angry_taussig[256559]: 167 167
Nov 25 11:14:41 np0005535469 systemd[1]: libpod-6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77.scope: Deactivated successfully.
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.853790036 +0000 UTC m=+0.156992349 container died 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.873 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:14:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-323e7eb0510db34d8c153a4f08b82a7a122cb7b73cdd4ba5caef1edbcde9b061-merged.mount: Deactivated successfully.
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.884 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:14:41 np0005535469 podman[256543]: 2025-11-25 16:14:41.890471137 +0000 UTC m=+0.193673450 container remove 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:14:41 np0005535469 systemd[1]: libpod-conmon-6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77.scope: Deactivated successfully.
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.903 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.907 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:14:41 np0005535469 nova_compute[254092]: 2025-11-25 16:14:41.907 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:14:42 np0005535469 podman[256585]: 2025-11-25 16:14:42.052454149 +0000 UTC m=+0.041435719 container create 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:14:42 np0005535469 systemd[1]: Started libpod-conmon-472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197.scope.
Nov 25 11:14:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:14:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:14:42 np0005535469 podman[256585]: 2025-11-25 16:14:42.117823057 +0000 UTC m=+0.106804657 container init 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 11:14:42 np0005535469 podman[256585]: 2025-11-25 16:14:42.126497278 +0000 UTC m=+0.115478848 container start 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:14:42 np0005535469 podman[256585]: 2025-11-25 16:14:42.035417142 +0000 UTC m=+0.024398732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:14:42 np0005535469 podman[256585]: 2025-11-25 16:14:42.130444034 +0000 UTC m=+0.119425624 container attach 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:14:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:42 np0005535469 nova_compute[254092]: 2025-11-25 16:14:42.618 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:42 np0005535469 nova_compute[254092]: 2025-11-25 16:14:42.618 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:14:43 np0005535469 sharp_pike[256601]: {
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "osd_id": 1,
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "type": "bluestore"
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:    },
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "osd_id": 2,
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "type": "bluestore"
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:    },
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "osd_id": 0,
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:        "type": "bluestore"
Nov 25 11:14:43 np0005535469 sharp_pike[256601]:    }
Nov 25 11:14:43 np0005535469 sharp_pike[256601]: }
Nov 25 11:14:43 np0005535469 systemd[1]: libpod-472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197.scope: Deactivated successfully.
Nov 25 11:14:43 np0005535469 podman[256585]: 2025-11-25 16:14:43.051159665 +0000 UTC m=+1.040141235 container died 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:14:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99-merged.mount: Deactivated successfully.
Nov 25 11:14:43 np0005535469 podman[256585]: 2025-11-25 16:14:43.102306852 +0000 UTC m=+1.091288422 container remove 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:14:43 np0005535469 systemd[1]: libpod-conmon-472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197.scope: Deactivated successfully.
Nov 25 11:14:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:14:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:14:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:14:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:14:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dccdd01a-05bb-4ede-b510-4daac279521a does not exist
Nov 25 11:14:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 225a0fbe-878d-4a21-b708-07bea7aec32f does not exist
Nov 25 11:14:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:14:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:14:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:14:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:14:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:14:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/768575814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:14:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:14:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/768575814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:14:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:14:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:14:56 np0005535469 podman[256699]: 2025-11-25 16:14:56.6402554 +0000 UTC m=+0.053871351 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:14:56 np0005535469 podman[256698]: 2025-11-25 16:14:56.640034915 +0000 UTC m=+0.061030303 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 11:14:56 np0005535469 podman[256700]: 2025-11-25 16:14:56.675593125 +0000 UTC m=+0.088691832 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:14:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:15:01.986 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:15:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:15:01.987 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:15:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:15:01.987 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:15:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.533841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302533887, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1305, "num_deletes": 251, "total_data_size": 2042198, "memory_usage": 2066504, "flush_reason": "Manual Compaction"}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302545846, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2001882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15166, "largest_seqno": 16470, "table_properties": {"data_size": 1995681, "index_size": 3468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12916, "raw_average_key_size": 19, "raw_value_size": 1983240, "raw_average_value_size": 3023, "num_data_blocks": 159, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087171, "oldest_key_time": 1764087171, "file_creation_time": 1764087302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 12042 microseconds, and 5439 cpu microseconds.
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.545882) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2001882 bytes OK
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.545899) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547201) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547218) EVENT_LOG_v1 {"time_micros": 1764087302547212, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547237) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2036316, prev total WAL file size 2036316, number of live WAL files 2.
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1954KB)], [35(7613KB)]
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302547987, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9798102, "oldest_snapshot_seqno": -1}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4107 keys, 8019197 bytes, temperature: kUnknown
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302599157, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8019197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7989288, "index_size": 18535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100845, "raw_average_key_size": 24, "raw_value_size": 7912456, "raw_average_value_size": 1926, "num_data_blocks": 785, "num_entries": 4107, "num_filter_entries": 4107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.599386) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8019197 bytes
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.600885) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.2 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(8.9) write-amplify(4.0) OK, records in: 4621, records dropped: 514 output_compression: NoCompression
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.600904) EVENT_LOG_v1 {"time_micros": 1764087302600894, "job": 16, "event": "compaction_finished", "compaction_time_micros": 51241, "compaction_time_cpu_micros": 15941, "output_level": 6, "num_output_files": 1, "total_output_size": 8019197, "num_input_records": 4621, "num_output_records": 4107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302601295, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302602761, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:15:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:15:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:15:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:15:13.582 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:15:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:15:13.582 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:15:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:15:13.582 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:15:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:27 np0005535469 podman[256765]: 2025-11-25 16:15:27.639336553 +0000 UTC m=+0.059475533 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:15:27 np0005535469 podman[256766]: 2025-11-25 16:15:27.654317083 +0000 UTC m=+0.073698992 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:15:27 np0005535469 podman[256767]: 2025-11-25 16:15:27.655665368 +0000 UTC m=+0.071638076 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:15:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.532 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:15:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:15:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616823395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:15:39 np0005535469 nova_compute[254092]: 2025-11-25 16:15:39.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:15:40
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr']
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.148 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.150 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.230 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.231 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.256 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:15:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:15:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945270563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.659 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.663 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.679 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.681 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:15:40 np0005535469 nova_compute[254092]: 2025-11-25 16:15:40.681 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.681 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.681 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.681 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.694 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.695 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.695 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:41 np0005535469 nova_compute[254092]: 2025-11-25 16:15:41.695 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:42 np0005535469 nova_compute[254092]: 2025-11-25 16:15:42.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:42 np0005535469 nova_compute[254092]: 2025-11-25 16:15:42.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:42 np0005535469 nova_compute[254092]: 2025-11-25 16:15:42.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:42 np0005535469 nova_compute[254092]: 2025-11-25 16:15:42.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:15:43 np0005535469 nova_compute[254092]: 2025-11-25 16:15:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:15:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:15:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev be3640c4-23fb-4043-b2fd-93b684932138 does not exist
Nov 25 11:15:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6731975a-b50c-4aa5-9093-5431f5d20487 does not exist
Nov 25 11:15:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5fd52b80-9d7d-4aa4-98d7-57a56344630c does not exist
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:15:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:15:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:44 np0005535469 podman[257147]: 2025-11-25 16:15:44.698836587 +0000 UTC m=+0.099678926 container create 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:15:44 np0005535469 podman[257147]: 2025-11-25 16:15:44.619351482 +0000 UTC m=+0.020193831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:15:44 np0005535469 systemd[1]: Started libpod-conmon-2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b.scope.
Nov 25 11:15:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:15:45 np0005535469 podman[257147]: 2025-11-25 16:15:45.025521644 +0000 UTC m=+0.426363973 container init 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:15:45 np0005535469 podman[257147]: 2025-11-25 16:15:45.032575182 +0000 UTC m=+0.433417501 container start 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:15:45 np0005535469 zen_noether[257164]: 167 167
Nov 25 11:15:45 np0005535469 systemd[1]: libpod-2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b.scope: Deactivated successfully.
Nov 25 11:15:45 np0005535469 podman[257147]: 2025-11-25 16:15:45.126892204 +0000 UTC m=+0.527734553 container attach 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:15:45 np0005535469 podman[257147]: 2025-11-25 16:15:45.128079865 +0000 UTC m=+0.528922194 container died 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:15:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:15:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:15:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1cb58db6c1a0616e4655dbaed01b45b5efa27a1d9fe23b35cbd5780d409cc450-merged.mount: Deactivated successfully.
Nov 25 11:15:45 np0005535469 podman[257147]: 2025-11-25 16:15:45.549108065 +0000 UTC m=+0.949950394 container remove 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:15:45 np0005535469 systemd[1]: libpod-conmon-2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b.scope: Deactivated successfully.
Nov 25 11:15:45 np0005535469 podman[257188]: 2025-11-25 16:15:45.705544798 +0000 UTC m=+0.045691193 container create a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:15:45 np0005535469 podman[257188]: 2025-11-25 16:15:45.680033175 +0000 UTC m=+0.020179590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:15:45 np0005535469 systemd[1]: Started libpod-conmon-a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee.scope.
Nov 25 11:15:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:15:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:45 np0005535469 podman[257188]: 2025-11-25 16:15:45.857004818 +0000 UTC m=+0.197151213 container init a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:15:45 np0005535469 podman[257188]: 2025-11-25 16:15:45.863627795 +0000 UTC m=+0.203774180 container start a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:15:45 np0005535469 podman[257188]: 2025-11-25 16:15:45.889799495 +0000 UTC m=+0.229945920 container attach a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:15:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:46 np0005535469 elated_wilson[257205]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:15:46 np0005535469 elated_wilson[257205]: --> relative data size: 1.0
Nov 25 11:15:46 np0005535469 elated_wilson[257205]: --> All data devices are unavailable
Nov 25 11:15:46 np0005535469 systemd[1]: libpod-a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee.scope: Deactivated successfully.
Nov 25 11:15:46 np0005535469 podman[257188]: 2025-11-25 16:15:46.857183174 +0000 UTC m=+1.197329589 container died a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:15:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0-merged.mount: Deactivated successfully.
Nov 25 11:15:47 np0005535469 podman[257188]: 2025-11-25 16:15:47.386805247 +0000 UTC m=+1.726951642 container remove a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:15:47 np0005535469 systemd[1]: libpod-conmon-a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee.scope: Deactivated successfully.
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:47.913982084 +0000 UTC m=+0.023287794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:48.013273528 +0000 UTC m=+0.122579258 container create ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:15:48 np0005535469 systemd[1]: Started libpod-conmon-ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2.scope.
Nov 25 11:15:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:48.22160204 +0000 UTC m=+0.330907740 container init ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:48.228394041 +0000 UTC m=+0.337699751 container start ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 11:15:48 np0005535469 quirky_rubin[257402]: 167 167
Nov 25 11:15:48 np0005535469 systemd[1]: libpod-ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2.scope: Deactivated successfully.
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:48.278610524 +0000 UTC m=+0.387916244 container attach ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:48.279055556 +0000 UTC m=+0.388361256 container died ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:15:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d4da6d6ffc8d4a87437c7ad09430340e6fd5b764b68c857a3c795d27a78bdeb4-merged.mount: Deactivated successfully.
Nov 25 11:15:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:48 np0005535469 podman[257386]: 2025-11-25 16:15:48.62885019 +0000 UTC m=+0.738155890 container remove ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:15:48 np0005535469 systemd[1]: libpod-conmon-ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2.scope: Deactivated successfully.
Nov 25 11:15:48 np0005535469 podman[257427]: 2025-11-25 16:15:48.864853771 +0000 UTC m=+0.109567881 container create 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:15:48 np0005535469 podman[257427]: 2025-11-25 16:15:48.776518009 +0000 UTC m=+0.021232139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:15:49 np0005535469 systemd[1]: Started libpod-conmon-87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67.scope.
Nov 25 11:15:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:15:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:49 np0005535469 podman[257427]: 2025-11-25 16:15:49.396267912 +0000 UTC m=+0.640982042 container init 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:15:49 np0005535469 podman[257427]: 2025-11-25 16:15:49.403142435 +0000 UTC m=+0.647856545 container start 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 11:15:49 np0005535469 podman[257427]: 2025-11-25 16:15:49.493899243 +0000 UTC m=+0.738613363 container attach 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]: {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:    "0": [
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:        {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "devices": [
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "/dev/loop3"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            ],
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_name": "ceph_lv0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_size": "21470642176",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "name": "ceph_lv0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "tags": {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cluster_name": "ceph",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.crush_device_class": "",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.encrypted": "0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osd_id": "0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.type": "block",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.vdo": "0"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            },
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "type": "block",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "vg_name": "ceph_vg0"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:        }
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:    ],
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:    "1": [
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:        {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "devices": [
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "/dev/loop4"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            ],
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_name": "ceph_lv1",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_size": "21470642176",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "name": "ceph_lv1",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "tags": {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cluster_name": "ceph",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.crush_device_class": "",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.encrypted": "0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osd_id": "1",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.type": "block",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.vdo": "0"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            },
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "type": "block",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "vg_name": "ceph_vg1"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:        }
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:    ],
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:    "2": [
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:        {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "devices": [
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "/dev/loop5"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            ],
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_name": "ceph_lv2",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_size": "21470642176",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "name": "ceph_lv2",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "tags": {
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.cluster_name": "ceph",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.crush_device_class": "",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.encrypted": "0",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osd_id": "2",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.type": "block",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:                "ceph.vdo": "0"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            },
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "type": "block",
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:            "vg_name": "ceph_vg2"
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:        }
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]:    ]
Nov 25 11:15:50 np0005535469 exciting_ishizaka[257443]: }
Nov 25 11:15:50 np0005535469 systemd[1]: libpod-87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67.scope: Deactivated successfully.
Nov 25 11:15:50 np0005535469 podman[257452]: 2025-11-25 16:15:50.208305216 +0000 UTC m=+0.027160947 container died 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 11:15:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad-merged.mount: Deactivated successfully.
Nov 25 11:15:50 np0005535469 podman[257452]: 2025-11-25 16:15:50.832412966 +0000 UTC m=+0.651268617 container remove 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:15:50 np0005535469 systemd[1]: libpod-conmon-87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67.scope: Deactivated successfully.
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:15:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:15:51 np0005535469 podman[257607]: 2025-11-25 16:15:51.537094389 +0000 UTC m=+0.104685630 container create 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:15:51 np0005535469 podman[257607]: 2025-11-25 16:15:51.452481617 +0000 UTC m=+0.020072888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:15:51 np0005535469 systemd[1]: Started libpod-conmon-425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e.scope.
Nov 25 11:15:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:15:51 np0005535469 podman[257607]: 2025-11-25 16:15:51.905793539 +0000 UTC m=+0.473384790 container init 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:15:51 np0005535469 podman[257607]: 2025-11-25 16:15:51.913921666 +0000 UTC m=+0.481512897 container start 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:15:51 np0005535469 xenodochial_stonebraker[257623]: 167 167
Nov 25 11:15:51 np0005535469 systemd[1]: libpod-425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e.scope: Deactivated successfully.
Nov 25 11:15:51 np0005535469 podman[257607]: 2025-11-25 16:15:51.963553673 +0000 UTC m=+0.531144904 container attach 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:15:51 np0005535469 podman[257607]: 2025-11-25 16:15:51.964141289 +0000 UTC m=+0.531732520 container died 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:15:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9064734063b702ac7b02467e929cc20e79c430d84ec8b36b7a667e1da2c0e0e7-merged.mount: Deactivated successfully.
Nov 25 11:15:52 np0005535469 podman[257607]: 2025-11-25 16:15:52.036911705 +0000 UTC m=+0.604502936 container remove 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:15:52 np0005535469 systemd[1]: libpod-conmon-425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e.scope: Deactivated successfully.
Nov 25 11:15:52 np0005535469 podman[257647]: 2025-11-25 16:15:52.19569364 +0000 UTC m=+0.041510481 container create df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:15:52 np0005535469 systemd[1]: Started libpod-conmon-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope.
Nov 25 11:15:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:15:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:15:52 np0005535469 podman[257647]: 2025-11-25 16:15:52.260986467 +0000 UTC m=+0.106803338 container init df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:15:52 np0005535469 podman[257647]: 2025-11-25 16:15:52.267743507 +0000 UTC m=+0.113560348 container start df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:15:52 np0005535469 podman[257647]: 2025-11-25 16:15:52.271523688 +0000 UTC m=+0.117340549 container attach df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:15:52 np0005535469 podman[257647]: 2025-11-25 16:15:52.17995363 +0000 UTC m=+0.025770491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:15:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]: {
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "osd_id": 1,
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "type": "bluestore"
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:    },
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "osd_id": 2,
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "type": "bluestore"
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:    },
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "osd_id": 0,
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:        "type": "bluestore"
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]:    }
Nov 25 11:15:53 np0005535469 eloquent_beaver[257664]: }
Nov 25 11:15:53 np0005535469 systemd[1]: libpod-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope: Deactivated successfully.
Nov 25 11:15:53 np0005535469 conmon[257664]: conmon df4cb368b8a4ecf70afd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope/container/memory.events
Nov 25 11:15:53 np0005535469 podman[257647]: 2025-11-25 16:15:53.216072917 +0000 UTC m=+1.061889758 container died df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:15:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224-merged.mount: Deactivated successfully.
Nov 25 11:15:53 np0005535469 podman[257647]: 2025-11-25 16:15:53.275046784 +0000 UTC m=+1.120863635 container remove df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:15:53 np0005535469 systemd[1]: libpod-conmon-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope: Deactivated successfully.
Nov 25 11:15:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:15:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:15:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:15:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:15:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 435e1140-df1d-43ae-8989-a8ea973be967 does not exist
Nov 25 11:15:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f8c11953-e7f8-4a26-babb-521a4030b5a9 does not exist
Nov 25 11:15:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:15:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:15:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:15:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3531795733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:15:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:15:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3531795733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:15:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:15:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:15:58 np0005535469 podman[257762]: 2025-11-25 16:15:58.643813029 +0000 UTC m=+0.060366284 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:15:58 np0005535469 podman[257763]: 2025-11-25 16:15:58.671558112 +0000 UTC m=+0.081979343 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:15:58 np0005535469 podman[257761]: 2025-11-25 16:15:58.671740626 +0000 UTC m=+0.088141607 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:16:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 25 11:16:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 11:16:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 11:16:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 11:16:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:16:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 25 11:16:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 11:16:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:16:13.583 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:16:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:16:13.584 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:16:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:16:13.584 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:16:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 11:16:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 25 11:16:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 11:16:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 11:16:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 25 11:16:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 11:16:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 11:16:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 25 11:16:29 np0005535469 podman[257823]: 2025-11-25 16:16:29.665384828 +0000 UTC m=+0.071778411 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:16:29 np0005535469 podman[257822]: 2025-11-25 16:16:29.667428962 +0000 UTC m=+0.077704478 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:16:29 np0005535469 podman[257824]: 2025-11-25 16:16:29.690796087 +0000 UTC m=+0.101850255 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:16:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 11:16:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:39 np0005535469 nova_compute[254092]: 2025-11-25 16:16:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:39 np0005535469 nova_compute[254092]: 2025-11-25 16:16:39.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:16:39 np0005535469 nova_compute[254092]: 2025-11-25 16:16:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:16:39 np0005535469 nova_compute[254092]: 2025-11-25 16:16:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:16:39 np0005535469 nova_compute[254092]: 2025-11-25 16:16:39.524 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:16:39 np0005535469 nova_compute[254092]: 2025-11-25 16:16:39.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:16:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:16:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919789275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.002 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:16:40
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'vms', 'volumes', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.171 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.172 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.172 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.173 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.225 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.225 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.240 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:16:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:16:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638642740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.671 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.678 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.704 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.706 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:16:40 np0005535469 nova_compute[254092]: 2025-11-25 16:16:40.706 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:16:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.707 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.707 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.726 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.726 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:43 np0005535469 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:44 np0005535469 nova_compute[254092]: 2025-11-25 16:16:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:16:44 np0005535469 nova_compute[254092]: 2025-11-25 16:16:44.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:16:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:16:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:16:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:54 np0005535469 podman[258100]: 2025-11-25 16:16:54.343144037 +0000 UTC m=+0.224650168 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:16:54 np0005535469 podman[258100]: 2025-11-25 16:16:54.478079981 +0000 UTC m=+0.359586062 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:16:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042639771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042639771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:16:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4f2f4a8d-95b5-433c-87af-7d1e66878b0b does not exist
Nov 25 11:16:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5a09352a-8517-4f5b-be66-1cbe5361a957 does not exist
Nov 25 11:16:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 696dbcea-ec2b-4744-a7ed-f924ca7f1309 does not exist
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.366771008 +0000 UTC m=+0.036175818 container create 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:16:56 np0005535469 systemd[1]: Started libpod-conmon-450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170.scope.
Nov 25 11:16:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.349133302 +0000 UTC m=+0.018538142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.453254734 +0000 UTC m=+0.122659594 container init 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.460984663 +0000 UTC m=+0.130389493 container start 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:16:56 np0005535469 inspiring_swartz[258544]: 167 167
Nov 25 11:16:56 np0005535469 systemd[1]: libpod-450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170.scope: Deactivated successfully.
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.473186622 +0000 UTC m=+0.142591492 container attach 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.474062226 +0000 UTC m=+0.143467046 container died 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:16:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ec3719d3e3750e0d855951d557fd1341c96ae0c8cf45a708348e48ecd796e87f-merged.mount: Deactivated successfully.
Nov 25 11:16:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:56 np0005535469 podman[258528]: 2025-11-25 16:16:56.513968944 +0000 UTC m=+0.183373754 container remove 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:16:56 np0005535469 systemd[1]: libpod-conmon-450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170.scope: Deactivated successfully.
Nov 25 11:16:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:16:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:16:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:16:56 np0005535469 podman[258570]: 2025-11-25 16:16:56.660376567 +0000 UTC m=+0.042835557 container create 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:16:56 np0005535469 systemd[1]: Started libpod-conmon-25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be.scope.
Nov 25 11:16:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:16:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:56 np0005535469 podman[258570]: 2025-11-25 16:16:56.640357387 +0000 UTC m=+0.022816417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:16:56 np0005535469 podman[258570]: 2025-11-25 16:16:56.734802188 +0000 UTC m=+0.117261188 container init 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:16:56 np0005535469 podman[258570]: 2025-11-25 16:16:56.746546055 +0000 UTC m=+0.129005035 container start 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:16:56 np0005535469 podman[258570]: 2025-11-25 16:16:56.749901965 +0000 UTC m=+0.132360965 container attach 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:16:57 np0005535469 beautiful_hamilton[258586]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:16:57 np0005535469 beautiful_hamilton[258586]: --> relative data size: 1.0
Nov 25 11:16:57 np0005535469 beautiful_hamilton[258586]: --> All data devices are unavailable
Nov 25 11:16:57 np0005535469 systemd[1]: libpod-25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be.scope: Deactivated successfully.
Nov 25 11:16:57 np0005535469 podman[258615]: 2025-11-25 16:16:57.801064223 +0000 UTC m=+0.024586525 container died 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:16:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11-merged.mount: Deactivated successfully.
Nov 25 11:16:57 np0005535469 podman[258615]: 2025-11-25 16:16:57.851714251 +0000 UTC m=+0.075236463 container remove 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 11:16:57 np0005535469 systemd[1]: libpod-conmon-25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be.scope: Deactivated successfully.
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.433949166 +0000 UTC m=+0.039832127 container create 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:16:58 np0005535469 systemd[1]: Started libpod-conmon-519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0.scope.
Nov 25 11:16:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:16:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.416609077 +0000 UTC m=+0.022492058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.525388985 +0000 UTC m=+0.131271966 container init 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.532100266 +0000 UTC m=+0.137983227 container start 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.535424076 +0000 UTC m=+0.141307067 container attach 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:16:58 np0005535469 flamboyant_wiles[258787]: 167 167
Nov 25 11:16:58 np0005535469 systemd[1]: libpod-519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0.scope: Deactivated successfully.
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.536380582 +0000 UTC m=+0.142263533 container died 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:16:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-84649de9f5f5473245159c4bb29f41c65e63f1ef7a6e5c65c7fa08866390eb70-merged.mount: Deactivated successfully.
Nov 25 11:16:58 np0005535469 podman[258771]: 2025-11-25 16:16:58.567248806 +0000 UTC m=+0.173131767 container remove 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:16:58 np0005535469 systemd[1]: libpod-conmon-519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0.scope: Deactivated successfully.
Nov 25 11:16:58 np0005535469 podman[258810]: 2025-11-25 16:16:58.727826302 +0000 UTC m=+0.040709330 container create 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:16:58 np0005535469 systemd[1]: Started libpod-conmon-366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b.scope.
Nov 25 11:16:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:16:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:16:58 np0005535469 podman[258810]: 2025-11-25 16:16:58.808052769 +0000 UTC m=+0.120935867 container init 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:16:58 np0005535469 podman[258810]: 2025-11-25 16:16:58.71292021 +0000 UTC m=+0.025803258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:16:58 np0005535469 podman[258810]: 2025-11-25 16:16:58.815359266 +0000 UTC m=+0.128242304 container start 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:16:58 np0005535469 podman[258810]: 2025-11-25 16:16:58.818279915 +0000 UTC m=+0.131162973 container attach 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]: {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:    "0": [
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:        {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "devices": [
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "/dev/loop3"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            ],
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_name": "ceph_lv0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_size": "21470642176",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "name": "ceph_lv0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "tags": {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cluster_name": "ceph",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.crush_device_class": "",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.encrypted": "0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osd_id": "0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.type": "block",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.vdo": "0"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            },
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "type": "block",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "vg_name": "ceph_vg0"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:        }
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:    ],
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:    "1": [
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:        {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "devices": [
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "/dev/loop4"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            ],
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_name": "ceph_lv1",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_size": "21470642176",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "name": "ceph_lv1",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "tags": {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cluster_name": "ceph",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.crush_device_class": "",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.encrypted": "0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osd_id": "1",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.type": "block",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.vdo": "0"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            },
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "type": "block",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "vg_name": "ceph_vg1"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:        }
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:    ],
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:    "2": [
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:        {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "devices": [
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "/dev/loop5"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            ],
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_name": "ceph_lv2",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_size": "21470642176",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "name": "ceph_lv2",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "tags": {
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.cluster_name": "ceph",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.crush_device_class": "",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.encrypted": "0",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osd_id": "2",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.type": "block",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:                "ceph.vdo": "0"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            },
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "type": "block",
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:            "vg_name": "ceph_vg2"
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:        }
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]:    ]
Nov 25 11:16:59 np0005535469 nervous_vaughan[258827]: }
Nov 25 11:16:59 np0005535469 systemd[1]: libpod-366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b.scope: Deactivated successfully.
Nov 25 11:16:59 np0005535469 podman[258810]: 2025-11-25 16:16:59.597952992 +0000 UTC m=+0.910836020 container died 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:16:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1-merged.mount: Deactivated successfully.
Nov 25 11:16:59 np0005535469 podman[258810]: 2025-11-25 16:16:59.654118499 +0000 UTC m=+0.967001527 container remove 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:16:59 np0005535469 systemd[1]: libpod-conmon-366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b.scope: Deactivated successfully.
Nov 25 11:16:59 np0005535469 podman[258872]: 2025-11-25 16:16:59.840905352 +0000 UTC m=+0.058855150 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 11:16:59 np0005535469 podman[258874]: 2025-11-25 16:16:59.863435371 +0000 UTC m=+0.081254856 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 11:16:59 np0005535469 podman[258873]: 2025-11-25 16:16:59.86706954 +0000 UTC m=+0.085194023 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.288795229 +0000 UTC m=+0.038222323 container create fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:17:00 np0005535469 systemd[1]: Started libpod-conmon-fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688.scope.
Nov 25 11:17:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.356745144 +0000 UTC m=+0.106172268 container init fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.363054125 +0000 UTC m=+0.112481219 container start fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:17:00 np0005535469 focused_rhodes[259067]: 167 167
Nov 25 11:17:00 np0005535469 systemd[1]: libpod-fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688.scope: Deactivated successfully.
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.366689502 +0000 UTC m=+0.116116616 container attach fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.367846383 +0000 UTC m=+0.117273487 container died fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.27218008 +0000 UTC m=+0.021607204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:17:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-66a83a510060c37154acf89506e391d13f0882b4143be5ba704df03d4648fe56-merged.mount: Deactivated successfully.
Nov 25 11:17:00 np0005535469 podman[259051]: 2025-11-25 16:17:00.399784006 +0000 UTC m=+0.149211100 container remove fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:17:00 np0005535469 systemd[1]: libpod-conmon-fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688.scope: Deactivated successfully.
Nov 25 11:17:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:00 np0005535469 podman[259091]: 2025-11-25 16:17:00.548155553 +0000 UTC m=+0.036423664 container create 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 11:17:00 np0005535469 systemd[1]: Started libpod-conmon-735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09.scope.
Nov 25 11:17:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:17:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:17:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:17:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:17:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:17:00 np0005535469 podman[259091]: 2025-11-25 16:17:00.621681979 +0000 UTC m=+0.109950100 container init 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:17:00 np0005535469 podman[259091]: 2025-11-25 16:17:00.628494763 +0000 UTC m=+0.116762874 container start 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:17:00 np0005535469 podman[259091]: 2025-11-25 16:17:00.5343504 +0000 UTC m=+0.022618531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:17:00 np0005535469 podman[259091]: 2025-11-25 16:17:00.631577567 +0000 UTC m=+0.119845688 container attach 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:17:01 np0005535469 funny_leakey[259107]: {
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "osd_id": 1,
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "type": "bluestore"
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:    },
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "osd_id": 2,
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "type": "bluestore"
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:    },
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "osd_id": 0,
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:        "type": "bluestore"
Nov 25 11:17:01 np0005535469 funny_leakey[259107]:    }
Nov 25 11:17:01 np0005535469 funny_leakey[259107]: }
Nov 25 11:17:01 np0005535469 systemd[1]: libpod-735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09.scope: Deactivated successfully.
Nov 25 11:17:01 np0005535469 podman[259140]: 2025-11-25 16:17:01.597660806 +0000 UTC m=+0.020430432 container died 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:17:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9-merged.mount: Deactivated successfully.
Nov 25 11:17:01 np0005535469 podman[259140]: 2025-11-25 16:17:01.673609208 +0000 UTC m=+0.096378834 container remove 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 11:17:01 np0005535469 systemd[1]: libpod-conmon-735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09.scope: Deactivated successfully.
Nov 25 11:17:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:17:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:17:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:17:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:17:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ba3947af-46a3-4f74-afff-8023cd600b1c does not exist
Nov 25 11:17:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 484b273f-f87d-4f23-b532-954490120cfb does not exist
Nov 25 11:17:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:17:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:17:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:17:13.585 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:17:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:17:13.586 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:17:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:17:13.586 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:17:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:30 np0005535469 podman[259206]: 2025-11-25 16:17:30.666742976 +0000 UTC m=+0.076818245 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 11:17:30 np0005535469 podman[259207]: 2025-11-25 16:17:30.675718439 +0000 UTC m=+0.085982753 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:17:30 np0005535469 podman[259208]: 2025-11-25 16:17:30.685521224 +0000 UTC m=+0.089734694 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 11:17:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:39 np0005535469 nova_compute[254092]: 2025-11-25 16:17:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:39 np0005535469 nova_compute[254092]: 2025-11-25 16:17:39.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 11:17:39 np0005535469 nova_compute[254092]: 2025-11-25 16:17:39.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 11:17:39 np0005535469 nova_compute[254092]: 2025-11-25 16:17:39.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:39 np0005535469 nova_compute[254092]: 2025-11-25 16:17:39.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 11:17:39 np0005535469 nova_compute[254092]: 2025-11-25 16:17:39.559 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:17:40
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'images', 'default.rgw.meta', '.mgr', '.rgw.root']
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:17:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:40 np0005535469 nova_compute[254092]: 2025-11-25 16:17:40.578 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:40 np0005535469 nova_compute[254092]: 2025-11-25 16:17:40.616 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:17:40 np0005535469 nova_compute[254092]: 2025-11-25 16:17:40.616 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:17:40 np0005535469 nova_compute[254092]: 2025-11-25 16:17:40.617 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:17:40 np0005535469 nova_compute[254092]: 2025-11-25 16:17:40.617 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:17:40 np0005535469 nova_compute[254092]: 2025-11-25 16:17:40.617 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:17:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:17:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520662425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.040 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.206 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.207 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.208 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.208 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.274 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.274 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.294 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:17:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:17:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592436445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.764 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.773 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.791 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.794 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:17:41 np0005535469 nova_compute[254092]: 2025-11-25 16:17:41.794 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:17:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:43 np0005535469 nova_compute[254092]: 2025-11-25 16:17:43.712 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:43 np0005535469 nova_compute[254092]: 2025-11-25 16:17:43.712 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:43 np0005535469 nova_compute[254092]: 2025-11-25 16:17:43.713 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:43 np0005535469 nova_compute[254092]: 2025-11-25 16:17:43.713 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:43 np0005535469 nova_compute[254092]: 2025-11-25 16:17:43.713 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:44 np0005535469 nova_compute[254092]: 2025-11-25 16:17:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:44 np0005535469 nova_compute[254092]: 2025-11-25 16:17:44.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:17:44 np0005535469 nova_compute[254092]: 2025-11-25 16:17:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:17:44 np0005535469 nova_compute[254092]: 2025-11-25 16:17:44.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:17:44 np0005535469 nova_compute[254092]: 2025-11-25 16:17:44.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:44 np0005535469 nova_compute[254092]: 2025-11-25 16:17:44.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:17:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:45 np0005535469 nova_compute[254092]: 2025-11-25 16:17:45.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:17:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:17:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:17:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:17:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225908457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:17:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:17:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225908457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:17:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:17:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:17:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:01 np0005535469 podman[259315]: 2025-11-25 16:18:01.68253941 +0000 UTC m=+0.091384259 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:18:01 np0005535469 podman[259317]: 2025-11-25 16:18:01.699857518 +0000 UTC m=+0.102899181 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:18:01 np0005535469 podman[259316]: 2025-11-25 16:18:01.701724928 +0000 UTC m=+0.104550925 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:18:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:18:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11f74cfa-aa01-4b8e-b12d-9e912439092a does not exist
Nov 25 11:18:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 22df2f84-7673-4fc4-9617-f5ad03d5d0b3 does not exist
Nov 25 11:18:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 21e659e9-0922-4c62-9c5a-446a5ae6ff69 does not exist
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:18:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.567471906 +0000 UTC m=+0.052033157 container create 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:18:03 np0005535469 systemd[1]: Started libpod-conmon-016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5.scope.
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.542385938 +0000 UTC m=+0.026947279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:18:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.658017271 +0000 UTC m=+0.142578582 container init 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.664650819 +0000 UTC m=+0.149212060 container start 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.667761423 +0000 UTC m=+0.152322734 container attach 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:18:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:18:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:18:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:18:03 np0005535469 boring_davinci[259667]: 167 167
Nov 25 11:18:03 np0005535469 systemd[1]: libpod-016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5.scope: Deactivated successfully.
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.66985312 +0000 UTC m=+0.154414371 container died 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:18:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2bc8817d10a833dc6061df6bd8aa5bec2cd6f3e931c3eda80506e4d48a81cd75-merged.mount: Deactivated successfully.
Nov 25 11:18:03 np0005535469 podman[259651]: 2025-11-25 16:18:03.708867324 +0000 UTC m=+0.193428565 container remove 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:18:03 np0005535469 systemd[1]: libpod-conmon-016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5.scope: Deactivated successfully.
Nov 25 11:18:03 np0005535469 podman[259691]: 2025-11-25 16:18:03.920136039 +0000 UTC m=+0.085651224 container create 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:18:03 np0005535469 systemd[1]: Started libpod-conmon-65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608.scope.
Nov 25 11:18:03 np0005535469 podman[259691]: 2025-11-25 16:18:03.902615906 +0000 UTC m=+0.068131171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:18:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:18:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:04 np0005535469 podman[259691]: 2025-11-25 16:18:04.024972061 +0000 UTC m=+0.190487276 container init 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:18:04 np0005535469 podman[259691]: 2025-11-25 16:18:04.035121785 +0000 UTC m=+0.200636980 container start 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:18:04 np0005535469 podman[259691]: 2025-11-25 16:18:04.03865587 +0000 UTC m=+0.204171065 container attach 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:18:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:05 np0005535469 nice_montalcini[259707]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:18:05 np0005535469 nice_montalcini[259707]: --> relative data size: 1.0
Nov 25 11:18:05 np0005535469 nice_montalcini[259707]: --> All data devices are unavailable
Nov 25 11:18:05 np0005535469 systemd[1]: libpod-65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608.scope: Deactivated successfully.
Nov 25 11:18:05 np0005535469 podman[259691]: 2025-11-25 16:18:05.084075234 +0000 UTC m=+1.249590429 container died 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:18:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b-merged.mount: Deactivated successfully.
Nov 25 11:18:05 np0005535469 podman[259691]: 2025-11-25 16:18:05.21278196 +0000 UTC m=+1.378297155 container remove 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:18:05 np0005535469 systemd[1]: libpod-conmon-65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608.scope: Deactivated successfully.
Nov 25 11:18:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:05 np0005535469 podman[259891]: 2025-11-25 16:18:05.948114748 +0000 UTC m=+0.050102184 container create ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:18:05 np0005535469 systemd[1]: Started libpod-conmon-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope.
Nov 25 11:18:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:18:06 np0005535469 podman[259891]: 2025-11-25 16:18:05.92450889 +0000 UTC m=+0.026496426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:18:06 np0005535469 podman[259891]: 2025-11-25 16:18:06.024498091 +0000 UTC m=+0.126485547 container init ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:18:06 np0005535469 podman[259891]: 2025-11-25 16:18:06.036020902 +0000 UTC m=+0.138008378 container start ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:18:06 np0005535469 systemd[1]: libpod-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope: Deactivated successfully.
Nov 25 11:18:06 np0005535469 great_bassi[259907]: 167 167
Nov 25 11:18:06 np0005535469 conmon[259907]: conmon ad8d7e60fa15d57b910b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope/container/memory.events
Nov 25 11:18:06 np0005535469 podman[259891]: 2025-11-25 16:18:06.040988357 +0000 UTC m=+0.142975813 container attach ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:18:06 np0005535469 podman[259891]: 2025-11-25 16:18:06.041363267 +0000 UTC m=+0.143350713 container died ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:18:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1d0a9caadac5e3d9bb71a1d14cf9b16f08eb120a8a38baeefd86855d5243c920-merged.mount: Deactivated successfully.
Nov 25 11:18:06 np0005535469 podman[259891]: 2025-11-25 16:18:06.079942228 +0000 UTC m=+0.181929684 container remove ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:18:06 np0005535469 systemd[1]: libpod-conmon-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope: Deactivated successfully.
Nov 25 11:18:06 np0005535469 podman[259929]: 2025-11-25 16:18:06.250290759 +0000 UTC m=+0.040663039 container create 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:18:06 np0005535469 systemd[1]: Started libpod-conmon-6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a.scope.
Nov 25 11:18:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:18:06 np0005535469 podman[259929]: 2025-11-25 16:18:06.230984457 +0000 UTC m=+0.021356727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:18:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:06 np0005535469 podman[259929]: 2025-11-25 16:18:06.347986317 +0000 UTC m=+0.138358587 container init 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:18:06 np0005535469 podman[259929]: 2025-11-25 16:18:06.360161827 +0000 UTC m=+0.150534107 container start 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:18:06 np0005535469 podman[259929]: 2025-11-25 16:18:06.363798205 +0000 UTC m=+0.154170465 container attach 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:18:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]: {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:    "0": [
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:        {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "devices": [
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "/dev/loop3"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            ],
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_name": "ceph_lv0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_size": "21470642176",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "name": "ceph_lv0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "tags": {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cluster_name": "ceph",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.crush_device_class": "",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.encrypted": "0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osd_id": "0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.type": "block",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.vdo": "0"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            },
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "type": "block",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "vg_name": "ceph_vg0"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:        }
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:    ],
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:    "1": [
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:        {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "devices": [
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "/dev/loop4"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            ],
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_name": "ceph_lv1",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_size": "21470642176",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "name": "ceph_lv1",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "tags": {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cluster_name": "ceph",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.crush_device_class": "",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.encrypted": "0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osd_id": "1",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.type": "block",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.vdo": "0"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            },
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "type": "block",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "vg_name": "ceph_vg1"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:        }
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:    ],
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:    "2": [
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:        {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "devices": [
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "/dev/loop5"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            ],
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_name": "ceph_lv2",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_size": "21470642176",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "name": "ceph_lv2",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "tags": {
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.cluster_name": "ceph",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.crush_device_class": "",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.encrypted": "0",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osd_id": "2",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.type": "block",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:                "ceph.vdo": "0"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            },
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "type": "block",
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:            "vg_name": "ceph_vg2"
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:        }
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]:    ]
Nov 25 11:18:07 np0005535469 distracted_vaughan[259946]: }
Nov 25 11:18:07 np0005535469 systemd[1]: libpod-6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a.scope: Deactivated successfully.
Nov 25 11:18:07 np0005535469 podman[259929]: 2025-11-25 16:18:07.11934094 +0000 UTC m=+0.909713240 container died 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:18:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5-merged.mount: Deactivated successfully.
Nov 25 11:18:07 np0005535469 podman[259929]: 2025-11-25 16:18:07.183929644 +0000 UTC m=+0.974301924 container remove 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 11:18:07 np0005535469 systemd[1]: libpod-conmon-6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a.scope: Deactivated successfully.
Nov 25 11:18:07 np0005535469 podman[260111]: 2025-11-25 16:18:07.977820264 +0000 UTC m=+0.060092514 container create 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 11:18:08 np0005535469 systemd[1]: Started libpod-conmon-0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310.scope.
Nov 25 11:18:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:18:08 np0005535469 podman[260111]: 2025-11-25 16:18:07.958019189 +0000 UTC m=+0.040291429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:18:08 np0005535469 podman[260111]: 2025-11-25 16:18:08.066480159 +0000 UTC m=+0.148752429 container init 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:18:08 np0005535469 podman[260111]: 2025-11-25 16:18:08.076083198 +0000 UTC m=+0.158355448 container start 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:18:08 np0005535469 podman[260111]: 2025-11-25 16:18:08.079374086 +0000 UTC m=+0.161646386 container attach 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:18:08 np0005535469 thirsty_almeida[260127]: 167 167
Nov 25 11:18:08 np0005535469 systemd[1]: libpod-0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310.scope: Deactivated successfully.
Nov 25 11:18:08 np0005535469 podman[260111]: 2025-11-25 16:18:08.084021082 +0000 UTC m=+0.166293342 container died 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:18:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-098a43f94108413b595874b99fc9c95e16b8d326155f37e6224ade0d2ee92ed4-merged.mount: Deactivated successfully.
Nov 25 11:18:08 np0005535469 podman[260111]: 2025-11-25 16:18:08.120504707 +0000 UTC m=+0.202776927 container remove 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:18:08 np0005535469 systemd[1]: libpod-conmon-0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310.scope: Deactivated successfully.
Nov 25 11:18:08 np0005535469 podman[260150]: 2025-11-25 16:18:08.344396304 +0000 UTC m=+0.046800625 container create 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:18:08 np0005535469 systemd[1]: Started libpod-conmon-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope.
Nov 25 11:18:08 np0005535469 podman[260150]: 2025-11-25 16:18:08.320416976 +0000 UTC m=+0.022821307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:18:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:18:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:18:08 np0005535469 podman[260150]: 2025-11-25 16:18:08.454409315 +0000 UTC m=+0.156813636 container init 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:18:08 np0005535469 podman[260150]: 2025-11-25 16:18:08.460096998 +0000 UTC m=+0.162501289 container start 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:18:08 np0005535469 podman[260150]: 2025-11-25 16:18:08.463936112 +0000 UTC m=+0.166340413 container attach 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 11:18:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:09 np0005535469 gracious_colden[260166]: {
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "osd_id": 1,
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "type": "bluestore"
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:    },
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "osd_id": 2,
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "type": "bluestore"
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:    },
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "osd_id": 0,
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:        "type": "bluestore"
Nov 25 11:18:09 np0005535469 gracious_colden[260166]:    }
Nov 25 11:18:09 np0005535469 gracious_colden[260166]: }
Nov 25 11:18:09 np0005535469 systemd[1]: libpod-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope: Deactivated successfully.
Nov 25 11:18:09 np0005535469 systemd[1]: libpod-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope: Consumed 1.144s CPU time.
Nov 25 11:18:09 np0005535469 podman[260150]: 2025-11-25 16:18:09.599337896 +0000 UTC m=+1.301742247 container died 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:18:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4-merged.mount: Deactivated successfully.
Nov 25 11:18:09 np0005535469 podman[260150]: 2025-11-25 16:18:09.67503285 +0000 UTC m=+1.377437151 container remove 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:18:09 np0005535469 systemd[1]: libpod-conmon-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope: Deactivated successfully.
Nov 25 11:18:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:18:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:18:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:18:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:18:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a0fef568-ee18-44f0-a441-b573c17694eb does not exist
Nov 25 11:18:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 77f1e62b-04d1-487c-82e8-3e3e9288c046 does not exist
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:18:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:18:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:18:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:18:13.586 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:18:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:18:13.587 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:18:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:18:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:18:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.584992) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505585036, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1827, "num_deletes": 252, "total_data_size": 3039018, "memory_usage": 3084856, "flush_reason": "Manual Compaction"}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505611892, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1730282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16471, "largest_seqno": 18297, "table_properties": {"data_size": 1724297, "index_size": 2996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15145, "raw_average_key_size": 20, "raw_value_size": 1711044, "raw_average_value_size": 2281, "num_data_blocks": 139, "num_entries": 750, "num_filter_entries": 750, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087303, "oldest_key_time": 1764087303, "file_creation_time": 1764087505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 26998 microseconds, and 5801 cpu microseconds.
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.611989) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1730282 bytes OK
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.612015) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.613794) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.613824) EVENT_LOG_v1 {"time_micros": 1764087505613814, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.613851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3031257, prev total WAL file size 3031257, number of live WAL files 2.
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.615782) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373537' seq:0, type:0; will stop at (end)
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1689KB)], [38(7831KB)]
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505615881, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9749479, "oldest_snapshot_seqno": -1}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4436 keys, 7672410 bytes, temperature: kUnknown
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505711551, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7672410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7642237, "index_size": 17967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107826, "raw_average_key_size": 24, "raw_value_size": 7561507, "raw_average_value_size": 1704, "num_data_blocks": 765, "num_entries": 4436, "num_filter_entries": 4436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.711823) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7672410 bytes
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.8 rd, 80.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.1) write-amplify(4.4) OK, records in: 4857, records dropped: 421 output_compression: NoCompression
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714287) EVENT_LOG_v1 {"time_micros": 1764087505714279, "job": 18, "event": "compaction_finished", "compaction_time_micros": 95789, "compaction_time_cpu_micros": 20177, "output_level": 6, "num_output_files": 1, "total_output_size": 7672410, "num_input_records": 4857, "num_output_records": 4436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.615610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505716083, "job": 0, "event": "table_file_deletion", "file_number": 40}
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:18:25 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505717865, "job": 0, "event": "table_file_deletion", "file_number": 38}
Nov 25 11:18:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:32 np0005535469 podman[260265]: 2025-11-25 16:18:32.698943461 +0000 UTC m=+0.094771881 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 11:18:32 np0005535469 podman[260264]: 2025-11-25 16:18:32.70853278 +0000 UTC m=+0.107260158 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:18:32 np0005535469 podman[260266]: 2025-11-25 16:18:32.734391978 +0000 UTC m=+0.126005334 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 11:18:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:18:40
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:18:40 np0005535469 nova_compute[254092]: 2025-11-25 16:18:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:40 np0005535469 nova_compute[254092]: 2025-11-25 16:18:40.535 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:18:40 np0005535469 nova_compute[254092]: 2025-11-25 16:18:40.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:18:40 np0005535469 nova_compute[254092]: 2025-11-25 16:18:40.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:18:40 np0005535469 nova_compute[254092]: 2025-11-25 16:18:40.536 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:18:40 np0005535469 nova_compute[254092]: 2025-11-25 16:18:40.537 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:18:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:18:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422752470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.003 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.146 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.148 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.148 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.148 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.419 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.420 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.522 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.614 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.614 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.631 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.658 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 11:18:41 np0005535469 nova_compute[254092]: 2025-11-25 16:18:41.687 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:18:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:18:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192207903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:18:42 np0005535469 nova_compute[254092]: 2025-11-25 16:18:42.094 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:18:42 np0005535469 nova_compute[254092]: 2025-11-25 16:18:42.100 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:18:42 np0005535469 nova_compute[254092]: 2025-11-25 16:18:42.118 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:18:42 np0005535469 nova_compute[254092]: 2025-11-25 16:18:42.120 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:18:42 np0005535469 nova_compute[254092]: 2025-11-25 16:18:42.120 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:18:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.120 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:18:44 np0005535469 nova_compute[254092]: 2025-11-25 16:18:44.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:18:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:45 np0005535469 nova_compute[254092]: 2025-11-25 16:18:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:45 np0005535469 nova_compute[254092]: 2025-11-25 16:18:45.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:45 np0005535469 nova_compute[254092]: 2025-11-25 16:18:45.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:18:45 np0005535469 nova_compute[254092]: 2025-11-25 16:18:45.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:18:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:18:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:18:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.782023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533782069, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 467, "num_deletes": 251, "total_data_size": 404699, "memory_usage": 413096, "flush_reason": "Manual Compaction"}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533787914, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 400889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18298, "largest_seqno": 18764, "table_properties": {"data_size": 398229, "index_size": 696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6270, "raw_average_key_size": 18, "raw_value_size": 393012, "raw_average_value_size": 1169, "num_data_blocks": 33, "num_entries": 336, "num_filter_entries": 336, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087505, "oldest_key_time": 1764087505, "file_creation_time": 1764087533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5941 microseconds, and 2775 cpu microseconds.
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.787965) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 400889 bytes OK
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.787984) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790157) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790179) EVENT_LOG_v1 {"time_micros": 1764087533790172, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790198) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 401923, prev total WAL file size 401923, number of live WAL files 2.
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(391KB)], [41(7492KB)]
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533790836, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8073299, "oldest_snapshot_seqno": -1}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4263 keys, 6317089 bytes, temperature: kUnknown
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533831217, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6317089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6289431, "index_size": 15901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104877, "raw_average_key_size": 24, "raw_value_size": 6213033, "raw_average_value_size": 1457, "num_data_blocks": 669, "num_entries": 4263, "num_filter_entries": 4263, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.831509) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6317089 bytes
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.833130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.5 rd, 156.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 7.3 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(35.9) write-amplify(15.8) OK, records in: 4772, records dropped: 509 output_compression: NoCompression
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.833157) EVENT_LOG_v1 {"time_micros": 1764087533833145, "job": 20, "event": "compaction_finished", "compaction_time_micros": 40469, "compaction_time_cpu_micros": 27856, "output_level": 6, "num_output_files": 1, "total_output_size": 6317089, "num_input_records": 4772, "num_output_records": 4263, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533833420, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533835701, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:53 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:18:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:18:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904060504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:18:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:18:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904060504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:18:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:18:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:18:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:03 np0005535469 podman[260368]: 2025-11-25 16:19:03.635528249 +0000 UTC m=+0.055019110 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:19:03 np0005535469 podman[260367]: 2025-11-25 16:19:03.668979984 +0000 UTC m=+0.089801340 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:19:03 np0005535469 podman[260369]: 2025-11-25 16:19:03.680618028 +0000 UTC m=+0.092795190 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:19:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 366c5ad1-a771-4256-bbbd-410c879c56bd does not exist
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 71a72332-ba7d-4f78-b278-87048da49e50 does not exist
Nov 25 11:19:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 38d26483-42a2-43fa-aeea-f3060b6b91d9 does not exist
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:19:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.51916062 +0000 UTC m=+0.068594126 container create ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:19:11 np0005535469 systemd[1]: Started libpod-conmon-ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55.scope.
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.491954394 +0000 UTC m=+0.041387960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:19:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.622140566 +0000 UTC m=+0.171574162 container init ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.63635736 +0000 UTC m=+0.185790896 container start ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:19:11 np0005535469 frosty_lichterman[260713]: 167 167
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.642255689 +0000 UTC m=+0.191689225 container attach ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:19:11 np0005535469 systemd[1]: libpod-ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55.scope: Deactivated successfully.
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.64743322 +0000 UTC m=+0.196866756 container died ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 11:19:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:19:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:19:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:19:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6c5fc41bb0bd1f7c7ca8b4752f96f2708e488648e4d1bc85f3567c5b5fa863df-merged.mount: Deactivated successfully.
Nov 25 11:19:11 np0005535469 podman[260697]: 2025-11-25 16:19:11.725018848 +0000 UTC m=+0.274452354 container remove ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:19:11 np0005535469 systemd[1]: libpod-conmon-ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55.scope: Deactivated successfully.
Nov 25 11:19:11 np0005535469 podman[260737]: 2025-11-25 16:19:11.951776912 +0000 UTC m=+0.062917724 container create 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:19:11 np0005535469 systemd[1]: Started libpod-conmon-380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638.scope.
Nov 25 11:19:12 np0005535469 podman[260737]: 2025-11-25 16:19:11.922015826 +0000 UTC m=+0.033156648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:19:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:19:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:12 np0005535469 podman[260737]: 2025-11-25 16:19:12.065748174 +0000 UTC m=+0.176888986 container init 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:19:12 np0005535469 podman[260737]: 2025-11-25 16:19:12.074295785 +0000 UTC m=+0.185436597 container start 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:19:12 np0005535469 podman[260737]: 2025-11-25 16:19:12.087826671 +0000 UTC m=+0.198967533 container attach 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:19:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:13 np0005535469 xenodochial_gould[260753]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:19:13 np0005535469 xenodochial_gould[260753]: --> relative data size: 1.0
Nov 25 11:19:13 np0005535469 xenodochial_gould[260753]: --> All data devices are unavailable
Nov 25 11:19:13 np0005535469 systemd[1]: libpod-380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638.scope: Deactivated successfully.
Nov 25 11:19:13 np0005535469 podman[260737]: 2025-11-25 16:19:13.090771758 +0000 UTC m=+1.201912610 container died 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:19:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7-merged.mount: Deactivated successfully.
Nov 25 11:19:13 np0005535469 podman[260737]: 2025-11-25 16:19:13.147328098 +0000 UTC m=+1.258468910 container remove 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:19:13 np0005535469 systemd[1]: libpod-conmon-380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638.scope: Deactivated successfully.
Nov 25 11:19:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:19:13.587 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:19:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:19:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:19:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:19:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:19:13 np0005535469 podman[260936]: 2025-11-25 16:19:13.981064598 +0000 UTC m=+0.063302963 container create 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:19:14 np0005535469 systemd[1]: Started libpod-conmon-83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37.scope.
Nov 25 11:19:14 np0005535469 podman[260936]: 2025-11-25 16:19:13.955286301 +0000 UTC m=+0.037524746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:19:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:19:14 np0005535469 podman[260936]: 2025-11-25 16:19:14.104796885 +0000 UTC m=+0.187035250 container init 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:19:14 np0005535469 podman[260936]: 2025-11-25 16:19:14.111583848 +0000 UTC m=+0.193822213 container start 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 11:19:14 np0005535469 podman[260936]: 2025-11-25 16:19:14.115057743 +0000 UTC m=+0.197296138 container attach 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:19:14 np0005535469 stoic_shannon[260953]: 167 167
Nov 25 11:19:14 np0005535469 systemd[1]: libpod-83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37.scope: Deactivated successfully.
Nov 25 11:19:14 np0005535469 podman[260936]: 2025-11-25 16:19:14.118948198 +0000 UTC m=+0.201186603 container died 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:19:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-06656087c859aa69e6f9bf0d7a06480b1a1e74163fc2ad274eb5536768e6aa9e-merged.mount: Deactivated successfully.
Nov 25 11:19:14 np0005535469 podman[260936]: 2025-11-25 16:19:14.16155309 +0000 UTC m=+0.243791455 container remove 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:19:14 np0005535469 systemd[1]: libpod-conmon-83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37.scope: Deactivated successfully.
Nov 25 11:19:14 np0005535469 podman[260978]: 2025-11-25 16:19:14.397879172 +0000 UTC m=+0.073538531 container create 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:19:14 np0005535469 systemd[1]: Started libpod-conmon-351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53.scope.
Nov 25 11:19:14 np0005535469 podman[260978]: 2025-11-25 16:19:14.368253011 +0000 UTC m=+0.043912440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:19:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:19:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:14 np0005535469 podman[260978]: 2025-11-25 16:19:14.49727362 +0000 UTC m=+0.172933029 container init 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:19:14 np0005535469 podman[260978]: 2025-11-25 16:19:14.509147641 +0000 UTC m=+0.184806990 container start 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:19:14 np0005535469 podman[260978]: 2025-11-25 16:19:14.513300804 +0000 UTC m=+0.188960153 container attach 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:19:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]: {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:    "0": [
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:        {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "devices": [
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "/dev/loop3"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            ],
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_name": "ceph_lv0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_size": "21470642176",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "name": "ceph_lv0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "tags": {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cluster_name": "ceph",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.crush_device_class": "",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.encrypted": "0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osd_id": "0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.type": "block",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.vdo": "0"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            },
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "type": "block",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "vg_name": "ceph_vg0"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:        }
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:    ],
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:    "1": [
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:        {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "devices": [
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "/dev/loop4"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            ],
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_name": "ceph_lv1",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_size": "21470642176",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "name": "ceph_lv1",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "tags": {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cluster_name": "ceph",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.crush_device_class": "",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.encrypted": "0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osd_id": "1",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.type": "block",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.vdo": "0"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            },
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "type": "block",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "vg_name": "ceph_vg1"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:        }
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:    ],
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:    "2": [
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:        {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "devices": [
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "/dev/loop5"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            ],
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_name": "ceph_lv2",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_size": "21470642176",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "name": "ceph_lv2",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "tags": {
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.cluster_name": "ceph",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.crush_device_class": "",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.encrypted": "0",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osd_id": "2",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.type": "block",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:                "ceph.vdo": "0"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            },
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "type": "block",
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:            "vg_name": "ceph_vg2"
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:        }
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]:    ]
Nov 25 11:19:15 np0005535469 quirky_neumann[260994]: }
Nov 25 11:19:15 np0005535469 systemd[1]: libpod-351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53.scope: Deactivated successfully.
Nov 25 11:19:15 np0005535469 podman[260978]: 2025-11-25 16:19:15.357145927 +0000 UTC m=+1.032805256 container died 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:19:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478-merged.mount: Deactivated successfully.
Nov 25 11:19:15 np0005535469 podman[260978]: 2025-11-25 16:19:15.442007852 +0000 UTC m=+1.117667201 container remove 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:19:15 np0005535469 systemd[1]: libpod-conmon-351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53.scope: Deactivated successfully.
Nov 25 11:19:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.202601445 +0000 UTC m=+0.062825181 container create c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:19:16 np0005535469 systemd[1]: Started libpod-conmon-c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207.scope.
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.177369612 +0000 UTC m=+0.037593398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:19:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.293591196 +0000 UTC m=+0.153815002 container init c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.301420997 +0000 UTC m=+0.161644743 container start c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.30517905 +0000 UTC m=+0.165402796 container attach c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:19:16 np0005535469 distracted_williamson[261176]: 167 167
Nov 25 11:19:16 np0005535469 systemd[1]: libpod-c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207.scope: Deactivated successfully.
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.308446218 +0000 UTC m=+0.168669954 container died c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:19:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-55405a4156b501754d0afaf0258ef8c50723a73eaa74bf90a872ae92d8e4ad22-merged.mount: Deactivated successfully.
Nov 25 11:19:16 np0005535469 podman[261159]: 2025-11-25 16:19:16.360082454 +0000 UTC m=+0.220306200 container remove c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:19:16 np0005535469 systemd[1]: libpod-conmon-c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207.scope: Deactivated successfully.
Nov 25 11:19:16 np0005535469 podman[261202]: 2025-11-25 16:19:16.537790511 +0000 UTC m=+0.061829934 container create af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:19:16 np0005535469 podman[261202]: 2025-11-25 16:19:16.515466897 +0000 UTC m=+0.039506330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:19:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:16 np0005535469 systemd[1]: Started libpod-conmon-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope.
Nov 25 11:19:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:19:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:19:16 np0005535469 podman[261202]: 2025-11-25 16:19:16.680662566 +0000 UTC m=+0.204701969 container init af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:19:16 np0005535469 podman[261202]: 2025-11-25 16:19:16.688131067 +0000 UTC m=+0.212170450 container start af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:19:16 np0005535469 podman[261202]: 2025-11-25 16:19:16.691217961 +0000 UTC m=+0.215257334 container attach af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:19:17 np0005535469 loving_gates[261218]: {
Nov 25 11:19:17 np0005535469 loving_gates[261218]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "osd_id": 1,
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "type": "bluestore"
Nov 25 11:19:17 np0005535469 loving_gates[261218]:    },
Nov 25 11:19:17 np0005535469 loving_gates[261218]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "osd_id": 2,
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "type": "bluestore"
Nov 25 11:19:17 np0005535469 loving_gates[261218]:    },
Nov 25 11:19:17 np0005535469 loving_gates[261218]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "osd_id": 0,
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:19:17 np0005535469 loving_gates[261218]:        "type": "bluestore"
Nov 25 11:19:17 np0005535469 loving_gates[261218]:    }
Nov 25 11:19:17 np0005535469 loving_gates[261218]: }
Nov 25 11:19:17 np0005535469 systemd[1]: libpod-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope: Deactivated successfully.
Nov 25 11:19:17 np0005535469 systemd[1]: libpod-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope: Consumed 1.026s CPU time.
Nov 25 11:19:17 np0005535469 podman[261202]: 2025-11-25 16:19:17.70604526 +0000 UTC m=+1.230084673 container died af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:19:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca-merged.mount: Deactivated successfully.
Nov 25 11:19:17 np0005535469 podman[261202]: 2025-11-25 16:19:17.89239888 +0000 UTC m=+1.416438303 container remove af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:19:17 np0005535469 systemd[1]: libpod-conmon-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope: Deactivated successfully.
Nov 25 11:19:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:19:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:19:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:19:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:19:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b8ba95c7-4f0d-463c-a6eb-f83c1aa3d63a does not exist
Nov 25 11:19:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev de1d9ecc-0d5f-42ea-b0e3-44d75af26b3a does not exist
Nov 25 11:19:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:19:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:19:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:34 np0005535469 podman[261316]: 2025-11-25 16:19:34.682609653 +0000 UTC m=+0.084705843 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 11:19:34 np0005535469 podman[261315]: 2025-11-25 16:19:34.689715614 +0000 UTC m=+0.094677412 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:19:34 np0005535469 podman[261317]: 2025-11-25 16:19:34.745816712 +0000 UTC m=+0.148613831 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:19:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:19:40
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:19:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.1459 seconds
Nov 25 11:19:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:19:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:19:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806324696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:19:42 np0005535469 nova_compute[254092]: 2025-11-25 16:19:42.940 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.098 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.099 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.099 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.100 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.191 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.191 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.214 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:19:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:19:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179185872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.604 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.609 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.628 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.629 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:19:43 np0005535469 nova_compute[254092]: 2025-11-25 16:19:43.629 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:19:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:44 np0005535469 nova_compute[254092]: 2025-11-25 16:19:44.630 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:44 np0005535469 nova_compute[254092]: 2025-11-25 16:19:44.630 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:45 np0005535469 nova_compute[254092]: 2025-11-25 16:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:45 np0005535469 nova_compute[254092]: 2025-11-25 16:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:45 np0005535469 nova_compute[254092]: 2025-11-25 16:19:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:19:46 np0005535469 nova_compute[254092]: 2025-11-25 16:19:46.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:46 np0005535469 nova_compute[254092]: 2025-11-25 16:19:46.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:46 np0005535469 nova_compute[254092]: 2025-11-25 16:19:46.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:19:46 np0005535469 nova_compute[254092]: 2025-11-25 16:19:46.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:19:46 np0005535469 nova_compute[254092]: 2025-11-25 16:19:46.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:19:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:47 np0005535469 nova_compute[254092]: 2025-11-25 16:19:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:19:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:19:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:19:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:19:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2254086920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:19:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:19:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2254086920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:19:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:19:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:19:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:05 np0005535469 podman[261426]: 2025-11-25 16:20:05.626609797 +0000 UTC m=+0.049576522 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:20:05 np0005535469 podman[261425]: 2025-11-25 16:20:05.629821964 +0000 UTC m=+0.054384562 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:20:05 np0005535469 podman[261427]: 2025-11-25 16:20:05.722415989 +0000 UTC m=+0.132425483 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 11:20:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:20:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:20:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:20:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:20:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:20:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:20:13.589 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:20:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:20:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:20:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:20:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:20:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:20:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b54bf14e-b5ce-4991-b67f-99c3dbd8d493 does not exist
Nov 25 11:20:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d673727d-e49d-4b9e-aa56-1f1256f62d03 does not exist
Nov 25 11:20:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8fe345b5-9bca-479a-86f5-e861b1629950 does not exist
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:20:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:20:19 np0005535469 podman[261755]: 2025-11-25 16:20:19.669263153 +0000 UTC m=+0.079743167 container create 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 11:20:19 np0005535469 podman[261755]: 2025-11-25 16:20:19.615438998 +0000 UTC m=+0.025919062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:20:19 np0005535469 systemd[1]: Started libpod-conmon-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope.
Nov 25 11:20:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:20:19 np0005535469 podman[261755]: 2025-11-25 16:20:19.797108962 +0000 UTC m=+0.207589036 container init 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:20:19 np0005535469 podman[261755]: 2025-11-25 16:20:19.808256623 +0000 UTC m=+0.218736657 container start 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:20:19 np0005535469 blissful_jones[261771]: 167 167
Nov 25 11:20:19 np0005535469 systemd[1]: libpod-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope: Deactivated successfully.
Nov 25 11:20:19 np0005535469 conmon[261771]: conmon 76179ae8392914f09777 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope/container/memory.events
Nov 25 11:20:19 np0005535469 podman[261755]: 2025-11-25 16:20:19.821872012 +0000 UTC m=+0.232352046 container attach 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 11:20:19 np0005535469 podman[261755]: 2025-11-25 16:20:19.822258272 +0000 UTC m=+0.232738296 container died 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:20:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f7b87003fdf557c52f5c07cdf42576c3deab79caa833fb99a9474f5df05377a3-merged.mount: Deactivated successfully.
Nov 25 11:20:20 np0005535469 podman[261755]: 2025-11-25 16:20:20.052812377 +0000 UTC m=+0.463292391 container remove 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 11:20:20 np0005535469 systemd[1]: libpod-conmon-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope: Deactivated successfully.
Nov 25 11:20:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:20:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:20:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:20:20 np0005535469 podman[261797]: 2025-11-25 16:20:20.227631186 +0000 UTC m=+0.051045602 container create 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:20:20 np0005535469 systemd[1]: Started libpod-conmon-10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4.scope.
Nov 25 11:20:20 np0005535469 podman[261797]: 2025-11-25 16:20:20.199245659 +0000 UTC m=+0.022660095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:20:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:20:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:20 np0005535469 podman[261797]: 2025-11-25 16:20:20.326978823 +0000 UTC m=+0.150393239 container init 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:20:20 np0005535469 podman[261797]: 2025-11-25 16:20:20.335800051 +0000 UTC m=+0.159214467 container start 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 11:20:20 np0005535469 podman[261797]: 2025-11-25 16:20:20.338808203 +0000 UTC m=+0.162222639 container attach 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:20:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:21 np0005535469 sleepy_dewdney[261813]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:20:21 np0005535469 sleepy_dewdney[261813]: --> relative data size: 1.0
Nov 25 11:20:21 np0005535469 sleepy_dewdney[261813]: --> All data devices are unavailable
Nov 25 11:20:21 np0005535469 systemd[1]: libpod-10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4.scope: Deactivated successfully.
Nov 25 11:20:21 np0005535469 podman[261797]: 2025-11-25 16:20:21.309720864 +0000 UTC m=+1.133135270 container died 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:20:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec-merged.mount: Deactivated successfully.
Nov 25 11:20:21 np0005535469 podman[261797]: 2025-11-25 16:20:21.357218779 +0000 UTC m=+1.180633195 container remove 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:20:21 np0005535469 systemd[1]: libpod-conmon-10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4.scope: Deactivated successfully.
Nov 25 11:20:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.067908141 +0000 UTC m=+0.066322085 container create 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:20:22 np0005535469 systemd[1]: Started libpod-conmon-20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a.scope.
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.042596037 +0000 UTC m=+0.041010071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:20:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.161453601 +0000 UTC m=+0.159867585 container init 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.170169807 +0000 UTC m=+0.168583751 container start 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.172996753 +0000 UTC m=+0.171410737 container attach 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:20:22 np0005535469 hardcore_kowalevski[262011]: 167 167
Nov 25 11:20:22 np0005535469 systemd[1]: libpod-20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a.scope: Deactivated successfully.
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.178865162 +0000 UTC m=+0.177279116 container died 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:20:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-30ed20ba18a1959df37f034d121dd6bb39748403a6930d3c69bc8c44a004179c-merged.mount: Deactivated successfully.
Nov 25 11:20:22 np0005535469 podman[261994]: 2025-11-25 16:20:22.214034753 +0000 UTC m=+0.212448697 container remove 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:20:22 np0005535469 systemd[1]: libpod-conmon-20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a.scope: Deactivated successfully.
Nov 25 11:20:22 np0005535469 podman[262033]: 2025-11-25 16:20:22.394316059 +0000 UTC m=+0.050426055 container create b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:20:22 np0005535469 systemd[1]: Started libpod-conmon-b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65.scope.
Nov 25 11:20:22 np0005535469 podman[262033]: 2025-11-25 16:20:22.373392344 +0000 UTC m=+0.029502390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:20:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:20:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:22 np0005535469 podman[262033]: 2025-11-25 16:20:22.493281106 +0000 UTC m=+0.149391132 container init b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:20:22 np0005535469 podman[262033]: 2025-11-25 16:20:22.501804976 +0000 UTC m=+0.157914972 container start b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 11:20:22 np0005535469 podman[262033]: 2025-11-25 16:20:22.505032054 +0000 UTC m=+0.161142060 container attach b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:20:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]: {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:    "0": [
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:        {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "devices": [
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "/dev/loop3"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            ],
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_name": "ceph_lv0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_size": "21470642176",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "name": "ceph_lv0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "tags": {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cluster_name": "ceph",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.crush_device_class": "",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.encrypted": "0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osd_id": "0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.type": "block",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.vdo": "0"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            },
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "type": "block",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "vg_name": "ceph_vg0"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:        }
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:    ],
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:    "1": [
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:        {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "devices": [
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "/dev/loop4"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            ],
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_name": "ceph_lv1",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_size": "21470642176",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "name": "ceph_lv1",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "tags": {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cluster_name": "ceph",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.crush_device_class": "",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.encrypted": "0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osd_id": "1",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.type": "block",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.vdo": "0"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            },
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "type": "block",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "vg_name": "ceph_vg1"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:        }
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:    ],
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:    "2": [
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:        {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "devices": [
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "/dev/loop5"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            ],
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_name": "ceph_lv2",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_size": "21470642176",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "name": "ceph_lv2",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "tags": {
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.cluster_name": "ceph",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.crush_device_class": "",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.encrypted": "0",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osd_id": "2",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.type": "block",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:                "ceph.vdo": "0"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            },
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "type": "block",
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:            "vg_name": "ceph_vg2"
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:        }
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]:    ]
Nov 25 11:20:23 np0005535469 recursing_haibt[262050]: }
Nov 25 11:20:23 np0005535469 systemd[1]: libpod-b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65.scope: Deactivated successfully.
Nov 25 11:20:23 np0005535469 podman[262033]: 2025-11-25 16:20:23.216999931 +0000 UTC m=+0.873109967 container died b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:20:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033-merged.mount: Deactivated successfully.
Nov 25 11:20:23 np0005535469 podman[262033]: 2025-11-25 16:20:23.277218039 +0000 UTC m=+0.933328045 container remove b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:20:23 np0005535469 systemd[1]: libpod-conmon-b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65.scope: Deactivated successfully.
Nov 25 11:20:23 np0005535469 podman[262216]: 2025-11-25 16:20:23.913113349 +0000 UTC m=+0.047976039 container create a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:20:23 np0005535469 systemd[1]: Started libpod-conmon-a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a.scope.
Nov 25 11:20:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:20:23 np0005535469 podman[262216]: 2025-11-25 16:20:23.898784881 +0000 UTC m=+0.033647581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:20:24 np0005535469 podman[262216]: 2025-11-25 16:20:24.004589113 +0000 UTC m=+0.139451833 container init a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:20:24 np0005535469 podman[262216]: 2025-11-25 16:20:24.01555595 +0000 UTC m=+0.150418640 container start a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:20:24 np0005535469 podman[262216]: 2025-11-25 16:20:24.018658874 +0000 UTC m=+0.153521604 container attach a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:20:24 np0005535469 gallant_gates[262232]: 167 167
Nov 25 11:20:24 np0005535469 systemd[1]: libpod-a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a.scope: Deactivated successfully.
Nov 25 11:20:24 np0005535469 podman[262216]: 2025-11-25 16:20:24.020465203 +0000 UTC m=+0.155327893 container died a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 11:20:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1638d8160de29c4e82e18a24cbc52faf2c00f04239feb8a30ec749e6b6872530-merged.mount: Deactivated successfully.
Nov 25 11:20:24 np0005535469 podman[262216]: 2025-11-25 16:20:24.061090871 +0000 UTC m=+0.195953561 container remove a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:20:24 np0005535469 systemd[1]: libpod-conmon-a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a.scope: Deactivated successfully.
Nov 25 11:20:24 np0005535469 podman[262257]: 2025-11-25 16:20:24.2447736 +0000 UTC m=+0.044869445 container create f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:20:24 np0005535469 systemd[1]: Started libpod-conmon-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope.
Nov 25 11:20:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:20:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:20:24 np0005535469 podman[262257]: 2025-11-25 16:20:24.227495262 +0000 UTC m=+0.027591117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:20:24 np0005535469 podman[262257]: 2025-11-25 16:20:24.324431484 +0000 UTC m=+0.124527409 container init f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:20:24 np0005535469 podman[262257]: 2025-11-25 16:20:24.340324384 +0000 UTC m=+0.140420219 container start f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:20:24 np0005535469 podman[262257]: 2025-11-25 16:20:24.344402474 +0000 UTC m=+0.144498309 container attach f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:20:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:25 np0005535469 cool_moore[262274]: {
Nov 25 11:20:25 np0005535469 cool_moore[262274]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "osd_id": 1,
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "type": "bluestore"
Nov 25 11:20:25 np0005535469 cool_moore[262274]:    },
Nov 25 11:20:25 np0005535469 cool_moore[262274]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "osd_id": 2,
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "type": "bluestore"
Nov 25 11:20:25 np0005535469 cool_moore[262274]:    },
Nov 25 11:20:25 np0005535469 cool_moore[262274]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "osd_id": 0,
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:20:25 np0005535469 cool_moore[262274]:        "type": "bluestore"
Nov 25 11:20:25 np0005535469 cool_moore[262274]:    }
Nov 25 11:20:25 np0005535469 cool_moore[262274]: }
Nov 25 11:20:25 np0005535469 systemd[1]: libpod-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope: Deactivated successfully.
Nov 25 11:20:25 np0005535469 podman[262257]: 2025-11-25 16:20:25.458029545 +0000 UTC m=+1.258125390 container died f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:20:25 np0005535469 systemd[1]: libpod-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope: Consumed 1.119s CPU time.
Nov 25 11:20:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de-merged.mount: Deactivated successfully.
Nov 25 11:20:25 np0005535469 podman[262257]: 2025-11-25 16:20:25.524619526 +0000 UTC m=+1.324715371 container remove f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:20:25 np0005535469 systemd[1]: libpod-conmon-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope: Deactivated successfully.
Nov 25 11:20:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:20:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:20:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:20:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:20:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3cd9a751-4fe2-4bdc-b888-9f00d40ca504 does not exist
Nov 25 11:20:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d1d7a164-b58b-4f85-94b7-60cdf6f28c4f does not exist
Nov 25 11:20:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:20:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:20:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:36 np0005535469 podman[262371]: 2025-11-25 16:20:36.665687351 +0000 UTC m=+0.072499472 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd)
Nov 25 11:20:36 np0005535469 podman[262372]: 2025-11-25 16:20:36.698088927 +0000 UTC m=+0.103837269 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 25 11:20:36 np0005535469 podman[262373]: 2025-11-25 16:20:36.711669855 +0000 UTC m=+0.116485112 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:20:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:20:40
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:20:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:20:44 np0005535469 nova_compute[254092]: 2025-11-25 16:20:44.532 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:20:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:20:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238979290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.013 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.187 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.188 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.188 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.188 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.283 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.283 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.305 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:20:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:20:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703015220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.741 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.747 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.763 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.765 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:20:45 np0005535469 nova_compute[254092]: 2025-11-25 16:20:45.765 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:20:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:46 np0005535469 nova_compute[254092]: 2025-11-25 16:20:46.764 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:46 np0005535469 nova_compute[254092]: 2025-11-25 16:20:46.764 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:46 np0005535469 nova_compute[254092]: 2025-11-25 16:20:46.765 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:46 np0005535469 nova_compute[254092]: 2025-11-25 16:20:46.765 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.889451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087646889519, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1104, "num_deletes": 256, "total_data_size": 1601280, "memory_usage": 1626496, "flush_reason": "Manual Compaction"}
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087646959037, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1575714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18765, "largest_seqno": 19868, "table_properties": {"data_size": 1570421, "index_size": 2753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10952, "raw_average_key_size": 18, "raw_value_size": 1559700, "raw_average_value_size": 2670, "num_data_blocks": 126, "num_entries": 584, "num_filter_entries": 584, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087534, "oldest_key_time": 1764087534, "file_creation_time": 1764087646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 69670 microseconds, and 8093 cpu microseconds.
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.959117) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1575714 bytes OK
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.959147) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.981177) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.981202) EVENT_LOG_v1 {"time_micros": 1764087646981194, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.981225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1596128, prev total WAL file size 1596128, number of live WAL files 2.
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.982352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1538KB)], [44(6169KB)]
Nov 25 11:20:46 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087646982388, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7892803, "oldest_snapshot_seqno": -1}
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4323 keys, 7751701 bytes, temperature: kUnknown
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087647251028, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7751701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7721711, "index_size": 18104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 107197, "raw_average_key_size": 24, "raw_value_size": 7642311, "raw_average_value_size": 1767, "num_data_blocks": 762, "num_entries": 4323, "num_filter_entries": 4323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.251262) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7751701 bytes
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.282979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.4 rd, 28.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 6.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 4847, records dropped: 524 output_compression: NoCompression
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.283017) EVENT_LOG_v1 {"time_micros": 1764087647282990, "job": 22, "event": "compaction_finished", "compaction_time_micros": 268731, "compaction_time_cpu_micros": 26522, "output_level": 6, "num_output_files": 1, "total_output_size": 7751701, "num_input_records": 4847, "num_output_records": 4323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087647283363, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087647285015, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.982250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:20:47 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:20:47 np0005535469 nova_compute[254092]: 2025-11-25 16:20:47.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:48 np0005535469 nova_compute[254092]: 2025-11-25 16:20:48.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:48 np0005535469 nova_compute[254092]: 2025-11-25 16:20:48.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:48 np0005535469 nova_compute[254092]: 2025-11-25 16:20:48.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:20:48 np0005535469 nova_compute[254092]: 2025-11-25 16:20:48.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:20:48 np0005535469 nova_compute[254092]: 2025-11-25 16:20:48.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:20:48 np0005535469 nova_compute[254092]: 2025-11-25 16:20:48.525 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:20:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:20:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 25 11:20:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 11:20:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:20:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:20:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 25 11:20:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 11:20:51 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 25 11:20:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 25 11:20:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 11:20:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 25 11:20:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Nov 25 11:20:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 25 11:20:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 25 11:20:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 25 11:20:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Nov 25 11:20:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:20:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800492187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:20:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:20:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800492187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:20:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 13 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Nov 25 11:20:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:20:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 25 11:20:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 11:20:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 25 11:20:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.2 MiB/s wr, 47 op/s
Nov 25 11:21:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Nov 25 11:21:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.9 MiB/s wr, 35 op/s
Nov 25 11:21:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.3 MiB/s wr, 29 op/s
Nov 25 11:21:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.8 MiB/s wr, 15 op/s
Nov 25 11:21:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:07 np0005535469 podman[262480]: 2025-11-25 16:21:07.71118104 +0000 UTC m=+0.103214823 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:21:07 np0005535469 podman[262481]: 2025-11-25 16:21:07.711311464 +0000 UTC m=+0.102679349 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 11:21:07 np0005535469 podman[262482]: 2025-11-25 16:21:07.773866565 +0000 UTC m=+0.158666193 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:21:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:21:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:21:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:21:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:21:13.589 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:21:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:21:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:21:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:21:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:21:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:21:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 25 11:21:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 11:21:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 25 11:21:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Nov 25 11:21:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 715 B/s wr, 17 op/s
Nov 25 11:21:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1021 B/s wr, 18 op/s
Nov 25 11:21:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 25 11:21:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 11:21:22 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 25 11:21:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 25 11:21:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 11:21:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 31 op/s
Nov 25 11:21:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 25 11:21:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:21:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2f4d3432-2689-4749-a06c-7940df3d1ce0 does not exist
Nov 25 11:21:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 551d0327-0173-49cb-a2d8-a9093e29a2f5 does not exist
Nov 25 11:21:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 87856292-307e-4334-8dd0-16fb9e0f3968 does not exist
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:21:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:21:28 np0005535469 podman[262816]: 2025-11-25 16:21:28.010979037 +0000 UTC m=+0.025773568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:21:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:21:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:21:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:21:28 np0005535469 podman[262816]: 2025-11-25 16:21:28.31981197 +0000 UTC m=+0.334606481 container create ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:21:28 np0005535469 systemd[1]: Started libpod-conmon-ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08.scope.
Nov 25 11:21:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:21:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 895 B/s wr, 17 op/s
Nov 25 11:21:28 np0005535469 podman[262816]: 2025-11-25 16:21:28.743454149 +0000 UTC m=+0.758248680 container init ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:21:28 np0005535469 podman[262816]: 2025-11-25 16:21:28.753004357 +0000 UTC m=+0.767798868 container start ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:21:28 np0005535469 clever_hermann[262832]: 167 167
Nov 25 11:21:28 np0005535469 systemd[1]: libpod-ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08.scope: Deactivated successfully.
Nov 25 11:21:28 np0005535469 podman[262816]: 2025-11-25 16:21:28.811824888 +0000 UTC m=+0.826619419 container attach ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:21:28 np0005535469 podman[262816]: 2025-11-25 16:21:28.812832855 +0000 UTC m=+0.827627366 container died ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:21:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e7cd781ac9ba902579107db221c77b9871038cb5e6a309ecb4b2d33a6202890e-merged.mount: Deactivated successfully.
Nov 25 11:21:29 np0005535469 podman[262816]: 2025-11-25 16:21:29.671002726 +0000 UTC m=+1.685797237 container remove ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:21:29 np0005535469 systemd[1]: libpod-conmon-ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08.scope: Deactivated successfully.
Nov 25 11:21:29 np0005535469 podman[262857]: 2025-11-25 16:21:29.890130073 +0000 UTC m=+0.025674655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:21:30 np0005535469 podman[262857]: 2025-11-25 16:21:30.077985424 +0000 UTC m=+0.213529956 container create c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:21:30 np0005535469 systemd[1]: Started libpod-conmon-c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b.scope.
Nov 25 11:21:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:21:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:30 np0005535469 podman[262857]: 2025-11-25 16:21:30.348060178 +0000 UTC m=+0.483604820 container init c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:21:30 np0005535469 podman[262857]: 2025-11-25 16:21:30.355069548 +0000 UTC m=+0.490614140 container start c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:21:30 np0005535469 podman[262857]: 2025-11-25 16:21:30.496919514 +0000 UTC m=+0.632464076 container attach c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:21:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Nov 25 11:21:31 np0005535469 thirsty_cerf[262873]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:21:31 np0005535469 thirsty_cerf[262873]: --> relative data size: 1.0
Nov 25 11:21:31 np0005535469 thirsty_cerf[262873]: --> All data devices are unavailable
Nov 25 11:21:31 np0005535469 systemd[1]: libpod-c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b.scope: Deactivated successfully.
Nov 25 11:21:31 np0005535469 podman[262903]: 2025-11-25 16:21:31.407883824 +0000 UTC m=+0.023555398 container died c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:21:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f-merged.mount: Deactivated successfully.
Nov 25 11:21:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 4.9 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Nov 25 11:21:33 np0005535469 podman[262903]: 2025-11-25 16:21:33.304005429 +0000 UTC m=+1.919677013 container remove c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:21:33 np0005535469 systemd[1]: libpod-conmon-c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b.scope: Deactivated successfully.
Nov 25 11:21:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 4.9 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Nov 25 11:21:34 np0005535469 podman[263058]: 2025-11-25 16:21:34.864770415 +0000 UTC m=+0.023222670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:21:35 np0005535469 podman[263058]: 2025-11-25 16:21:35.176931088 +0000 UTC m=+0.335383273 container create 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:21:35 np0005535469 systemd[1]: Started libpod-conmon-3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3.scope.
Nov 25 11:21:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:21:35 np0005535469 podman[263058]: 2025-11-25 16:21:35.679452069 +0000 UTC m=+0.837904264 container init 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:21:35 np0005535469 podman[263058]: 2025-11-25 16:21:35.685994847 +0000 UTC m=+0.844447022 container start 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:21:35 np0005535469 practical_hamilton[263074]: 167 167
Nov 25 11:21:35 np0005535469 systemd[1]: libpod-3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3.scope: Deactivated successfully.
Nov 25 11:21:35 np0005535469 podman[263058]: 2025-11-25 16:21:35.794574974 +0000 UTC m=+0.953027149 container attach 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:21:35 np0005535469 podman[263058]: 2025-11-25 16:21:35.794901113 +0000 UTC m=+0.953353288 container died 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:21:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2c789636d04f310e5c001129b47925e28adbd5cdbdadd7b8b1fd5d680eebc906-merged.mount: Deactivated successfully.
Nov 25 11:21:36 np0005535469 podman[263058]: 2025-11-25 16:21:36.476326013 +0000 UTC m=+1.634778188 container remove 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:21:36 np0005535469 systemd[1]: libpod-conmon-3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3.scope: Deactivated successfully.
Nov 25 11:21:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 11:21:36 np0005535469 podman[263099]: 2025-11-25 16:21:36.728155684 +0000 UTC m=+0.120874950 container create 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:21:36 np0005535469 podman[263099]: 2025-11-25 16:21:36.634270554 +0000 UTC m=+0.026989830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:21:36 np0005535469 systemd[1]: Started libpod-conmon-99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af.scope.
Nov 25 11:21:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:21:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:36 np0005535469 podman[263099]: 2025-11-25 16:21:36.965461612 +0000 UTC m=+0.358180928 container init 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:21:36 np0005535469 podman[263099]: 2025-11-25 16:21:36.974744744 +0000 UTC m=+0.367464010 container start 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:21:36 np0005535469 podman[263099]: 2025-11-25 16:21:36.992916135 +0000 UTC m=+0.385635461 container attach 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:21:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 25 11:21:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 11:21:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 25 11:21:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:21:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 4492 writes, 20K keys, 4492 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4492 writes, 4492 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1319 writes, 6006 keys, 1319 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s
Interval WAL: 1319 writes, 1319 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.1      1.16              0.08        11    0.105       0      0       0.0       0.0
  L6      1/0    7.39 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     90.3     74.8      0.94              0.20        10    0.094     44K   5209       0.0       0.0
 Sum      1/0    7.39 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     40.6     44.1      2.10              0.28        21    0.100     44K   5209       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6     62.8     63.4      0.69              0.14        10    0.069     24K   2995       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     90.3     74.8      0.94              0.20        10    0.094     44K   5209       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.8      1.11              0.08        10    0.111       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.1 total, 600.0 interval
Flush(GB): cumulative 0.022, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 2.1 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 308.00 MB usage: 6.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 6.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(421,6.18 MB,2.00493%) FilterBlock(22,130.30 KB,0.0413127%) IndexBlock(22,237.86 KB,0.0754171%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]: {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:    "0": [
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:        {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "devices": [
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "/dev/loop3"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            ],
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_name": "ceph_lv0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_size": "21470642176",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "name": "ceph_lv0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "tags": {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cluster_name": "ceph",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.crush_device_class": "",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.encrypted": "0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osd_id": "0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.type": "block",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.vdo": "0"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            },
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "type": "block",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "vg_name": "ceph_vg0"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:        }
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:    ],
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:    "1": [
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:        {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "devices": [
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "/dev/loop4"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            ],
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_name": "ceph_lv1",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_size": "21470642176",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "name": "ceph_lv1",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "tags": {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cluster_name": "ceph",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.crush_device_class": "",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.encrypted": "0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osd_id": "1",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.type": "block",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.vdo": "0"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            },
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "type": "block",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "vg_name": "ceph_vg1"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:        }
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:    ],
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:    "2": [
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:        {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "devices": [
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "/dev/loop5"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            ],
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_name": "ceph_lv2",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_size": "21470642176",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "name": "ceph_lv2",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "tags": {
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.cluster_name": "ceph",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.crush_device_class": "",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.encrypted": "0",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osd_id": "2",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.type": "block",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:                "ceph.vdo": "0"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            },
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "type": "block",
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:            "vg_name": "ceph_vg2"
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:        }
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]:    ]
Nov 25 11:21:37 np0005535469 nervous_chebyshev[263116]: }
Nov 25 11:21:37 np0005535469 systemd[1]: libpod-99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af.scope: Deactivated successfully.
Nov 25 11:21:37 np0005535469 podman[263125]: 2025-11-25 16:21:37.767369982 +0000 UTC m=+0.030274510 container died 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:21:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446-merged.mount: Deactivated successfully.
Nov 25 11:21:38 np0005535469 podman[263125]: 2025-11-25 16:21:38.677665393 +0000 UTC m=+0.940569911 container remove 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:21:38 np0005535469 systemd[1]: libpod-conmon-99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af.scope: Deactivated successfully.
Nov 25 11:21:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Nov 25 11:21:38 np0005535469 podman[263135]: 2025-11-25 16:21:38.771222533 +0000 UTC m=+1.009298969 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 11:21:38 np0005535469 podman[263155]: 2025-11-25 16:21:38.773628559 +0000 UTC m=+0.470678232 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:21:38 np0005535469 podman[263126]: 2025-11-25 16:21:38.787321529 +0000 UTC m=+1.022899068 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.446884708 +0000 UTC m=+0.096800139 container create 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.37301131 +0000 UTC m=+0.022926761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:21:39 np0005535469 systemd[1]: Started libpod-conmon-810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f.scope.
Nov 25 11:21:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.602077876 +0000 UTC m=+0.251993317 container init 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.610439653 +0000 UTC m=+0.260355084 container start 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:21:39 np0005535469 heuristic_herschel[263360]: 167 167
Nov 25 11:21:39 np0005535469 systemd[1]: libpod-810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f.scope: Deactivated successfully.
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.652232173 +0000 UTC m=+0.302147634 container attach 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.653180568 +0000 UTC m=+0.303095999 container died 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:21:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2c24668aed7df2498639b47e73faf931df573333db12691c08e0d016541d3c8f-merged.mount: Deactivated successfully.
Nov 25 11:21:39 np0005535469 podman[263344]: 2025-11-25 16:21:39.900425675 +0000 UTC m=+0.550341096 container remove 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:21:39 np0005535469 systemd[1]: libpod-conmon-810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f.scope: Deactivated successfully.
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:21:40
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', '.rgw.root']
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:21:40 np0005535469 podman[263386]: 2025-11-25 16:21:40.097226708 +0000 UTC m=+0.083125839 container create ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:21:40 np0005535469 podman[263386]: 2025-11-25 16:21:40.037594586 +0000 UTC m=+0.023493737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:21:40 np0005535469 systemd[1]: Started libpod-conmon-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope.
Nov 25 11:21:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:21:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:21:40 np0005535469 podman[263386]: 2025-11-25 16:21:40.302800118 +0000 UTC m=+0.288699249 container init ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 11:21:40 np0005535469 podman[263386]: 2025-11-25 16:21:40.309127019 +0000 UTC m=+0.295026150 container start ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:21:40 np0005535469 podman[263386]: 2025-11-25 16:21:40.397748937 +0000 UTC m=+0.383648088 container attach ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:21:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 409 B/s wr, 8 op/s
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]: {
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "osd_id": 1,
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "type": "bluestore"
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:    },
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "osd_id": 2,
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "type": "bluestore"
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:    },
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "osd_id": 0,
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:        "type": "bluestore"
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]:    }
Nov 25 11:21:41 np0005535469 ecstatic_aryabhata[263403]: }
Nov 25 11:21:41 np0005535469 systemd[1]: libpod-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope: Deactivated successfully.
Nov 25 11:21:41 np0005535469 systemd[1]: libpod-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope: Consumed 1.060s CPU time.
Nov 25 11:21:41 np0005535469 podman[263436]: 2025-11-25 16:21:41.412543754 +0000 UTC m=+0.029916860 container died ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:21:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f-merged.mount: Deactivated successfully.
Nov 25 11:21:41 np0005535469 podman[263436]: 2025-11-25 16:21:41.827178179 +0000 UTC m=+0.444551305 container remove ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:21:41 np0005535469 systemd[1]: libpod-conmon-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope: Deactivated successfully.
Nov 25 11:21:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:21:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:21:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:21:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:21:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 62b6d10b-b5b3-47cc-be66-235c1611caba does not exist
Nov 25 11:21:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ca0f7dc1-3370-48c4-af13-4647302b9017 does not exist
Nov 25 11:21:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 11:21:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:21:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:21:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.533 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.535 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.535 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:21:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:21:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3562051711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:21:45 np0005535469 nova_compute[254092]: 2025-11-25 16:21:45.938 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.079 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.080 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5154MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.080 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.080 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.165 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.166 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.190 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:21:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:21:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669472854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.605 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.610 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.623 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.624 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:21:46 np0005535469 nova_compute[254092]: 2025-11-25 16:21:46.625 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:21:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 11:21:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:47 np0005535469 nova_compute[254092]: 2025-11-25 16:21:47.625 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:47 np0005535469 nova_compute[254092]: 2025-11-25 16:21:47.626 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:47 np0005535469 nova_compute[254092]: 2025-11-25 16:21:47.626 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:47 np0005535469 nova_compute[254092]: 2025-11-25 16:21:47.626 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:21:48 np0005535469 nova_compute[254092]: 2025-11-25 16:21:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:48 np0005535469 nova_compute[254092]: 2025-11-25 16:21:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:21:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 25 11:21:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 11:21:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 25 11:21:49 np0005535469 nova_compute[254092]: 2025-11-25 16:21:49.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:49 np0005535469 nova_compute[254092]: 2025-11-25 16:21:49.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:21:49 np0005535469 nova_compute[254092]: 2025-11-25 16:21:49.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:21:49 np0005535469 nova_compute[254092]: 2025-11-25 16:21:49.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:21:49 np0005535469 nova_compute[254092]: 2025-11-25 16:21:49.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 716 B/s wr, 3 op/s
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:21:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:21:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 11:21:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 11:21:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:21:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064269425' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:21:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:21:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064269425' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:21:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 11:21:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:21:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 11:22:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 25 11:22:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 767 B/s wr, 9 op/s
Nov 25 11:22:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:09 np0005535469 podman[263547]: 2025-11-25 16:22:09.640373003 +0000 UTC m=+0.061234077 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:22:09 np0005535469 podman[263548]: 2025-11-25 16:22:09.665695348 +0000 UTC m=+0.080180669 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:22:09 np0005535469 podman[263546]: 2025-11-25 16:22:09.672888533 +0000 UTC m=+0.093553471 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:22:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:22:13.589 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:22:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:22:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:22:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:22:40
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:22:40 np0005535469 podman[263612]: 2025-11-25 16:22:40.650304892 +0000 UTC m=+0.055201434 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 11:22:40 np0005535469 podman[263611]: 2025-11-25 16:22:40.661955277 +0000 UTC m=+0.069700556 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 25 11:22:40 np0005535469 podman[263613]: 2025-11-25 16:22:40.718466716 +0000 UTC m=+0.116467961 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:40 np0005535469 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.174195) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761174233, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1253, "num_deletes": 254, "total_data_size": 1845245, "memory_usage": 1869920, "flush_reason": "Manual Compaction"}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761340887, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1825986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19869, "largest_seqno": 21121, "table_properties": {"data_size": 1819921, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12744, "raw_average_key_size": 20, "raw_value_size": 1807736, "raw_average_value_size": 2842, "num_data_blocks": 154, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087647, "oldest_key_time": 1764087647, "file_creation_time": 1764087761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 166728 microseconds, and 5764 cpu microseconds.
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.340922) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1825986 bytes OK
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.340941) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376165) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376204) EVENT_LOG_v1 {"time_micros": 1764087761376193, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1839532, prev total WAL file size 1839532, number of live WAL files 2.
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.377146) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1783KB)], [47(7570KB)]
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761377277, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9577687, "oldest_snapshot_seqno": -1}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4438 keys, 7799225 bytes, temperature: kUnknown
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761510359, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7799225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7768250, "index_size": 18769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110265, "raw_average_key_size": 24, "raw_value_size": 7686540, "raw_average_value_size": 1731, "num_data_blocks": 786, "num_entries": 4438, "num_filter_entries": 4438, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.510592) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7799225 bytes
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.517084) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.9 rd, 58.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.5) write-amplify(4.3) OK, records in: 4959, records dropped: 521 output_compression: NoCompression
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.517116) EVENT_LOG_v1 {"time_micros": 1764087761517101, "job": 24, "event": "compaction_finished", "compaction_time_micros": 133126, "compaction_time_cpu_micros": 28947, "output_level": 6, "num_output_files": 1, "total_output_size": 7799225, "num_input_records": 4959, "num_output_records": 4438, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761517937, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761520780, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:22:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:22:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:22:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a45ac62c-cd05-4507-93b2-392f60410e38 does not exist
Nov 25 11:22:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5d45e04c-b164-49a7-9267-44ff4c21784a does not exist
Nov 25 11:22:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7db5c926-8408-4eb1-b3ac-2b1d3607e2e7 does not exist
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:22:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:22:43 np0005535469 podman[263948]: 2025-11-25 16:22:43.711064237 +0000 UTC m=+0.038629045 container create 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:22:43 np0005535469 systemd[1]: Started libpod-conmon-91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37.scope.
Nov 25 11:22:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:22:43 np0005535469 podman[263948]: 2025-11-25 16:22:43.694273443 +0000 UTC m=+0.021838261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:22:43 np0005535469 podman[263948]: 2025-11-25 16:22:43.807675821 +0000 UTC m=+0.135240649 container init 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:22:43 np0005535469 podman[263948]: 2025-11-25 16:22:43.816240113 +0000 UTC m=+0.143804921 container start 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:22:43 np0005535469 podman[263948]: 2025-11-25 16:22:43.820296412 +0000 UTC m=+0.147861240 container attach 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:22:43 np0005535469 epic_cori[263964]: 167 167
Nov 25 11:22:43 np0005535469 systemd[1]: libpod-91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37.scope: Deactivated successfully.
Nov 25 11:22:43 np0005535469 podman[263948]: 2025-11-25 16:22:43.822688937 +0000 UTC m=+0.150253775 container died 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 11:22:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-af0fdc1beafaee6c5bc6ebeb48caf38ecb3c04a583dcdba31f857127f99c9e4c-merged.mount: Deactivated successfully.
Nov 25 11:22:44 np0005535469 podman[263948]: 2025-11-25 16:22:44.008464032 +0000 UTC m=+0.336028870 container remove 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:22:44 np0005535469 systemd[1]: libpod-conmon-91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37.scope: Deactivated successfully.
Nov 25 11:22:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:22:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:22:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:22:44 np0005535469 podman[263991]: 2025-11-25 16:22:44.243258022 +0000 UTC m=+0.075243295 container create 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:22:44 np0005535469 systemd[1]: Started libpod-conmon-2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29.scope.
Nov 25 11:22:44 np0005535469 podman[263991]: 2025-11-25 16:22:44.210335192 +0000 UTC m=+0.042320485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:22:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:22:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:44 np0005535469 podman[263991]: 2025-11-25 16:22:44.335858567 +0000 UTC m=+0.167843840 container init 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:22:44 np0005535469 podman[263991]: 2025-11-25 16:22:44.351395447 +0000 UTC m=+0.183380700 container start 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:22:44 np0005535469 podman[263991]: 2025-11-25 16:22:44.374648076 +0000 UTC m=+0.206633369 container attach 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:22:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:45 np0005535469 pedantic_davinci[264007]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:22:45 np0005535469 pedantic_davinci[264007]: --> relative data size: 1.0
Nov 25 11:22:45 np0005535469 pedantic_davinci[264007]: --> All data devices are unavailable
Nov 25 11:22:45 np0005535469 systemd[1]: libpod-2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29.scope: Deactivated successfully.
Nov 25 11:22:45 np0005535469 podman[264036]: 2025-11-25 16:22:45.401345516 +0000 UTC m=+0.021831182 container died 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:22:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed-merged.mount: Deactivated successfully.
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:45 np0005535469 podman[264036]: 2025-11-25 16:22:45.505061251 +0000 UTC m=+0.125546937 container remove 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:22:45 np0005535469 systemd[1]: libpod-conmon-2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29.scope: Deactivated successfully.
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:22:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:22:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471127480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:22:45 np0005535469 nova_compute[254092]: 2025-11-25 16:22:45.941 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.113 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.115 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.115 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.116 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.171 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.172 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.192 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.233988296 +0000 UTC m=+0.037219177 container create 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:22:46 np0005535469 systemd[1]: Started libpod-conmon-539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6.scope.
Nov 25 11:22:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.310772703 +0000 UTC m=+0.114003634 container init 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.217929102 +0000 UTC m=+0.021160023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.319343335 +0000 UTC m=+0.122574246 container start 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:22:46 np0005535469 brave_gould[264231]: 167 167
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.323999601 +0000 UTC m=+0.127230532 container attach 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:22:46 np0005535469 systemd[1]: libpod-539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6.scope: Deactivated successfully.
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.325747669 +0000 UTC m=+0.128978550 container died 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:22:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0aaaac24547034cc81cd47812e125a26b0fa20ec4e8a0e78c1e256531718edba-merged.mount: Deactivated successfully.
Nov 25 11:22:46 np0005535469 podman[264213]: 2025-11-25 16:22:46.372062791 +0000 UTC m=+0.175293682 container remove 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:22:46 np0005535469 systemd[1]: libpod-conmon-539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6.scope: Deactivated successfully.
Nov 25 11:22:46 np0005535469 podman[264273]: 2025-11-25 16:22:46.518712058 +0000 UTC m=+0.036655202 container create bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:22:46 np0005535469 systemd[1]: Started libpod-conmon-bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1.scope.
Nov 25 11:22:46 np0005535469 podman[264273]: 2025-11-25 16:22:46.503045974 +0000 UTC m=+0.020989138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:22:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:22:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006967742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:22:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.629 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:22:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.636 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:22:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:46 np0005535469 podman[264273]: 2025-11-25 16:22:46.652795065 +0000 UTC m=+0.170738239 container init bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.656 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.658 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.658 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.659 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:46 np0005535469 nova_compute[254092]: 2025-11-25 16:22:46.659 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 11:22:46 np0005535469 podman[264273]: 2025-11-25 16:22:46.660088541 +0000 UTC m=+0.178031685 container start bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:22:46 np0005535469 podman[264273]: 2025-11-25 16:22:46.663469693 +0000 UTC m=+0.181412857 container attach bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:22:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]: {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:    "0": [
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:        {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "devices": [
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "/dev/loop3"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            ],
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_name": "ceph_lv0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_size": "21470642176",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "name": "ceph_lv0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "tags": {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cluster_name": "ceph",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.crush_device_class": "",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.encrypted": "0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osd_id": "0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.type": "block",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.vdo": "0"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            },
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "type": "block",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "vg_name": "ceph_vg0"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:        }
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:    ],
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:    "1": [
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:        {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "devices": [
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "/dev/loop4"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            ],
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_name": "ceph_lv1",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_size": "21470642176",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "name": "ceph_lv1",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "tags": {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cluster_name": "ceph",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.crush_device_class": "",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.encrypted": "0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osd_id": "1",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.type": "block",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.vdo": "0"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            },
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "type": "block",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "vg_name": "ceph_vg1"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:        }
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:    ],
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:    "2": [
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:        {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "devices": [
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "/dev/loop5"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            ],
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_name": "ceph_lv2",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_size": "21470642176",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "name": "ceph_lv2",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "tags": {
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.cluster_name": "ceph",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.crush_device_class": "",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.encrypted": "0",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osd_id": "2",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.type": "block",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:                "ceph.vdo": "0"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            },
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "type": "block",
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:            "vg_name": "ceph_vg2"
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:        }
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]:    ]
Nov 25 11:22:47 np0005535469 wonderful_dirac[264289]: }
Nov 25 11:22:47 np0005535469 systemd[1]: libpod-bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1.scope: Deactivated successfully.
Nov 25 11:22:47 np0005535469 podman[264273]: 2025-11-25 16:22:47.404194768 +0000 UTC m=+0.922137912 container died bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:22:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe-merged.mount: Deactivated successfully.
Nov 25 11:22:47 np0005535469 podman[264273]: 2025-11-25 16:22:47.469110533 +0000 UTC m=+0.987053677 container remove bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:22:47 np0005535469 systemd[1]: libpod-conmon-bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1.scope: Deactivated successfully.
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 11:22:47 np0005535469 nova_compute[254092]: 2025-11-25 16:22:47.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.025449471 +0000 UTC m=+0.039311685 container create 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:22:48 np0005535469 systemd[1]: Started libpod-conmon-2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2.scope.
Nov 25 11:22:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.091098757 +0000 UTC m=+0.104960991 container init 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.100137171 +0000 UTC m=+0.113999385 container start 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.009215502 +0000 UTC m=+0.023077736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.102824404 +0000 UTC m=+0.116686618 container attach 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:22:48 np0005535469 frosty_nash[264469]: 167 167
Nov 25 11:22:48 np0005535469 systemd[1]: libpod-2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2.scope: Deactivated successfully.
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.108815955 +0000 UTC m=+0.122678169 container died 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:22:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-654d2e50ba428c8447f1e9879b79299950a43263e7891cf77f3e32a8464a5643-merged.mount: Deactivated successfully.
Nov 25 11:22:48 np0005535469 podman[264453]: 2025-11-25 16:22:48.147678627 +0000 UTC m=+0.161540881 container remove 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:22:48 np0005535469 systemd[1]: libpod-conmon-2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2.scope: Deactivated successfully.
Nov 25 11:22:48 np0005535469 podman[264493]: 2025-11-25 16:22:48.320135032 +0000 UTC m=+0.043986291 container create 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:22:48 np0005535469 systemd[1]: Started libpod-conmon-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope.
Nov 25 11:22:48 np0005535469 podman[264493]: 2025-11-25 16:22:48.304823807 +0000 UTC m=+0.028675086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:22:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:22:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:22:48 np0005535469 podman[264493]: 2025-11-25 16:22:48.431216815 +0000 UTC m=+0.155068094 container init 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 11:22:48 np0005535469 podman[264493]: 2025-11-25 16:22:48.44134255 +0000 UTC m=+0.165193819 container start 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:22:48 np0005535469 podman[264493]: 2025-11-25 16:22:48.445364508 +0000 UTC m=+0.169215797 container attach 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:22:48 np0005535469 nova_compute[254092]: 2025-11-25 16:22:48.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]: {
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "osd_id": 1,
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "type": "bluestore"
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:    },
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "osd_id": 2,
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "type": "bluestore"
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:    },
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "osd_id": 0,
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:        "type": "bluestore"
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]:    }
Nov 25 11:22:49 np0005535469 peaceful_babbage[264510]: }
Nov 25 11:22:49 np0005535469 systemd[1]: libpod-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope: Deactivated successfully.
Nov 25 11:22:49 np0005535469 conmon[264510]: conmon 9e3094993b554076d394 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope/container/memory.events
Nov 25 11:22:49 np0005535469 podman[264493]: 2025-11-25 16:22:49.367608623 +0000 UTC m=+1.091459892 container died 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:22:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a-merged.mount: Deactivated successfully.
Nov 25 11:22:49 np0005535469 podman[264493]: 2025-11-25 16:22:49.429265021 +0000 UTC m=+1.153116300 container remove 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:22:49 np0005535469 systemd[1]: libpod-conmon-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope: Deactivated successfully.
Nov 25 11:22:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:22:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:22:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:22:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:22:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 99a15ce4-0995-4fa5-8ff1-e03efa2333e6 does not exist
Nov 25 11:22:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 365e2596-a0e6-41ed-b63d-e9846e2ab0a4 does not exist
Nov 25 11:22:49 np0005535469 nova_compute[254092]: 2025-11-25 16:22:49.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:49 np0005535469 nova_compute[254092]: 2025-11-25 16:22:49.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:49 np0005535469 nova_compute[254092]: 2025-11-25 16:22:49.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:22:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:22:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:22:50 np0005535469 nova_compute[254092]: 2025-11-25 16:22:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:50 np0005535469 nova_compute[254092]: 2025-11-25 16:22:50.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:22:50 np0005535469 nova_compute[254092]: 2025-11-25 16:22:50.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:22:50 np0005535469 nova_compute[254092]: 2025-11-25 16:22:50.535 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:22:50 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:22:51 np0005535469 nova_compute[254092]: 2025-11-25 16:22:51.529 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:53 np0005535469 nova_compute[254092]: 2025-11-25 16:22:53.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:22:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:22:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752993815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:22:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:22:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752993815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:22:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:22:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:22:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:23:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:11 np0005535469 podman[264607]: 2025-11-25 16:23:11.64949398 +0000 UTC m=+0.060769345 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:23:11 np0005535469 podman[264606]: 2025-11-25 16:23:11.680373955 +0000 UTC m=+0.091554757 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:23:11 np0005535469 podman[264608]: 2025-11-25 16:23:11.680447227 +0000 UTC m=+0.087198330 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 25 11:23:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:23:13.591 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:23:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:23:13.592 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:23:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:23:13.592 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:23:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:34 np0005535469 nova_compute[254092]: 2025-11-25 16:23:34.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 25 11:23:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 11:23:35 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 25 11:23:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 21 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Nov 25 11:23:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 25 11:23:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 11:23:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 25 11:23:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 21 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 22 op/s
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:23:40
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:23:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:23:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 5920 writes, 24K keys, 5920 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5920 writes, 990 syncs, 5.98 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 376 writes, 869 keys, 376 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s#012Interval WAL: 376 writes, 162 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:23:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 11:23:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:42 np0005535469 podman[264670]: 2025-11-25 16:23:42.637613817 +0000 UTC m=+0.054243888 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:23:42 np0005535469 podman[264671]: 2025-11-25 16:23:42.677140243 +0000 UTC m=+0.090574217 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:23:42 np0005535469 podman[264669]: 2025-11-25 16:23:42.691899015 +0000 UTC m=+0.112102413 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:23:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 11:23:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.3 MiB/s wr, 33 op/s
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:23:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Nov 25 11:23:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:23:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289942496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:23:46 np0005535469 nova_compute[254092]: 2025-11-25 16:23:46.939 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.060 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.061 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.061 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.061 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:23:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.243 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.311 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.375 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.375 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.391 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.413 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.428 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:23:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:23:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417170017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.837 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.843 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.859 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.860 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:23:47 np0005535469 nova_compute[254092]: 2025-11-25 16:23:47.860 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:23:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 MiB/s wr, 18 op/s
Nov 25 11:23:49 np0005535469 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:49 np0005535469 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:49 np0005535469 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:49 np0005535469 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:49 np0005535469 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:23:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.2 total, 600.0 interval#012Cumulative writes: 7071 writes, 28K keys, 7071 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7071 writes, 1329 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 459 writes, 1272 keys, 459 commit groups, 1.0 writes per commit group, ingest: 0.61 MB, 0.00 MB/s#012Interval WAL: 459 writes, 186 syncs, 2.47 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:23:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:23:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:23:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:50 np0005535469 nova_compute[254092]: 2025-11-25 16:23:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:50 np0005535469 nova_compute[254092]: 2025-11-25 16:23:50.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:23:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8fe984d4-d710-4024-850c-250280033c1a does not exist
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 86d7d757-0822-4c60-b80c-b2565f9da7f1 does not exist
Nov 25 11:23:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9190ca86-5d2c-4c55-aa11-f0cb26456ac0 does not exist
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:23:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:23:51 np0005535469 nova_compute[254092]: 2025-11-25 16:23:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:51 np0005535469 nova_compute[254092]: 2025-11-25 16:23:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:23:51 np0005535469 nova_compute[254092]: 2025-11-25 16:23:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:23:51 np0005535469 nova_compute[254092]: 2025-11-25 16:23:51.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.704664302 +0000 UTC m=+0.038585891 container create ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:23:51 np0005535469 systemd[1]: Started libpod-conmon-ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94.scope.
Nov 25 11:23:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.77220116 +0000 UTC m=+0.106122759 container init ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.77806735 +0000 UTC m=+0.111988939 container start ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.781076411 +0000 UTC m=+0.114998010 container attach ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:23:51 np0005535469 funny_neumann[265295]: 167 167
Nov 25 11:23:51 np0005535469 systemd[1]: libpod-ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94.scope: Deactivated successfully.
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.783160188 +0000 UTC m=+0.117081777 container died ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.690338352 +0000 UTC m=+0.024259971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:23:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-707d5b6f2d5d46115eb5bc15d4b667507bb01f26a5339a99ff78ce93a5404597-merged.mount: Deactivated successfully.
Nov 25 11:23:51 np0005535469 podman[265279]: 2025-11-25 16:23:51.822013936 +0000 UTC m=+0.155935525 container remove ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 11:23:51 np0005535469 systemd[1]: libpod-conmon-ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94.scope: Deactivated successfully.
Nov 25 11:23:51 np0005535469 podman[265319]: 2025-11-25 16:23:51.977209509 +0000 UTC m=+0.040295197 container create 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:23:52 np0005535469 systemd[1]: Started libpod-conmon-2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1.scope.
Nov 25 11:23:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:23:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:52 np0005535469 podman[265319]: 2025-11-25 16:23:51.958948133 +0000 UTC m=+0.022033851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:23:52 np0005535469 podman[265319]: 2025-11-25 16:23:52.073434537 +0000 UTC m=+0.136520245 container init 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:23:52 np0005535469 podman[265319]: 2025-11-25 16:23:52.079526704 +0000 UTC m=+0.142612412 container start 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:23:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:52 np0005535469 podman[265319]: 2025-11-25 16:23:52.142752714 +0000 UTC m=+0.205838432 container attach 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:23:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:23:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:23:52 np0005535469 nova_compute[254092]: 2025-11-25 16:23:52.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:23:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 25 11:23:53 np0005535469 condescending_payne[265336]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:23:53 np0005535469 condescending_payne[265336]: --> relative data size: 1.0
Nov 25 11:23:53 np0005535469 condescending_payne[265336]: --> All data devices are unavailable
Nov 25 11:23:53 np0005535469 systemd[1]: libpod-2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1.scope: Deactivated successfully.
Nov 25 11:23:53 np0005535469 podman[265365]: 2025-11-25 16:23:53.112399462 +0000 UTC m=+0.023104099 container died 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:23:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d-merged.mount: Deactivated successfully.
Nov 25 11:23:53 np0005535469 podman[265365]: 2025-11-25 16:23:53.162759503 +0000 UTC m=+0.073464120 container remove 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:23:53 np0005535469 systemd[1]: libpod-conmon-2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1.scope: Deactivated successfully.
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.7990852 +0000 UTC m=+0.045747816 container create 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:23:53 np0005535469 systemd[1]: Started libpod-conmon-7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90.scope.
Nov 25 11:23:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.86596213 +0000 UTC m=+0.112624776 container init 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.778233823 +0000 UTC m=+0.024896499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.874064661 +0000 UTC m=+0.120727277 container start 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:23:53 np0005535469 modest_diffie[265534]: 167 167
Nov 25 11:23:53 np0005535469 systemd[1]: libpod-7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90.scope: Deactivated successfully.
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.888685328 +0000 UTC m=+0.135347944 container attach 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.889182643 +0000 UTC m=+0.135845259 container died 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 11:23:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ded201f0456a0665df37a41a439310ab6948e2900fac73bd341dd4d45c225d5d-merged.mount: Deactivated successfully.
Nov 25 11:23:53 np0005535469 podman[265518]: 2025-11-25 16:23:53.980145458 +0000 UTC m=+0.226808104 container remove 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:23:53 np0005535469 systemd[1]: libpod-conmon-7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90.scope: Deactivated successfully.
Nov 25 11:23:54 np0005535469 podman[265559]: 2025-11-25 16:23:54.176552443 +0000 UTC m=+0.043224337 container create f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:23:54 np0005535469 systemd[1]: Started libpod-conmon-f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af.scope.
Nov 25 11:23:54 np0005535469 podman[265559]: 2025-11-25 16:23:54.161210696 +0000 UTC m=+0.027882610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:23:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:23:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:54 np0005535469 podman[265559]: 2025-11-25 16:23:54.282552988 +0000 UTC m=+0.149224902 container init f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:23:54 np0005535469 podman[265559]: 2025-11-25 16:23:54.294956205 +0000 UTC m=+0.161628099 container start f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:23:54 np0005535469 podman[265559]: 2025-11-25 16:23:54.303295372 +0000 UTC m=+0.169967286 container attach f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:23:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]: {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:    "0": [
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:        {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "devices": [
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "/dev/loop3"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            ],
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_name": "ceph_lv0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_size": "21470642176",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "name": "ceph_lv0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "tags": {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cluster_name": "ceph",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.crush_device_class": "",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.encrypted": "0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osd_id": "0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.type": "block",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.vdo": "0"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            },
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "type": "block",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "vg_name": "ceph_vg0"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:        }
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:    ],
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:    "1": [
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:        {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "devices": [
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "/dev/loop4"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            ],
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_name": "ceph_lv1",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_size": "21470642176",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "name": "ceph_lv1",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "tags": {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cluster_name": "ceph",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.crush_device_class": "",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.encrypted": "0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osd_id": "1",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.type": "block",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.vdo": "0"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            },
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "type": "block",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "vg_name": "ceph_vg1"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:        }
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:    ],
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:    "2": [
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:        {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "devices": [
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "/dev/loop5"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            ],
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_name": "ceph_lv2",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_size": "21470642176",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "name": "ceph_lv2",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "tags": {
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.cluster_name": "ceph",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.crush_device_class": "",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.encrypted": "0",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osd_id": "2",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.type": "block",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:                "ceph.vdo": "0"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            },
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "type": "block",
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:            "vg_name": "ceph_vg2"
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:        }
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]:    ]
Nov 25 11:23:54 np0005535469 gifted_pasteur[265575]: }
Nov 25 11:23:55 np0005535469 systemd[1]: libpod-f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af.scope: Deactivated successfully.
Nov 25 11:23:55 np0005535469 podman[265559]: 2025-11-25 16:23:55.019066952 +0000 UTC m=+0.885738846 container died f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:23:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0-merged.mount: Deactivated successfully.
Nov 25 11:23:55 np0005535469 podman[265559]: 2025-11-25 16:23:55.093287051 +0000 UTC m=+0.959958945 container remove f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:23:55 np0005535469 systemd[1]: libpod-conmon-f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af.scope: Deactivated successfully.
Nov 25 11:23:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:23:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4284690352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:23:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:23:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4284690352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:23:55 np0005535469 podman[265737]: 2025-11-25 16:23:55.7586944 +0000 UTC m=+0.022677058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:23:55 np0005535469 podman[265737]: 2025-11-25 16:23:55.933861148 +0000 UTC m=+0.197843826 container create 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:23:56 np0005535469 systemd[1]: Started libpod-conmon-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope.
Nov 25 11:23:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:23:56 np0005535469 podman[265737]: 2025-11-25 16:23:56.145116677 +0000 UTC m=+0.409099365 container init 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:23:56 np0005535469 podman[265737]: 2025-11-25 16:23:56.162785367 +0000 UTC m=+0.426768015 container start 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 11:23:56 np0005535469 crazy_nobel[265753]: 167 167
Nov 25 11:23:56 np0005535469 systemd[1]: libpod-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope: Deactivated successfully.
Nov 25 11:23:56 np0005535469 conmon[265753]: conmon 1f18fc416e480548640e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope/container/memory.events
Nov 25 11:23:56 np0005535469 podman[265737]: 2025-11-25 16:23:56.210447944 +0000 UTC m=+0.474430632 container attach 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:23:56 np0005535469 podman[265737]: 2025-11-25 16:23:56.215001829 +0000 UTC m=+0.478984477 container died 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:23:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-78d34f77d87ecb5a90f171ca3b6988f234925c5781bf76649025e8295f17f41c-merged.mount: Deactivated successfully.
Nov 25 11:23:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:56 np0005535469 podman[265737]: 2025-11-25 16:23:56.760745961 +0000 UTC m=+1.024728599 container remove 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:23:56 np0005535469 systemd[1]: libpod-conmon-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope: Deactivated successfully.
Nov 25 11:23:57 np0005535469 podman[265778]: 2025-11-25 16:23:56.950317189 +0000 UTC m=+0.043354280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:23:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:23:57 np0005535469 podman[265778]: 2025-11-25 16:23:57.171294313 +0000 UTC m=+0.264331354 container create 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:23:57 np0005535469 systemd[1]: Started libpod-conmon-6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62.scope.
Nov 25 11:23:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:23:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:23:57 np0005535469 podman[265778]: 2025-11-25 16:23:57.724864168 +0000 UTC m=+0.817901199 container init 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:23:57 np0005535469 podman[265778]: 2025-11-25 16:23:57.732139246 +0000 UTC m=+0.825176277 container start 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:23:57 np0005535469 podman[265778]: 2025-11-25 16:23:57.809150012 +0000 UTC m=+0.902187043 container attach 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]: {
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "osd_id": 1,
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "type": "bluestore"
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:    },
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "osd_id": 2,
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "type": "bluestore"
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:    },
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "osd_id": 0,
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:        "type": "bluestore"
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]:    }
Nov 25 11:23:58 np0005535469 nifty_chatterjee[265795]: }
Nov 25 11:23:58 np0005535469 systemd[1]: libpod-6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62.scope: Deactivated successfully.
Nov 25 11:23:58 np0005535469 podman[265778]: 2025-11-25 16:23:58.676834196 +0000 UTC m=+1.769871267 container died 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:23:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:23:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166-merged.mount: Deactivated successfully.
Nov 25 11:23:58 np0005535469 podman[265778]: 2025-11-25 16:23:58.957334539 +0000 UTC m=+2.050371570 container remove 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:23:58 np0005535469 systemd[1]: libpod-conmon-6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62.scope: Deactivated successfully.
Nov 25 11:23:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:23:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:23:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:23:59 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 79a5725d-a130-4503-9325-d2a045057ab4 does not exist
Nov 25 11:23:59 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b697e559-9e54-4b37-8c86-82bd0df39313 does not exist
Nov 25 11:24:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:24:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:24:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:24:04.812 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:24:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:24:04.814 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:24:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:24:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.5 total, 600.0 interval#012Cumulative writes: 6266 writes, 25K keys, 6266 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6266 writes, 1100 syncs, 5.70 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 588 writes, 1697 keys, 588 commit groups, 1.0 writes per commit group, ingest: 0.89 MB, 0.00 MB/s#012Interval WAL: 588 writes, 256 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:24:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:24:07.816 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:24:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:24:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:24:13.592 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:24:13.593 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:24:13.593 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:13 np0005535469 podman[265893]: 2025-11-25 16:24:13.639906998 +0000 UTC m=+0.057819665 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 11:24:13 np0005535469 podman[265892]: 2025-11-25 16:24:13.66457251 +0000 UTC m=+0.076968167 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 11:24:13 np0005535469 podman[265894]: 2025-11-25 16:24:13.712446602 +0000 UTC m=+0.129725271 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:24:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.360 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.361 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.433 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:24:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.852 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.853 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.860 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.861 254096 INFO nova.compute.claims [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:24:18 np0005535469 nova_compute[254092]: 2025-11-25 16:24:18.986 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954788808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.429 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.436 254096 DEBUG nova.compute.provider_tree [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.452 254096 DEBUG nova.scheduler.client.report [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.498 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.499 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.643 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.643 254096 DEBUG nova.network.neutron [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.691 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.720 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.916 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.918 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.918 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Creating image(s)#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.944 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.968 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.988 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.991 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:19 np0005535469 nova_compute[254092]: 2025-11-25 16:24:19.992 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:20 np0005535469 nova_compute[254092]: 2025-11-25 16:24:20.267 254096 DEBUG nova.virt.libvirt.imagebackend [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/8b512c8e-2281-41de-a668-eb983e174ba0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/8b512c8e-2281-41de-a668-eb983e174ba0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 11:24:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:21 np0005535469 nova_compute[254092]: 2025-11-25 16:24:21.270 254096 DEBUG nova.network.neutron [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:24:21 np0005535469 nova_compute[254092]: 2025-11-25 16:24:21.270 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:24:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:23 np0005535469 nova_compute[254092]: 2025-11-25 16:24:23.450 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:23 np0005535469 nova_compute[254092]: 2025-11-25 16:24:23.510 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:23 np0005535469 nova_compute[254092]: 2025-11-25 16:24:23.511 254096 DEBUG nova.virt.images [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] 8b512c8e-2281-41de-a668-eb983e174ba0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 25 11:24:23 np0005535469 nova_compute[254092]: 2025-11-25 16:24:23.516 254096 DEBUG nova.privsep.utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 11:24:23 np0005535469 nova_compute[254092]: 2025-11-25 16:24:23.516 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 11:24:24 np0005535469 nova_compute[254092]: 2025-11-25 16:24:24.896 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:24 np0005535469 nova_compute[254092]: 2025-11-25 16:24:24.896 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:24 np0005535469 nova_compute[254092]: 2025-11-25 16:24:24.923 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.059 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.059 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.065 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.065 254096 INFO nova.compute.claims [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.200 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.409 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted" returned: 0 in 1.892s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.413 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.481 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.482 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.504 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.507 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a04ee12e-fa6a-4458-9472-b68930d7ba89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085360644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.634 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.639 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.674 254096 ERROR nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [req-fd1b6266-fde5-4532-8e04-c3b0cd76e047] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 4f066da7-306c-41d7-8522-9a9189cacc49.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-fd1b6266-fde5-4532-8e04-c3b0cd76e047"}]}
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.692 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.707 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.708 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.722 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.744 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 11:24:25 np0005535469 nova_compute[254092]: 2025-11-25 16:24:25.821 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582008969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.231 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.237 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.328 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updated inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.329 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.330 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:24:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 11:24:26 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.576 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.578 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.633 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.664 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.682 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:24:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.805 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.806 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.806 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Creating image(s)
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.875 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.895 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.921 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:26 np0005535469 nova_compute[254092]: 2025-11-25 16:24:26.926 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:27 np0005535469 nova_compute[254092]: 2025-11-25 16:24:27.006 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:27 np0005535469 nova_compute[254092]: 2025-11-25 16:24:27.007 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:27 np0005535469 nova_compute[254092]: 2025-11-25 16:24:27.008 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:27 np0005535469 nova_compute[254092]: 2025-11-25 16:24:27.008 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:27 np0005535469 nova_compute[254092]: 2025-11-25 16:24:27.029 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:27 np0005535469 nova_compute[254092]: 2025-11-25 16:24:27.033 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 25 11:24:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 11:24:28 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 25 11:24:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Nov 25 11:24:29 np0005535469 nova_compute[254092]: 2025-11-25 16:24:29.873 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:29 np0005535469 nova_compute[254092]: 2025-11-25 16:24:29.919 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] resizing rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:24:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 76 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.150 254096 DEBUG nova.objects.instance [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bb45e40-37dd-4cae-a966-ecbd9260eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.161 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.162 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Ensure instance console log exists: /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.162 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.163 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.163 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.164 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.168 254096 WARNING nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.171 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.172 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.174 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.175 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.175 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.175 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.178 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.193 254096 DEBUG nova.privsep.utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.193 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:24:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715378179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.625 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.704 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.708 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.778 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a04ee12e-fa6a-4458-9472-b68930d7ba89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.822 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] resizing rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.932 254096 DEBUG nova.objects.instance [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'migration_context' on Instance uuid a04ee12e-fa6a-4458-9472-b68930d7ba89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.962 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.963 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Ensure instance console log exists: /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.964 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.965 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.965 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.968 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.973 254096 WARNING nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.979 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.980 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.985 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.986 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.987 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.987 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.988 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.989 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.989 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.990 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.990 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.991 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.992 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.992 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.993 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.993 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:24:31 np0005535469 nova_compute[254092]: 2025-11-25 16:24:31.998 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258906684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.138 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.140 254096 DEBUG nova.objects.instance [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bb45e40-37dd-4cae-a966-ecbd9260eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.207 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <uuid>3bb45e40-37dd-4cae-a966-ecbd9260eb35</uuid>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <name>instance-00000002</name>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:name>tempest-AutoAllocateNetworkTest-server-1989432593</nova:name>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:24:31</nova:creationTime>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:user uuid="cb43f8dd9fea4dfb9a472f26cde44200">tempest-AutoAllocateNetworkTest-168723719-project-member</nova:user>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:project uuid="fdc770ac85b7451fbb50764fcc8bf038">tempest-AutoAllocateNetworkTest-168723719</nova:project>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="serial">3bb45e40-37dd-4cae-a966-ecbd9260eb35</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="uuid">3bb45e40-37dd-4cae-a966-ecbd9260eb35</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/console.log" append="off"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:24:32 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:24:32 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.252 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.252 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.253 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Using config drive#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.272 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658841496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.445 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.477 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.481 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 35 op/s
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.839 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Creating config drive at /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.849 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnvz1az8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:24:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230360792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.915 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.917 254096 DEBUG nova.objects.instance [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'pci_devices' on Instance uuid a04ee12e-fa6a-4458-9472-b68930d7ba89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.931 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <uuid>a04ee12e-fa6a-4458-9472-b68930d7ba89</uuid>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <name>instance-00000001</name>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-577439965</nova:name>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:24:31</nova:creationTime>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:user uuid="d72f101cd6d049f694a1d30145a4ed24">tempest-DeleteServersAdminTestJSON-1062463304-project-member</nova:user>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <nova:project uuid="185de66120c0404eb338d72909c776df">tempest-DeleteServersAdminTestJSON-1062463304</nova:project>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="serial">a04ee12e-fa6a-4458-9472-b68930d7ba89</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="uuid">a04ee12e-fa6a-4458-9472-b68930d7ba89</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a04ee12e-fa6a-4458-9472-b68930d7ba89_disk">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/console.log" append="off"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:24:32 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:24:32 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:24:32 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:24:32 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.970 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.971 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.971 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Using config drive#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.991 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:32 np0005535469 nova_compute[254092]: 2025-11-25 16:24:32.995 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnvz1az8n" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.014 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.018 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.143 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.144 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deleting local config drive /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config because it was imported into RBD.#033[00m
Nov 25 11:24:33 np0005535469 systemd[1]: Starting libvirt secret daemon...
Nov 25 11:24:33 np0005535469 systemd[1]: Started libvirt secret daemon.
Nov 25 11:24:33 np0005535469 systemd-machined[216343]: New machine qemu-1-instance-00000002.
Nov 25 11:24:33 np0005535469 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.303 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Creating config drive at /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.308 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpimj7_php execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.428 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpimj7_php" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.447 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:33 np0005535469 nova_compute[254092]: 2025-11-25 16:24:33.451 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.458 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087874.4577172, 3bb45e40-37dd-4cae-a966-ecbd9260eb35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.459 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.462 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.463 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.467 254096 INFO nova.virt.libvirt.driver [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance spawned successfully.#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.468 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.507 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.510 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.553 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.554 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087874.4614913, 3bb45e40-37dd-4cae-a966-ecbd9260eb35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.554 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] VM Started (Lifecycle Event)#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.576 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.576 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.577 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.577 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.578 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.578 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.583 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.586 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.635 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.760 254096 INFO nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 7.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.762 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:24:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Nov 25 11:24:34 np0005535469 nova_compute[254092]: 2025-11-25 16:24:34.923 254096 INFO nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 9.89 seconds to build instance.#033[00m
Nov 25 11:24:35 np0005535469 nova_compute[254092]: 2025-11-25 16:24:35.138 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Nov 25 11:24:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:37 np0005535469 nova_compute[254092]: 2025-11-25 16:24:37.908 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:37 np0005535469 nova_compute[254092]: 2025-11-25 16:24:37.909 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deleting local config drive /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config because it was imported into RBD.
Nov 25 11:24:37 np0005535469 systemd-machined[216343]: New machine qemu-2-instance-00000001.
Nov 25 11:24:37 np0005535469 systemd[1]: Started Virtual Machine qemu-2-instance-00000001.
Nov 25 11:24:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 583 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.903 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.904 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.904 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.905 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.905 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.906 254096 INFO nova.compute.manager [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Terminating instance
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.907 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "refresh_cache-3bb45e40-37dd-4cae-a966-ecbd9260eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.908 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquired lock "refresh_cache-3bb45e40-37dd-4cae-a966-ecbd9260eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:24:38 np0005535469 nova_compute[254092]: 2025-11-25 16:24:38.908 254096 DEBUG nova.network.neutron [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.276 254096 DEBUG nova.network.neutron [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.326 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087879.3258498, a04ee12e-fa6a-4458-9472-b68930d7ba89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.326 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] VM Resumed (Lifecycle Event)
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.329 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.329 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.332 254096 INFO nova.virt.libvirt.driver [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance spawned successfully.
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.332 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.353 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.356 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.357 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.357 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.358 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.358 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.358 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.362 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.415 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.415 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087879.328936, a04ee12e-fa6a-4458-9472-b68930d7ba89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.416 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] VM Started (Lifecycle Event)
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.428 254096 INFO nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 19.51 seconds to spawn the instance on the hypervisor.
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.429 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.434 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.436 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.467 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.509 254096 INFO nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 20.71 seconds to build instance.
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.541 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.829 254096 DEBUG nova.network.neutron [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.843 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Releasing lock "refresh_cache-3bb45e40-37dd-4cae-a966-ecbd9260eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:24:39 np0005535469 nova_compute[254092]: 2025-11-25 16:24:39.843 254096 DEBUG nova.compute.manager [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:24:40
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:24:40 np0005535469 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 11:24:40 np0005535469 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 5.866s CPU time.
Nov 25 11:24:40 np0005535469 systemd-machined[216343]: Machine qemu-1-instance-00000002 terminated.
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.257 254096 INFO nova.virt.libvirt.driver [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance destroyed successfully.
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.258 254096 DEBUG nova.objects.instance [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lazy-loading 'resources' on Instance uuid 3bb45e40-37dd-4cae-a966-ecbd9260eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.567 254096 INFO nova.virt.libvirt.driver [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deleting instance files /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35_del
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.567 254096 INFO nova.virt.libvirt.driver [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deletion of /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35_del complete
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.663 254096 DEBUG nova.virt.libvirt.host [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.664 254096 INFO nova.virt.libvirt.host [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] UEFI support detected
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.666 254096 INFO nova.compute.manager [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.666 254096 DEBUG oslo.service.loopingcall [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.667 254096 DEBUG nova.compute.manager [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.667 254096 DEBUG nova.network.neutron [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:24:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.963 254096 DEBUG nova.network.neutron [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.974 254096 DEBUG nova.network.neutron [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:24:40 np0005535469 nova_compute[254092]: 2025-11-25 16:24:40.986 254096 INFO nova.compute.manager [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 0.32 seconds to deallocate network for instance.
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.032 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.032 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.103 254096 DEBUG oslo_concurrency.processutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.402 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "a04ee12e-fa6a-4458-9472-b68930d7ba89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.405 254096 INFO nova.compute.manager [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Terminating instance
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.405 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "refresh_cache-a04ee12e-fa6a-4458-9472-b68930d7ba89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.405 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquired lock "refresh_cache-a04ee12e-fa6a-4458-9472-b68930d7ba89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.406 254096 DEBUG nova.network.neutron [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:24:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4051828136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.586 254096 DEBUG oslo_concurrency.processutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.591 254096 DEBUG nova.compute.provider_tree [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.606 254096 DEBUG nova.scheduler.client.report [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.627 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.650 254096 INFO nova.scheduler.client.report [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Deleted allocations for instance 3bb45e40-37dd-4cae-a966-ecbd9260eb35
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.664 254096 DEBUG nova.network.neutron [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.721 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.953 254096 DEBUG nova.network.neutron [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.971 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Releasing lock "refresh_cache-a04ee12e-fa6a-4458-9472-b68930d7ba89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:24:41 np0005535469 nova_compute[254092]: 2025-11-25 16:24:41.971 254096 DEBUG nova.compute.manager [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:24:42 np0005535469 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 11:24:42 np0005535469 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 4.036s CPU time.
Nov 25 11:24:42 np0005535469 systemd-machined[216343]: Machine qemu-2-instance-00000001 terminated.
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.193 254096 INFO nova.virt.libvirt.driver [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance destroyed successfully.#033[00m
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.194 254096 DEBUG nova.objects.instance [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lazy-loading 'resources' on Instance uuid a04ee12e-fa6a-4458-9472-b68930d7ba89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:24:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 25 11:24:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 11:24:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.646 254096 INFO nova.virt.libvirt.driver [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deleting instance files /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89_del#033[00m
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.647 254096 INFO nova.virt.libvirt.driver [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deletion of /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89_del complete#033[00m
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.729 254096 INFO nova.compute.manager [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.730 254096 DEBUG oslo.service.loopingcall [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.731 254096 DEBUG nova.compute.manager [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:24:42 np0005535469 nova_compute[254092]: 2025-11-25 16:24:42.731 254096 DEBUG nova.network.neutron [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:24:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 204 op/s
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.045 254096 DEBUG nova.network.neutron [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.058 254096 DEBUG nova.network.neutron [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.079 254096 INFO nova.compute.manager [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 0.35 seconds to deallocate network for instance.#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.122 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.122 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.167 254096 DEBUG oslo_concurrency.processutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814541778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.664 254096 DEBUG oslo_concurrency.processutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.670 254096 DEBUG nova.compute.provider_tree [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.686 254096 DEBUG nova.scheduler.client.report [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.710 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.736 254096 INFO nova.scheduler.client.report [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Deleted allocations for instance a04ee12e-fa6a-4458-9472-b68930d7ba89#033[00m
Nov 25 11:24:43 np0005535469 nova_compute[254092]: 2025-11-25 16:24:43.968 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:44 np0005535469 podman[266826]: 2025-11-25 16:24:44.653518811 +0000 UTC m=+0.066406539 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 11:24:44 np0005535469 podman[266825]: 2025-11-25 16:24:44.680588227 +0000 UTC m=+0.093819324 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:24:44 np0005535469 podman[266827]: 2025-11-25 16:24:44.695627537 +0000 UTC m=+0.104342141 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:24:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 204 op/s
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 204 op/s
Nov 25 11:24:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1832746166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:46 np0005535469 nova_compute[254092]: 2025-11-25 16:24:46.959 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.151 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.154 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5063MB free_disk=59.950958251953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.154 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.154 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.239 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.258 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.608 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.608 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.630 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:24:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824501766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.707 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.711 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.716 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.729 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.750 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.751 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.752 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.758 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.759 254096 INFO nova.compute.claims [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:24:47 np0005535469 nova_compute[254092]: 2025-11-25 16:24:47.895 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684486207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.333 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.340 254096 DEBUG nova.compute.provider_tree [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.353 254096 DEBUG nova.scheduler.client.report [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.378 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.379 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.439 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.440 254096 DEBUG nova.network.neutron [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.463 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.519 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.611 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.613 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.613 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Creating image(s)
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.634 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.657 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.678 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.682 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.769 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.770 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.770 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.771 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 204 op/s
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.793 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:48 np0005535469 nova_compute[254092]: 2025-11-25 16:24:48.797 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.545 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.748s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.597 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] resizing rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.753 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.754 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.793 254096 DEBUG nova.objects.instance [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'migration_context' on Instance uuid d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.810 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.810 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Ensure instance console log exists: /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.811 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.811 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:49 np0005535469 nova_compute[254092]: 2025-11-25 16:24:49.811 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.023 254096 DEBUG nova.network.neutron [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.024 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.025 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.031 254096 WARNING nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.042 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.042 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.046 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.046 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.047 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.047 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.048 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.049 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.049 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.049 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.050 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.050 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.051 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.051 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.051 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.052 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.056 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:24:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59151704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.601 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.630 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:50 np0005535469 nova_compute[254092]: 2025-11-25 16:24:50.634 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 76 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 172 op/s
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00021105755925636519 of space, bias 1.0, pg target 0.06331726777690956 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:24:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:24:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875928217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.090 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.093 254096 DEBUG nova.objects.instance [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'pci_devices' on Instance uuid d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.112 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <uuid>d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce</uuid>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <name>instance-00000003</name>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1684340739</nova:name>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:24:50</nova:creationTime>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:user uuid="d72f101cd6d049f694a1d30145a4ed24">tempest-DeleteServersAdminTestJSON-1062463304-project-member</nova:user>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <nova:project uuid="185de66120c0404eb338d72909c776df">tempest-DeleteServersAdminTestJSON-1062463304</nova:project>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <entry name="serial">d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce</entry>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <entry name="uuid">d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce</entry>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/console.log" append="off"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:24:51 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:24:51 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:24:51 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:24:51 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.214 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.215 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.216 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Using config drive
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.238 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.738 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Creating config drive at /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.748 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxpnwewa6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.883 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxpnwewa6" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.915 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:24:51 np0005535469 nova_compute[254092]: 2025-11-25 16:24:51.920 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:52 np0005535469 nova_compute[254092]: 2025-11-25 16:24:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:52 np0005535469 nova_compute[254092]: 2025-11-25 16:24:52.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:24:52 np0005535469 nova_compute[254092]: 2025-11-25 16:24:52.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:24:52 np0005535469 nova_compute[254092]: 2025-11-25 16:24:52.524 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 11:24:52 np0005535469 nova_compute[254092]: 2025-11-25 16:24:52.524 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:24:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 88 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 566 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.078 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.079 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deleting local config drive /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config because it was imported into RBD.
Nov 25 11:24:53 np0005535469 systemd-machined[216343]: New machine qemu-3-instance-00000003.
Nov 25 11:24:53 np0005535469 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.663 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087893.662957, d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.664 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] VM Resumed (Lifecycle Event)
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.667 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.668 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.672 254096 INFO nova.virt.libvirt.driver [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance spawned successfully.
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.673 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.693 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.698 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.702 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.702 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.703 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.703 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.703 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.704 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.726 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.726 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087893.6641448, d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.727 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] VM Started (Lifecycle Event)
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.761 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.765 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.789 254096 INFO nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 5.18 seconds to spawn the instance on the hypervisor.
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.790 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.798 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.869 254096 INFO nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 6.19 seconds to build instance.
Nov 25 11:24:53 np0005535469 nova_compute[254092]: 2025-11-25 16:24:53.884 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.123 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.124 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.140 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.215 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.215 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.221 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.221 254096 INFO nova.compute.claims [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.347 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:24:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 88 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 488 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 25 11:24:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:24:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438810855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.829 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.835 254096 DEBUG nova.compute.provider_tree [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.850 254096 DEBUG nova.scheduler.client.report [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.873 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.873 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.913 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.914 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.931 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:24:54 np0005535469 nova_compute[254092]: 2025-11-25 16:24:54.952 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.039 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.040 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.040 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Creating image(s)#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.068 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.098 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.126 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.133 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:24:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176779923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:24:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:24:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176779923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.200 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.202 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.204 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.204 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.229 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.233 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.256 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087880.2558208, 3bb45e40-37dd-4cae-a966-ecbd9260eb35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.257 254096 INFO nova.compute.manager [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.288 254096 DEBUG nova.compute.manager [None req-70e7ad0d-7ea4-4a83-b23d-4cc07433c264 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.385 254096 WARNING oslo_policy.policy [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.385 254096 WARNING oslo_policy.policy [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.389 254096 DEBUG nova.policy [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '787cb8b4238c4926a4466f3421db09ef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95df0d15c889499aba411e805ea145a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.537 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:55 np0005535469 nova_compute[254092]: 2025-11-25 16:24:55.606 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] resizing rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.004 254096 DEBUG nova.objects.instance [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 71cf0ae0-6191-4b64-9a81-a955d807ceb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.016 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.017 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Ensure instance console log exists: /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.017 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.018 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.020 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.207 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.208 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.209 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.209 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.209 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.211 254096 INFO nova.compute.manager [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Terminating instance#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.212 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "refresh_cache-d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.212 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquired lock "refresh_cache-d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.212 254096 DEBUG nova.network.neutron [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:24:56 np0005535469 nova_compute[254092]: 2025-11-25 16:24:56.579 254096 DEBUG nova.network.neutron [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:24:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 99 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.191 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087882.1905575, a04ee12e-fa6a-4458-9472-b68930d7ba89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.192 254096 INFO nova.compute.manager [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.194 254096 DEBUG nova.network.neutron [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.228 254096 DEBUG nova.compute.manager [None req-8f4e641a-2212-4597-9c36-f3165288a8d8 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.229 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Releasing lock "refresh_cache-d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.229 254096 DEBUG nova.compute.manager [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:24:57 np0005535469 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 25 11:24:57 np0005535469 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.044s CPU time.
Nov 25 11:24:57 np0005535469 systemd-machined[216343]: Machine qemu-3-instance-00000003 terminated.
Nov 25 11:24:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.448 254096 INFO nova.virt.libvirt.driver [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance destroyed successfully.#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.449 254096 DEBUG nova.objects.instance [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'resources' on Instance uuid d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:24:57 np0005535469 nova_compute[254092]: 2025-11-25 16:24:57.657 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Successfully created port: 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:24:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 99 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.350 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Successfully updated port: 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.393 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.394 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquired lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.394 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.416 254096 DEBUG oslo_concurrency.processutils [None req-b0b49039-76ee-4344-ad31-434a905e3834 602139a4fa0b4ea6a916c8bf9b9b8ee3 ec8a4d65ee6043d18b0c81f32d05ce40 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.481 254096 DEBUG oslo_concurrency.processutils [None req-b0b49039-76ee-4344-ad31-434a905e3834 602139a4fa0b4ea6a916c8bf9b9b8ee3 ec8a4d65ee6043d18b0c81f32d05ce40 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:24:59 np0005535469 nova_compute[254092]: 2025-11-25 16:24:59.590 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:25:00 np0005535469 podman[267783]: 2025-11-25 16:25:00.351218574 +0000 UTC m=+0.021540347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:00 np0005535469 podman[267783]: 2025-11-25 16:25:00.481247163 +0000 UTC m=+0.151568866 container create 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:25:00 np0005535469 systemd[1]: Started libpod-conmon-0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168.scope.
Nov 25 11:25:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:00 np0005535469 podman[267783]: 2025-11-25 16:25:00.663105943 +0000 UTC m=+0.333427666 container init 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:25:00 np0005535469 podman[267783]: 2025-11-25 16:25:00.669793774 +0000 UTC m=+0.340115477 container start 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:25:00 np0005535469 cranky_villani[267800]: 167 167
Nov 25 11:25:00 np0005535469 systemd[1]: libpod-0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168.scope: Deactivated successfully.
Nov 25 11:25:00 np0005535469 podman[267783]: 2025-11-25 16:25:00.789538533 +0000 UTC m=+0.459860256 container attach 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:25:00 np0005535469 podman[267783]: 2025-11-25 16:25:00.790138349 +0000 UTC m=+0.460460052 container died 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:25:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 134 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 134 op/s
Nov 25 11:25:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4995e226f290018d4d77c41ed4c88168a60bf1aaf3e3b94d12eaf03a61ef2dd9-merged.mount: Deactivated successfully.
Nov 25 11:25:01 np0005535469 podman[267783]: 2025-11-25 16:25:01.307300554 +0000 UTC m=+0.977622267 container remove 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:25:01 np0005535469 systemd[1]: libpod-conmon-0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168.scope: Deactivated successfully.
Nov 25 11:25:01 np0005535469 podman[267826]: 2025-11-25 16:25:01.513116725 +0000 UTC m=+0.062807460 container create 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:25:01 np0005535469 podman[267826]: 2025-11-25 16:25:01.474858213 +0000 UTC m=+0.024548938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:01 np0005535469 systemd[1]: Started libpod-conmon-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope.
Nov 25 11:25:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:01 np0005535469 podman[267826]: 2025-11-25 16:25:01.637115569 +0000 UTC m=+0.186806314 container init 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:25:01 np0005535469 podman[267826]: 2025-11-25 16:25:01.644348576 +0000 UTC m=+0.194039281 container start 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.645 254096 INFO nova.virt.libvirt.driver [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deleting instance files /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_del#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.646 254096 INFO nova.virt.libvirt.driver [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deletion of /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_del complete#033[00m
Nov 25 11:25:01 np0005535469 podman[267826]: 2025-11-25 16:25:01.659997502 +0000 UTC m=+0.209688247 container attach 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.676 254096 DEBUG nova.compute.manager [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.677 254096 DEBUG nova.compute.manager [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing instance network info cache due to event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.677 254096 DEBUG oslo_concurrency.lockutils [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.738 254096 INFO nova.compute.manager [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 4.51 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.739 254096 DEBUG oslo.service.loopingcall [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.739 254096 DEBUG nova.compute.manager [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.740 254096 DEBUG nova.network.neutron [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.983 254096 DEBUG nova.network.neutron [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:25:01 np0005535469 nova_compute[254092]: 2025-11-25 16:25:01.996 254096 DEBUG nova.network.neutron [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.009 254096 INFO nova.compute.manager [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 0.27 seconds to deallocate network for instance.#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.098 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.099 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.190 254096 DEBUG oslo_concurrency.processutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.288 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.339 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Releasing lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.340 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance network_info: |[{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.341 254096 DEBUG oslo_concurrency.lockutils [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.342 254096 DEBUG nova.network.neutron [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.345 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start _get_guest_xml network_info=[{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.351 254096 WARNING nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.362 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.363 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.367 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.367 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.368 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.368 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:24:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='964051212',id=9,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1974927111',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.373 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042876011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.681 254096 DEBUG oslo_concurrency.processutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.688 254096 DEBUG nova.compute.provider_tree [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.713 254096 DEBUG nova.scheduler.client.report [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:25:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Nov 25 11:25:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136750213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.822 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.843 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.846 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:02 np0005535469 nova_compute[254092]: 2025-11-25 16:25:02.937 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.018 254096 INFO nova.scheduler.client.report [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Deleted allocations for instance d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce#033[00m
Nov 25 11:25:03 np0005535469 strange_pike[267844]: [
Nov 25 11:25:03 np0005535469 strange_pike[267844]:    {
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "available": false,
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "ceph_device": false,
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "lsm_data": {},
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "lvs": [],
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "path": "/dev/sr0",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "rejected_reasons": [
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "Has a FileSystem",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "Insufficient space (<5GB)"
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        ],
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        "sys_api": {
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "actuators": null,
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "device_nodes": "sr0",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "devname": "sr0",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "human_readable_size": "482.00 KB",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "id_bus": "ata",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "model": "QEMU DVD-ROM",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "nr_requests": "2",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "parent": "/dev/sr0",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "partitions": {},
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "path": "/dev/sr0",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "removable": "1",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "rev": "2.5+",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "ro": "0",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "rotational": "1",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "sas_address": "",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "sas_device_handle": "",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "scheduler_mode": "mq-deadline",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "sectors": 0,
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "sectorsize": "2048",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "size": 493568.0,
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "support_discard": "2048",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "type": "disk",
Nov 25 11:25:03 np0005535469 strange_pike[267844]:            "vendor": "QEMU"
Nov 25 11:25:03 np0005535469 strange_pike[267844]:        }
Nov 25 11:25:03 np0005535469 strange_pike[267844]:    }
Nov 25 11:25:03 np0005535469 strange_pike[267844]: ]
Nov 25 11:25:03 np0005535469 systemd[1]: libpod-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope: Deactivated successfully.
Nov 25 11:25:03 np0005535469 podman[267826]: 2025-11-25 16:25:03.075973107 +0000 UTC m=+1.625663822 container died 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:25:03 np0005535469 systemd[1]: libpod-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope: Consumed 1.437s CPU time.
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.114 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f-merged.mount: Deactivated successfully.
Nov 25 11:25:03 np0005535469 podman[267826]: 2025-11-25 16:25:03.194228655 +0000 UTC m=+1.743919360 container remove 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:03 np0005535469 systemd[1]: libpod-conmon-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope: Deactivated successfully.
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323743500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:03 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 85f74c62-98ea-4cab-ae2d-c55962a1ccf6 does not exist
Nov 25 11:25:03 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5d5878a4-9993-4157-86cc-b51ee86754f3 does not exist
Nov 25 11:25:03 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e6173d6f-bcdb-4fd2-ad90-45d13513c2b8 does not exist
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.272 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.274 254096 DEBUG nova.virt.libvirt.vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(9),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-531690935',id=4,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=9,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-88lf5wc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:24:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=71cf0ae0-6191-4b64-9a81-a955d807ceb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:25:03 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.277 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.278 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.280 254096 DEBUG nova.objects.instance [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 71cf0ae0-6191-4b64-9a81-a955d807ceb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.304 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <uuid>71cf0ae0-6191-4b64-9a81-a955d807ceb4</uuid>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <name>instance-00000004</name>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-531690935</nova:name>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:25:02</nova:creationTime>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1974927111">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:user uuid="787cb8b4238c4926a4466f3421db09ef">tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member</nova:user>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:project uuid="95df0d15c889499aba411e805ea145a5">tempest-ServersWithSpecificFlavorTestJSON-341738122</nova:project>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <nova:port uuid="7adfcb53-33cb-482b-ba39-82d0ee72c4ea">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <entry name="serial">71cf0ae0-6191-4b64-9a81-a955d807ceb4</entry>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <entry name="uuid">71cf0ae0-6191-4b64-9a81-a955d807ceb4</entry>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:7d:06:61"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <target dev="tap7adfcb53-33"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/console.log" append="off"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:25:03 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:25:03 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:25:03 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:25:03 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.304 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Preparing to wait for external event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.304 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.305 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.305 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.306 254096 DEBUG nova.virt.libvirt.vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(9),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-531690935',id=4,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=9,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-88lf5wc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:24:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=71cf0ae0-6191-4b64-9a81-a955d807ceb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.306 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.308 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.308 254096 DEBUG os_vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.389 254096 DEBUG ovsdbapp.backend.ovs_idl [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.389 254096 DEBUG ovsdbapp.backend.ovs_idl [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.389 254096 DEBUG ovsdbapp.backend.ovs_idl [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.403 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.403 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:25:03 np0005535469 nova_compute[254092]: 2025-11-25 16:25:03.404 254096 INFO oslo.privsep.daemon [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmphzcrj_n1/privsep.sock']#033[00m
Nov 25 11:25:03 np0005535469 podman[270116]: 2025-11-25 16:25:03.868468684 +0000 UTC m=+0.063399396 container create 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:25:03 np0005535469 systemd[1]: Started libpod-conmon-382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde.scope.
Nov 25 11:25:03 np0005535469 podman[270116]: 2025-11-25 16:25:03.830948363 +0000 UTC m=+0.025879095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:03 np0005535469 podman[270116]: 2025-11-25 16:25:03.983483664 +0000 UTC m=+0.178414406 container init 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:25:03 np0005535469 podman[270116]: 2025-11-25 16:25:03.992982042 +0000 UTC m=+0.187912764 container start 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:25:03 np0005535469 eager_edison[270132]: 167 167
Nov 25 11:25:03 np0005535469 podman[270116]: 2025-11-25 16:25:03.9987792 +0000 UTC m=+0.193709932 container attach 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 11:25:03 np0005535469 systemd[1]: libpod-382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde.scope: Deactivated successfully.
Nov 25 11:25:04 np0005535469 podman[270116]: 2025-11-25 16:25:04.000248761 +0000 UTC m=+0.195179483 container died 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:25:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3f62538e57847b148cc866dc33ad70edf69d1d49d96287a30462150cfd2ab06a-merged.mount: Deactivated successfully.
Nov 25 11:25:04 np0005535469 podman[270116]: 2025-11-25 16:25:04.11744899 +0000 UTC m=+0.312379722 container remove 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:25:04 np0005535469 systemd[1]: libpod-conmon-382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde.scope: Deactivated successfully.
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.151 254096 DEBUG nova.network.neutron [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updated VIF entry in instance network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.152 254096 DEBUG nova.network.neutron [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.169 254096 DEBUG oslo_concurrency.lockutils [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:04 np0005535469 podman[270155]: 2025-11-25 16:25:04.342966137 +0000 UTC m=+0.100385423 container create 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 11:25:04 np0005535469 podman[270155]: 2025-11-25 16:25:04.265693295 +0000 UTC m=+0.023112601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:04 np0005535469 systemd[1]: Started libpod-conmon-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope.
Nov 25 11:25:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.455 254096 INFO oslo.privsep.daemon [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.327 270169 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.331 270169 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.333 270169 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.333 270169 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270169#033[00m
Nov 25 11:25:04 np0005535469 podman[270155]: 2025-11-25 16:25:04.528424614 +0000 UTC m=+0.285843920 container init 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:25:04 np0005535469 podman[270155]: 2025-11-25 16:25:04.53523065 +0000 UTC m=+0.292649946 container start 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:25:04 np0005535469 podman[270155]: 2025-11-25 16:25:04.558877113 +0000 UTC m=+0.316296419 container attach 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 25 11:25:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.819 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7adfcb53-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.820 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7adfcb53-33, col_values=(('external_ids', {'iface-id': '7adfcb53-33cb-482b-ba39-82d0ee72c4ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:06:61', 'vm-uuid': '71cf0ae0-6191-4b64-9a81-a955d807ceb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:04 np0005535469 NetworkManager[48891]: <info>  [1764087904.8220] manager: (tap7adfcb53-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:04 np0005535469 nova_compute[254092]: 2025-11-25 16:25:04.833 254096 INFO os_vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33')#033[00m
Nov 25 11:25:05 np0005535469 nova_compute[254092]: 2025-11-25 16:25:05.058 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:05 np0005535469 nova_compute[254092]: 2025-11-25 16:25:05.058 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:05 np0005535469 nova_compute[254092]: 2025-11-25 16:25:05.058 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No VIF found with MAC fa:16:3e:7d:06:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:25:05 np0005535469 nova_compute[254092]: 2025-11-25 16:25:05.059 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Using config drive#033[00m
Nov 25 11:25:05 np0005535469 nova_compute[254092]: 2025-11-25 16:25:05.078 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:06 np0005535469 hardcore_ritchie[270172]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:25:06 np0005535469 hardcore_ritchie[270172]: --> relative data size: 1.0
Nov 25 11:25:06 np0005535469 hardcore_ritchie[270172]: --> All data devices are unavailable
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.015 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Creating config drive at /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config#033[00m
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.024 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvoratgj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:06 np0005535469 systemd[1]: libpod-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope: Deactivated successfully.
Nov 25 11:25:06 np0005535469 podman[270155]: 2025-11-25 16:25:06.044003479 +0000 UTC m=+1.801422765 container died 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:25:06 np0005535469 systemd[1]: libpod-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope: Consumed 1.119s CPU time.
Nov 25 11:25:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.047 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:25:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.048 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b-merged.mount: Deactivated successfully.
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.153 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvoratgj" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.175 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.180 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:06 np0005535469 podman[270155]: 2025-11-25 16:25:06.255432424 +0000 UTC m=+2.012851710 container remove 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:25:06 np0005535469 systemd[1]: libpod-conmon-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope: Deactivated successfully.
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.514 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.515 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deleting local config drive /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config because it was imported into RBD.
Nov 25 11:25:06 np0005535469 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 11:25:06 np0005535469 kernel: tap7adfcb53-33: entered promiscuous mode
Nov 25 11:25:06 np0005535469 NetworkManager[48891]: <info>  [1764087906.5995] manager: (tap7adfcb53-33): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:06Z|00027|binding|INFO|Claiming lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea for this chassis.
Nov 25 11:25:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:06Z|00028|binding|INFO|7adfcb53-33cb-482b-ba39-82d0ee72c4ea: Claiming fa:16:3e:7d:06:61 10.100.0.3
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.633 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:06:61 10.100.0.3'], port_security=['fa:16:3e:7d:06:61 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '71cf0ae0-6191-4b64-9a81-a955d807ceb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7adfcb53-33cb-482b-ba39-82d0ee72c4ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:25:06 np0005535469 systemd-udevd[270394]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:25:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.634 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d bound to our chassis
Nov 25 11:25:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.636 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 11:25:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.637 163338 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp4a18eijb/privsep.sock']
Nov 25 11:25:06 np0005535469 NetworkManager[48891]: <info>  [1764087906.6565] device (tap7adfcb53-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:25:06 np0005535469 NetworkManager[48891]: <info>  [1764087906.6576] device (tap7adfcb53-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:25:06 np0005535469 systemd-machined[216343]: New machine qemu-4-instance-00000004.
Nov 25 11:25:06 np0005535469 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:06Z|00029|binding|INFO|Setting lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea ovn-installed in OVS
Nov 25 11:25:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:06Z|00030|binding|INFO|Setting lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea up in Southbound
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 11:25:06 np0005535469 podman[270450]: 2025-11-25 16:25:06.916845214 +0000 UTC m=+0.042463647 container create 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:25:06 np0005535469 nova_compute[254092]: 2025-11-25 16:25:06.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:06 np0005535469 systemd[1]: Started libpod-conmon-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope.
Nov 25 11:25:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:06 np0005535469 podman[270450]: 2025-11-25 16:25:06.899570034 +0000 UTC m=+0.025188487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:06 np0005535469 podman[270450]: 2025-11-25 16:25:06.998500476 +0000 UTC m=+0.124118929 container init 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:25:07 np0005535469 podman[270450]: 2025-11-25 16:25:07.007430818 +0000 UTC m=+0.133049251 container start 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:25:07 np0005535469 busy_bell[270468]: 167 167
Nov 25 11:25:07 np0005535469 systemd[1]: libpod-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope: Deactivated successfully.
Nov 25 11:25:07 np0005535469 conmon[270468]: conmon 049a4cd55a917e57a80f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope/container/memory.events
Nov 25 11:25:07 np0005535469 podman[270450]: 2025-11-25 16:25:07.015027216 +0000 UTC m=+0.140645649 container attach 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:25:07 np0005535469 podman[270450]: 2025-11-25 16:25:07.018397927 +0000 UTC m=+0.144016380 container died 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.141 254096 DEBUG nova.compute.manager [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.143 254096 DEBUG oslo_concurrency.lockutils [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.143 254096 DEBUG oslo_concurrency.lockutils [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.144 254096 DEBUG oslo_concurrency.lockutils [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.144 254096 DEBUG nova.compute.manager [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Processing event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:25:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6adf3984ab92d16509350a16b052e04d81ded97682c0c05264e9d4fd0bb704ff-merged.mount: Deactivated successfully.
Nov 25 11:25:07 np0005535469 podman[270450]: 2025-11-25 16:25:07.219818559 +0000 UTC m=+0.345436992 container remove 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:25:07 np0005535469 systemd[1]: libpod-conmon-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope: Deactivated successfully.
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.362 163338 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.364 163338 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4a18eijb/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.242 270486 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.245 270486 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.247 270486 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.248 270486 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270486
Nov 25 11:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[096b207e-37a7-4cfd-adf6-66a3318c3cbf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:25:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:07 np0005535469 podman[270510]: 2025-11-25 16:25:07.462943205 +0000 UTC m=+0.105041780 container create d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:25:07 np0005535469 podman[270510]: 2025-11-25 16:25:07.383179965 +0000 UTC m=+0.025278580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:07 np0005535469 systemd[1]: Started libpod-conmon-d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5.scope.
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.526 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087907.5262318, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.527 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Started (Lifecycle Event)
Nov 25 11:25:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.532 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.535 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:25:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.538 254096 INFO nova.virt.libvirt.driver [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance spawned successfully.
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.539 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:25:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.557 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.564 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.566 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.567 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.567 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.567 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.568 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.568 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:07 np0005535469 podman[270510]: 2025-11-25 16:25:07.583950738 +0000 UTC m=+0.226049333 container init d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:25:07 np0005535469 podman[270510]: 2025-11-25 16:25:07.593961531 +0000 UTC m=+0.236060086 container start d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.595 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.595 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087907.526333, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.596 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Paused (Lifecycle Event)
Nov 25 11:25:07 np0005535469 podman[270510]: 2025-11-25 16:25:07.604100457 +0000 UTC m=+0.246199052 container attach d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.629 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087907.535144, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.632 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.646 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.649 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.658 254096 INFO nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 12.62 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.659 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.684 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.772 254096 INFO nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 13.58 seconds to build instance.#033[00m
Nov 25 11:25:07 np0005535469 nova_compute[254092]: 2025-11-25 16:25:07.816 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]: {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:    "0": [
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:        {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "devices": [
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "/dev/loop3"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            ],
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_name": "ceph_lv0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_size": "21470642176",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "name": "ceph_lv0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "tags": {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cluster_name": "ceph",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.crush_device_class": "",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.encrypted": "0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osd_id": "0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.type": "block",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.vdo": "0"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            },
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "type": "block",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "vg_name": "ceph_vg0"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:        }
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:    ],
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:    "1": [
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:        {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "devices": [
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "/dev/loop4"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            ],
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_name": "ceph_lv1",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_size": "21470642176",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "name": "ceph_lv1",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "tags": {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cluster_name": "ceph",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.crush_device_class": "",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.encrypted": "0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osd_id": "1",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.type": "block",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.vdo": "0"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            },
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "type": "block",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "vg_name": "ceph_vg1"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:        }
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:    ],
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:    "2": [
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:        {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "devices": [
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "/dev/loop5"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            ],
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_name": "ceph_lv2",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_size": "21470642176",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "name": "ceph_lv2",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "tags": {
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.cluster_name": "ceph",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.crush_device_class": "",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.encrypted": "0",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osd_id": "2",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.type": "block",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:                "ceph.vdo": "0"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            },
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "type": "block",
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:            "vg_name": "ceph_vg2"
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:        }
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]:    ]
Nov 25 11:25:08 np0005535469 nervous_banzai[270553]: }
Nov 25 11:25:08 np0005535469 systemd[1]: libpod-d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5.scope: Deactivated successfully.
Nov 25 11:25:08 np0005535469 podman[270510]: 2025-11-25 16:25:08.372835678 +0000 UTC m=+1.014934243 container died d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:25:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc-merged.mount: Deactivated successfully.
Nov 25 11:25:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Nov 25 11:25:09 np0005535469 podman[270510]: 2025-11-25 16:25:09.447121023 +0000 UTC m=+2.089219588 container remove d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.452 254096 DEBUG nova.compute.manager [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.452 254096 DEBUG oslo_concurrency.lockutils [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.452 254096 DEBUG oslo_concurrency.lockutils [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.453 254096 DEBUG oslo_concurrency.lockutils [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.453 254096 DEBUG nova.compute.manager [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] No waiting events found dispatching network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.453 254096 WARNING nova.compute.manager [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received unexpected event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea for instance with vm_state active and task_state None.#033[00m
Nov 25 11:25:09 np0005535469 systemd[1]: libpod-conmon-d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5.scope: Deactivated successfully.
Nov 25 11:25:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:09.788 270486 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:09.788 270486 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:09.789 270486 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:09 np0005535469 nova_compute[254092]: 2025-11-25 16:25:09.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:25:10 np0005535469 podman[270713]: 2025-11-25 16:25:10.081731214 +0000 UTC m=+0.022307918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:10 np0005535469 podman[270713]: 2025-11-25 16:25:10.198711808 +0000 UTC m=+0.139288492 container create d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:25:10 np0005535469 systemd[1]: Started libpod-conmon-d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1.scope.
Nov 25 11:25:10 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:10 np0005535469 podman[270713]: 2025-11-25 16:25:10.529335436 +0000 UTC m=+0.469912140 container init d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:25:10 np0005535469 podman[270713]: 2025-11-25 16:25:10.540559751 +0000 UTC m=+0.481136435 container start d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 25 11:25:10 np0005535469 unruffled_brattain[270730]: 167 167
Nov 25 11:25:10 np0005535469 systemd[1]: libpod-d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1.scope: Deactivated successfully.
Nov 25 11:25:10 np0005535469 podman[270713]: 2025-11-25 16:25:10.597019718 +0000 UTC m=+0.537596402 container attach d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 11:25:10 np0005535469 podman[270713]: 2025-11-25 16:25:10.597539372 +0000 UTC m=+0.538116056 container died d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:25:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 102 op/s
Nov 25 11:25:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-acc10c22d8a9af846670dfb51fa57b11724cc369e4fdcc87e904b26a01d70e37-merged.mount: Deactivated successfully.
Nov 25 11:25:11 np0005535469 podman[270713]: 2025-11-25 16:25:11.214925183 +0000 UTC m=+1.155501887 container remove d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:11 np0005535469 systemd[1]: libpod-conmon-d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1.scope: Deactivated successfully.
Nov 25 11:25:11 np0005535469 podman[270754]: 2025-11-25 16:25:11.367383753 +0000 UTC m=+0.025497095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:25:11 np0005535469 podman[270754]: 2025-11-25 16:25:11.640459034 +0000 UTC m=+0.298572366 container create a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:25:11 np0005535469 systemd[1]: Started libpod-conmon-a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a.scope.
Nov 25 11:25:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:11 np0005535469 podman[270754]: 2025-11-25 16:25:11.881443372 +0000 UTC m=+0.539556814 container init a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:11 np0005535469 podman[270754]: 2025-11-25 16:25:11.89314685 +0000 UTC m=+0.551260192 container start a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:11 np0005535469 nova_compute[254092]: 2025-11-25 16:25:11.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:11 np0005535469 podman[270754]: 2025-11-25 16:25:11.964050791 +0000 UTC m=+0.622164203 container attach a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:25:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:12 np0005535469 nova_compute[254092]: 2025-11-25 16:25:12.447 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087897.445856, d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:25:12 np0005535469 nova_compute[254092]: 2025-11-25 16:25:12.448 254096 INFO nova.compute.manager [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:25:12 np0005535469 nova_compute[254092]: 2025-11-25 16:25:12.471 254096 DEBUG nova.compute.manager [None req-d0378137-4191-4176-a6b2-cd3084e3fd56 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 93 op/s
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]: {
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "osd_id": 1,
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "type": "bluestore"
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:    },
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "osd_id": 2,
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "type": "bluestore"
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:    },
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "osd_id": 0,
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:        "type": "bluestore"
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]:    }
Nov 25 11:25:12 np0005535469 hopeful_wozniak[270773]: }
Nov 25 11:25:12 np0005535469 systemd[1]: libpod-a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a.scope: Deactivated successfully.
Nov 25 11:25:12 np0005535469 podman[270754]: 2025-11-25 16:25:12.870406436 +0000 UTC m=+1.528519768 container died a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4-merged.mount: Deactivated successfully.
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.594 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.650 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16b7479b-b138-4b79-a3a4-39fbfcfea5f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.652 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd4097f8-d1 in ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.655 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd4097f8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.655 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7472398-17db-421c-8487-db756f277a6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.663 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9180679f-3ce7-48d9-8ba1-99500dc808d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.692 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bc48e43e-990d-46e4-9174-53d573296656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:13 np0005535469 podman[270754]: 2025-11-25 16:25:13.697445894 +0000 UTC m=+2.355559226 container remove a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:25:13 np0005535469 systemd[1]: libpod-conmon-a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a.scope: Deactivated successfully.
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.730 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af713821-28d1-45cd-8b95-cd7c28e2d619]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.733 163338 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6in_ubzq/privsep.sock']#033[00m
Nov 25 11:25:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:25:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:25:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:13 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2fcbd9e6-acb0-40ab-b451-385104887e48 does not exist
Nov 25 11:25:13 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e34d79f6-4496-4189-ad3c-1307eb1b79f1 does not exist
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3632] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3640] device (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3653] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3656] device (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3667] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3673] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3678] device (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 11:25:14 np0005535469 NetworkManager[48891]: <info>  [1764087914.3682] device (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.409 163338 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.409 163338 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6in_ubzq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.256 270879 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.260 270879 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.262 270879 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.262 270879 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270879#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.411 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c865618a-0e95-402b-a7a1-e63b26b61398]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:25:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 92 op/s
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.834 254096 DEBUG nova.compute.manager [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.835 254096 DEBUG nova.compute.manager [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing instance network info cache due to event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.835 254096 DEBUG oslo_concurrency.lockutils [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.836 254096 DEBUG oslo_concurrency.lockutils [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:25:14 np0005535469 nova_compute[254092]: 2025-11-25 16:25:14.836 254096 DEBUG nova.network.neutron [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.902 270879 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.902 270879 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.902 270879 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.481 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[80c7ef4e-2350-4a55-9b84-e2cdfdf1506c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 NetworkManager[48891]: <info>  [1764087915.5047] manager: (tapdd4097f8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b6c04d-8c48-43aa-8720-513a82e22255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.544 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc06131-587b-4d1c-ab5b-d3dda791679b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.548 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1c210419-c4e9-46dc-b5fe-9e1071fad83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 NetworkManager[48891]: <info>  [1764087915.5749] device (tapdd4097f8-d0): carrier: link connected
Nov 25 11:25:15 np0005535469 systemd-udevd[270928]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.581 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2203ecf4-8a44-45fa-8273-0382fb7ae8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfdb9dd-add5-42df-acb8-439e2cf270df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437316, 'reachable_time': 34477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270941, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[637a82c1-4e0b-46d2-9036-460c2bd7afe1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:9711'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437316, 'tstamp': 437316}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270968, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 podman[270889]: 2025-11-25 16:25:15.634391447 +0000 UTC m=+0.102875671 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:25:15 np0005535469 podman[270892]: 2025-11-25 16:25:15.634463509 +0000 UTC m=+0.097846104 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:25:15 np0005535469 podman[270891]: 2025-11-25 16:25:15.638314923 +0000 UTC m=+0.106928971 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.647 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[63e668a0-ce4b-4267-9b6b-17015e45132e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437316, 'reachable_time': 34477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270972, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.674 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a9f54f-eb58-4f25-ae9c-735a56da72cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a680a899-f1e2-49a1-a617-e13101be109d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.730 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd4097f8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:15 np0005535469 kernel: tapdd4097f8-d0: entered promiscuous mode
Nov 25 11:25:15 np0005535469 NetworkManager[48891]: <info>  [1764087915.7348] manager: (tapdd4097f8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 11:25:15 np0005535469 nova_compute[254092]: 2025-11-25 16:25:15.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:15 np0005535469 nova_compute[254092]: 2025-11-25 16:25:15.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.737 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd4097f8-d0, col_values=(('external_ids', {'iface-id': '65cb2392-e609-45e8-bc45-ba0ce2e7d527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:15Z|00031|binding|INFO|Releasing lport 65cb2392-e609-45e8-bc45-ba0ce2e7d527 from this chassis (sb_readonly=0)
Nov 25 11:25:15 np0005535469 nova_compute[254092]: 2025-11-25 16:25:15.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.742 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8187a8e-cdaa-4a3e-8c70-e2f676b9131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.743 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:25:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.744 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'env', 'PROCESS_TAG=haproxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd4097f8-dcdf-451c-8fbb-2057e86e375d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:25:15 np0005535469 nova_compute[254092]: 2025-11-25 16:25:15.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:16 np0005535469 podman[271006]: 2025-11-25 16:25:16.081484195 +0000 UTC m=+0.053652142 container create c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 11:25:16 np0005535469 systemd[1]: Started libpod-conmon-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e.scope.
Nov 25 11:25:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a5ecbe109b46af30ce696e568d65890d385bb2e36f742ce025f714d7cdb26e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:16 np0005535469 podman[271006]: 2025-11-25 16:25:16.048871237 +0000 UTC m=+0.021039204 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:25:16 np0005535469 podman[271006]: 2025-11-25 16:25:16.151631054 +0000 UTC m=+0.123799051 container init c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:25:16 np0005535469 podman[271006]: 2025-11-25 16:25:16.156696251 +0000 UTC m=+0.128864198 container start c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:16 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : New worker (271027) forked
Nov 25 11:25:16 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : Loading success.
Nov 25 11:25:16 np0005535469 nova_compute[254092]: 2025-11-25 16:25:16.493 254096 DEBUG nova.network.neutron [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updated VIF entry in instance network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:25:16 np0005535469 nova_compute[254092]: 2025-11-25 16:25:16.493 254096 DEBUG nova.network.neutron [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:16 np0005535469 nova_compute[254092]: 2025-11-25 16:25:16.516 254096 DEBUG oslo_concurrency.lockutils [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 92 op/s
Nov 25 11:25:17 np0005535469 nova_compute[254092]: 2025-11-25 16:25:17.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 25 11:25:19 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 11:25:19 np0005535469 nova_compute[254092]: 2025-11-25 16:25:19.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 95 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 996 KiB/s wr, 91 op/s
Nov 25 11:25:22 np0005535469 nova_compute[254092]: 2025-11-25 16:25:22.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:22 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 11:25:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:22Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:06:61 10.100.0.3
Nov 25 11:25:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:22Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:06:61 10.100.0.3
Nov 25 11:25:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 107 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 599 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 11:25:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 107 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 11:25:24 np0005535469 nova_compute[254092]: 2025-11-25 16:25:24.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 11:25:27 np0005535469 nova_compute[254092]: 2025-11-25 16:25:27.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 11:25:29 np0005535469 nova_compute[254092]: 2025-11-25 16:25:29.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.314 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.314 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.315 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.315 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.315 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.316 254096 INFO nova.compute.manager [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Terminating instance#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.317 254096 DEBUG nova.compute.manager [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:25:31 np0005535469 kernel: tap7adfcb53-33 (unregistering): left promiscuous mode
Nov 25 11:25:31 np0005535469 NetworkManager[48891]: <info>  [1764087931.4212] device (tap7adfcb53-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:25:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:31Z|00032|binding|INFO|Releasing lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea from this chassis (sb_readonly=0)
Nov 25 11:25:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:31Z|00033|binding|INFO|Setting lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea down in Southbound
Nov 25 11:25:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:31Z|00034|binding|INFO|Removing iface tap7adfcb53-33 ovn-installed in OVS
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.452 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:06:61 10.100.0.3'], port_security=['fa:16:3e:7d:06:61 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '71cf0ae0-6191-4b64-9a81-a955d807ceb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7adfcb53-33cb-482b-ba39-82d0ee72c4ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:25:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d unbound from our chassis#033[00m
Nov 25 11:25:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.456 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd4097f8-dcdf-451c-8fbb-2057e86e375d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:25:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.457 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40c03e34-eb65-4bbc-aa55-a7ad2b94e6af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.457 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace which is not needed anymore#033[00m
Nov 25 11:25:31 np0005535469 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 25 11:25:31 np0005535469 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 13.700s CPU time.
Nov 25 11:25:31 np0005535469 systemd-machined[216343]: Machine qemu-4-instance-00000004 terminated.
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.553 254096 INFO nova.virt.libvirt.driver [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance destroyed successfully.#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.554 254096 DEBUG nova.objects.instance [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'resources' on Instance uuid 71cf0ae0-6191-4b64-9a81-a955d807ceb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.566 254096 DEBUG nova.virt.libvirt.vif [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(9),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-531690935',id=4,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=9,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:25:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-88lf5wc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:25:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=71cf0ae0-6191-4b64-9a81-a955d807ceb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.567 254096 DEBUG nova.network.os_vif_util [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.568 254096 DEBUG nova.network.os_vif_util [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.568 254096 DEBUG os_vif [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.571 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7adfcb53-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.580 254096 INFO os_vif [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33')#033[00m
Nov 25 11:25:31 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : haproxy version is 2.8.14-c23fe91
Nov 25 11:25:31 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : path to executable is /usr/sbin/haproxy
Nov 25 11:25:31 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [WARNING]  (271025) : Exiting Master process...
Nov 25 11:25:31 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [WARNING]  (271025) : Exiting Master process...
Nov 25 11:25:31 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [ALERT]    (271025) : Current worker (271027) exited with code 143 (Terminated)
Nov 25 11:25:31 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [WARNING]  (271025) : All workers exited. Exiting... (0)
Nov 25 11:25:31 np0005535469 systemd[1]: libpod-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e.scope: Deactivated successfully.
Nov 25 11:25:31 np0005535469 podman[271067]: 2025-11-25 16:25:31.612003919 +0000 UTC m=+0.054410791 container died c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.857 254096 DEBUG nova.compute.manager [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-unplugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG oslo_concurrency.lockutils [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG oslo_concurrency.lockutils [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG oslo_concurrency.lockutils [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG nova.compute.manager [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] No waiting events found dispatching network-vif-unplugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:25:31 np0005535469 nova_compute[254092]: 2025-11-25 16:25:31.859 254096 DEBUG nova.compute.manager [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-unplugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:25:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e-userdata-shm.mount: Deactivated successfully.
Nov 25 11:25:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3a5ecbe109b46af30ce696e568d65890d385bb2e36f742ce025f714d7cdb26e1-merged.mount: Deactivated successfully.
Nov 25 11:25:32 np0005535469 podman[271067]: 2025-11-25 16:25:32.00187171 +0000 UTC m=+0.444278582 container cleanup c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:25:32 np0005535469 systemd[1]: libpod-conmon-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e.scope: Deactivated successfully.
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:32 np0005535469 podman[271122]: 2025-11-25 16:25:32.088824896 +0000 UTC m=+0.051111863 container remove c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d0d960-4bee-42a8-9e56-4c4c4ee49bf5]: (4, ('Tue Nov 25 04:25:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e)\nc6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e\nTue Nov 25 04:25:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e)\nc6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.098 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5dca94b7-84a1-41fd-a1e2-b96cabe8e208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.100 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:32 np0005535469 kernel: tapdd4097f8-d0: left promiscuous mode
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c29476c9-ab07-4a13-b4dc-b3b62e093c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40584d9b-b8fc-4e94-9684-defc47cec3d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.139 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9286c903-c3e4-42b7-a328-9f0b004e7f5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[720c546e-bdd9-40db-b66c-7c958b8a0329]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437306, 'reachable_time': 16528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271137, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 systemd[1]: run-netns-ovnmeta\x2ddd4097f8\x2ddcdf\x2d451c\x2d8fbb\x2d2057e86e375d.mount: Deactivated successfully.
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.172 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:25:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.173 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c250be03-34aa-4250-8f5d-9f7c93c1d52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.428 254096 INFO nova.virt.libvirt.driver [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deleting instance files /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4_del#033[00m
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.430 254096 INFO nova.virt.libvirt.driver [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deletion of /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4_del complete#033[00m
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.486 254096 INFO nova.compute.manager [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.487 254096 DEBUG oslo.service.loopingcall [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.487 254096 DEBUG nova.compute.manager [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:25:32 np0005535469 nova_compute[254092]: 2025-11-25 16:25:32.487 254096 DEBUG nova.network.neutron [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:25:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.2 MiB/s wr, 46 op/s
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.612 254096 DEBUG nova.network.neutron [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.641 254096 DEBUG nova.compute.manager [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.641 254096 DEBUG oslo_concurrency.lockutils [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.641 254096 DEBUG oslo_concurrency.lockutils [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.642 254096 DEBUG oslo_concurrency.lockutils [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.642 254096 DEBUG nova.compute.manager [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] No waiting events found dispatching network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.642 254096 WARNING nova.compute.manager [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received unexpected event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.644 254096 INFO nova.compute.manager [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 2.16 seconds to deallocate network for instance.#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.694 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.695 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:34 np0005535469 nova_compute[254092]: 2025-11-25 16:25:34.773 254096 DEBUG oslo_concurrency.processutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 225 KiB/s rd, 306 KiB/s wr, 38 op/s
Nov 25 11:25:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046537941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.201 254096 DEBUG oslo_concurrency.processutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.207 254096 DEBUG nova.compute.provider_tree [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.228 254096 DEBUG nova.scheduler.client.report [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.270 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "2c261173-944d-4c35-8d16-b066436572bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.270 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.294 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.299 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.364 254096 DEBUG nova.compute.manager [req-3c791b89-86ee-45dd-b52f-a19df22003de req-fe0cfe79-4bba-4be0-ad13-74ed382e2f10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-deleted-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.384 254096 INFO nova.scheduler.client.report [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Deleted allocations for instance 71cf0ae0-6191-4b64-9a81-a955d807ceb4
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.442 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.443 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.450 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.450 254096 INFO nova.compute.claims [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.488 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.568 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868291142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:35 np0005535469 nova_compute[254092]: 2025-11-25 16:25:35.997 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.004 254096 DEBUG nova.compute.provider_tree [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.023 254096 DEBUG nova.scheduler.client.report [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.052 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.053 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.119 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.120 254096 DEBUG nova.network.neutron [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.138 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.153 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.154 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.167 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.187 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.286 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.288 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.289 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Creating image(s)
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.325 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.363 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.386 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.389 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.411 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.413 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.426 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.427 254096 INFO nova.compute.claims [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.446 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.446 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.447 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.448 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.470 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.473 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c261173-944d-4c35-8d16-b066436572bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.577 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.615 254096 DEBUG nova.network.neutron [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.616 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:25:36 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 11:25:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 307 KiB/s wr, 66 op/s
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.879 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c261173-944d-4c35-8d16-b066436572bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:36 np0005535469 nova_compute[254092]: 2025-11-25 16:25:36.929 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] resizing rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.015 254096 DEBUG nova.objects.instance [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c261173-944d-4c35-8d16-b066436572bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.029 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.030 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Ensure instance console log exists: /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.031 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.031 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.031 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.033 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.038 254096 WARNING nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.043 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.043 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:25:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298148670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.073 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.073 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.074 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.074 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.075 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.075 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.075 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.076 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.076 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.076 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.077 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.077 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.077 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.078 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.080 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.100 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.108 254096 DEBUG nova.compute.provider_tree [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.123 254096 DEBUG nova.scheduler.client.report [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.194 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.196 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.260 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.260 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.304 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.342 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.421 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.422 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.423 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Creating image(s)#033[00m
Nov 25 11:25:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.440 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.458 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.479 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.483 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358463041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.506 254096 DEBUG nova.policy [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '787cb8b4238c4926a4466f3421db09ef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95df0d15c889499aba411e805ea145a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.515 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.536 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.540 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.556 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.557 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.557 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.557 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.583 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:37 np0005535469 nova_compute[254092]: 2025-11-25 16:25:37.587 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951548492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.087 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.089 254096 DEBUG nova.objects.instance [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c261173-944d-4c35-8d16-b066436572bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.106 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <uuid>2c261173-944d-4c35-8d16-b066436572bb</uuid>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <name>instance-00000005</name>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerExternalEventsTest-server-583466621</nova:name>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:25:37</nova:creationTime>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:user uuid="c56561c97a2d48ffa5ee1c65800dc0fa">tempest-ServerExternalEventsTest-65175838-project-member</nova:user>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <nova:project uuid="b3a17f41085a44e38251177c55db1ed1">tempest-ServerExternalEventsTest-65175838</nova:project>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <entry name="serial">2c261173-944d-4c35-8d16-b066436572bb</entry>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <entry name="uuid">2c261173-944d-4c35-8d16-b066436572bb</entry>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2c261173-944d-4c35-8d16-b066436572bb_disk">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2c261173-944d-4c35-8d16-b066436572bb_disk.config">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/console.log" append="off"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:25:38 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:25:38 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:25:38 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:25:38 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.443 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.444 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.445 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Using config drive#033[00m
Nov 25 11:25:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 25 11:25:38 np0005535469 nova_compute[254092]: 2025-11-25 16:25:38.861 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:39 np0005535469 nova_compute[254092]: 2025-11-25 16:25:39.123 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Creating config drive at /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config#033[00m
Nov 25 11:25:39 np0005535469 nova_compute[254092]: 2025-11-25 16:25:39.129 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_hcf6r_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:39 np0005535469 nova_compute[254092]: 2025-11-25 16:25:39.256 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_hcf6r_" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:39 np0005535469 nova_compute[254092]: 2025-11-25 16:25:39.467 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:39 np0005535469 nova_compute[254092]: 2025-11-25 16:25:39.471 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config 2c261173-944d-4c35-8d16-b066436572bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.022 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Successfully created port: 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:25:40
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control']
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.266 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.317 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] resizing rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.646 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config 2c261173-944d-4c35-8d16-b066436572bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.647 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deleting local config drive /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config because it was imported into RBD.
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.653 254096 DEBUG nova.objects.instance [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.694 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:40 np0005535469 systemd-machined[216343]: New machine qemu-5-instance-00000005.
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.721 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.725 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.726 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.726 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:40 np0005535469 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.753 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.754 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 81 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.928 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.930 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.950 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:25:40 np0005535469 nova_compute[254092]: 2025-11-25 16:25:40.954 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.139 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087941.1388347, 2c261173-944d-4c35-8d16-b066436572bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.140 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] VM Resumed (Lifecycle Event)
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.146 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.146 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.167 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.169 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance spawned successfully.
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.170 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.173 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.300 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.300 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.301 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.301 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.302 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.303 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.472 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.472 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087941.1415298, 2c261173-944d-4c35-8d16-b066436572bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.473 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] VM Started (Lifecycle Event)
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.504 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.507 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.529 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.544 254096 INFO nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 5.26 seconds to spawn the instance on the hypervisor.
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.545 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.619 254096 INFO nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 6.21 seconds to build instance.
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.644 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.840 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.926 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.927 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Ensure instance console log exists: /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.928 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.928 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:41 np0005535469 nova_compute[254092]: 2025-11-25 16:25:41.929 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.071 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Successfully updated port: 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.087 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.087 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.088 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.412 254096 DEBUG nova.compute.manager [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.412 254096 DEBUG nova.compute.manager [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing instance network info cache due to event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.412 254096 DEBUG oslo_concurrency.lockutils [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:25:42 np0005535469 nova_compute[254092]: 2025-11-25 16:25:42.413 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:25:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 106 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Nov 25 11:25:43 np0005535469 nova_compute[254092]: 2025-11-25 16:25:43.812 254096 DEBUG nova.compute.manager [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:25:43 np0005535469 nova_compute[254092]: 2025-11-25 16:25:43.813 254096 DEBUG nova.compute.manager [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:25:43 np0005535469 nova_compute[254092]: 2025-11-25 16:25:43.813 254096 DEBUG oslo_concurrency.lockutils [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] Acquiring lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:25:43 np0005535469 nova_compute[254092]: 2025-11-25 16:25:43.813 254096 DEBUG oslo_concurrency.lockutils [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] Acquired lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:25:43 np0005535469 nova_compute[254092]: 2025-11-25 16:25:43.814 254096 DEBUG nova.network.neutron [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:25:43 np0005535469 nova_compute[254092]: 2025-11-25 16:25:43.997 254096 DEBUG nova.network.neutron [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.094 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "2c261173-944d-4c35-8d16-b066436572bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.094 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.094 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "2c261173-944d-4c35-8d16-b066436572bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.095 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.095 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.096 254096 INFO nova.compute.manager [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Terminating instance
Nov 25 11:25:44 np0005535469 nova_compute[254092]: 2025-11-25 16:25:44.096 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:25:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 106 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.042 254096 DEBUG nova.network.neutron [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.072 254096 DEBUG oslo_concurrency.lockutils [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] Releasing lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.073 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquired lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.073 254096 DEBUG nova.network.neutron [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.082 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.117 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.118 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance network_info: |[{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.118 254096 DEBUG oslo_concurrency.lockutils [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.119 254096 DEBUG nova.network.neutron [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.122 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start _get_guest_xml network_info=[{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.126 254096 WARNING nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.137 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.138 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.148 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.149 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.149 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.150 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:24:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='991195749',id=8,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-223579632',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.150 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.150 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.151 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.151 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.151 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.153 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.155 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135733257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.604 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.605 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:45 np0005535469 nova_compute[254092]: 2025-11-25 16:25:45.881 254096 DEBUG nova.network.neutron [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:25:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400848585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.022 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.044 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.047 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.201 254096 DEBUG nova.network.neutron [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.220 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Releasing lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.221 254096 DEBUG nova.compute.manager [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:25:46 np0005535469 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 25 11:25:46 np0005535469 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 5.476s CPU time.
Nov 25 11:25:46 np0005535469 systemd-machined[216343]: Machine qemu-5-instance-00000005 terminated.
Nov 25 11:25:46 np0005535469 podman[271929]: 2025-11-25 16:25:46.347873671 +0000 UTC m=+0.061776752 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:46 np0005535469 podman[271930]: 2025-11-25 16:25:46.36842325 +0000 UTC m=+0.082010313 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 11:25:46 np0005535469 podman[271931]: 2025-11-25 16:25:46.380539959 +0000 UTC m=+0.086835364 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.438 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance destroyed successfully.#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.438 254096 DEBUG nova.objects.instance [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lazy-loading 'resources' on Instance uuid 2c261173-944d-4c35-8d16-b066436572bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:25:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:25:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197307260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.468 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.470 254096 DEBUG nova.virt.libvirt.vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-423978472',id=6,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-236be33o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:25:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=4b2c6795-15b5-424c-b7c5-4b695a348f41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.470 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.471 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.472 254096 DEBUG nova.objects.instance [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.488 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <uuid>4b2c6795-15b5-424c-b7c5-4b695a348f41</uuid>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <name>instance-00000006</name>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-423978472</nova:name>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:25:45</nova:creationTime>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-223579632">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:ephemeral>1</nova:ephemeral>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:user uuid="787cb8b4238c4926a4466f3421db09ef">tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member</nova:user>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:project uuid="95df0d15c889499aba411e805ea145a5">tempest-ServersWithSpecificFlavorTestJSON-341738122</nova:project>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <nova:port uuid="05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <entry name="serial">4b2c6795-15b5-424c-b7c5-4b695a348f41</entry>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <entry name="uuid">4b2c6795-15b5-424c-b7c5-4b695a348f41</entry>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4b2c6795-15b5-424c-b7c5-4b695a348f41_disk">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <target dev="vdb" bus="virtio"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fb:ac:25"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <target dev="tap05cb1f2f-ce"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/console.log" append="off"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:25:46 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:25:46 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:25:46 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:25:46 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.490 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Preparing to wait for external event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.490 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.491 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.491 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.492 254096 DEBUG nova.virt.libvirt.vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-423978472',id=6,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-236be33o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:25:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=4b2c6795-15b5-424c-b7c5-4b695a348f41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.492 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.493 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.493 254096 DEBUG os_vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.495 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.495 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.499 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05cb1f2f-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.499 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05cb1f2f-ce, col_values=(('external_ids', {'iface-id': '05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:ac:25', 'vm-uuid': '4b2c6795-15b5-424c-b7c5-4b695a348f41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:46 np0005535469 NetworkManager[48891]: <info>  [1764087946.5015] manager: (tap05cb1f2f-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.508 254096 INFO os_vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce')#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.551 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087931.550435, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.551 254096 INFO nova.compute.manager [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.576 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.577 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.577 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.577 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No VIF found with MAC fa:16:3e:fb:ac:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.578 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Using config drive#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.594 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:46 np0005535469 nova_compute[254092]: 2025-11-25 16:25:46.599 254096 DEBUG nova.compute.manager [None req-7eee63e9-4bf5-46db-a0d7-0467786475cf - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 136 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.014 254096 INFO nova.virt.libvirt.driver [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deleting instance files /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb_del#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.015 254096 INFO nova.virt.libvirt.driver [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deletion of /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb_del complete#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.124 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Creating config drive at /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.133 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gpv9x0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.156 254096 INFO nova.compute.manager [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.157 254096 DEBUG oslo.service.loopingcall [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.157 254096 DEBUG nova.compute.manager [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.157 254096 DEBUG nova.network.neutron [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.192 254096 DEBUG nova.network.neutron [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updated VIF entry in instance network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.193 254096 DEBUG nova.network.neutron [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.207 254096 DEBUG oslo_concurrency.lockutils [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.259 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gpv9x0o" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.286 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.290 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.343 254096 DEBUG nova.network.neutron [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.370 254096 DEBUG nova.network.neutron [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.396 254096 INFO nova.compute.manager [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 0.24 seconds to deallocate network for instance.#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.414 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.415 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deleting local config drive /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config because it was imported into RBD.#033[00m
Nov 25 11:25:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:47 np0005535469 kernel: tap05cb1f2f-ce: entered promiscuous mode
Nov 25 11:25:47 np0005535469 NetworkManager[48891]: <info>  [1764087947.4619] manager: (tap05cb1f2f-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 11:25:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:47Z|00035|binding|INFO|Claiming lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for this chassis.
Nov 25 11:25:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:47Z|00036|binding|INFO|05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8: Claiming fa:16:3e:fb:ac:25 10.100.0.6
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 systemd-udevd[271959]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.470 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:ac:25 10.100.0.6'], port_security=['fa:16:3e:fb:ac:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b2c6795-15b5-424c-b7c5-4b695a348f41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.471 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d bound to our chassis#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.472 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd4097f8-dcdf-451c-8fbb-2057e86e375d#033[00m
Nov 25 11:25:47 np0005535469 NetworkManager[48891]: <info>  [1764087947.4778] device (tap05cb1f2f-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.478 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:47 np0005535469 NetworkManager[48891]: <info>  [1764087947.4798] device (tap05cb1f2f-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.478 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:47Z|00037|binding|INFO|Setting lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 ovn-installed in OVS
Nov 25 11:25:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:47Z|00038|binding|INFO|Setting lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 up in Southbound
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.487 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c997e0c8-5af4-4e23-9ade-6c4ae7613a8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.488 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd4097f8-d1 in ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.490 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd4097f8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d95897d-d6da-49a0-8a35-d5b5876034ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.491 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37cc2450-5225-4e3d-9202-0dc4fe7a444b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:47 np0005535469 systemd-machined[216343]: New machine qemu-6-instance-00000006.
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.503 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d69b81f3-23d6-4485-904a-1c40d5ddb2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.514 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2141af-cc82-46f0-9ccd-dc66583ae169]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.533 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.545 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b466f7-3d5e-44a1-9257-1d86609796c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.551 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[775822b3-e1bc-4a8e-8ead-2ea5bb2ef461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 NetworkManager[48891]: <info>  [1764087947.5525] manager: (tapdd4097f8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.582 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[712265ee-9224-433b-a2d6-4ce673d3216c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.585 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a00456f0-0be2-4011-8bff-fafbe4ea8d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.588 254096 DEBUG oslo_concurrency.processutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:47 np0005535469 NetworkManager[48891]: <info>  [1764087947.6118] device (tapdd4097f8-d0): carrier: link connected
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.616 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcce62a-93cf-46c4-a5ce-d462d5711357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e7efb996-00cb-4066-b320-e0b4d77d1152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440520, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272120, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.647 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03e5a1c8-8a2c-45b6-977a-6c640a89fbd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:9711'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440520, 'tstamp': 440520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272121, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.665 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5548975-2b95-445d-bffe-5e03d077df77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440520, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272122, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.695 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[69bb0f8b-8f04-47ec-b62b-4231db6b9349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.758 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87d9f297-0721-48a6-aab5-093ee8584293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.759 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.759 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.760 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd4097f8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:47 np0005535469 kernel: tapdd4097f8-d0: entered promiscuous mode
Nov 25 11:25:47 np0005535469 NetworkManager[48891]: <info>  [1764087947.7627] manager: (tapdd4097f8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.767 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd4097f8-d0, col_values=(('external_ids', {'iface-id': '65cb2392-e609-45e8-bc45-ba0ce2e7d527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:25:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:25:47Z|00039|binding|INFO|Releasing lport 65cb2392-e609-45e8-bc45-ba0ce2e7d527 from this chassis (sb_readonly=0)
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.773 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.775 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5327ea-c198-464a-94be-743c85e0cb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.776 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:25:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.776 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'env', 'PROCESS_TAG=haproxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd4097f8-dcdf-451c-8fbb-2057e86e375d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.957 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087947.9565694, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Started (Lifecycle Event)#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.985 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.989 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087947.9566867, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:25:47 np0005535469 nova_compute[254092]: 2025-11-25 16:25:47.989 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:25:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617436220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.015 254096 DEBUG oslo_concurrency.processutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.018 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.021 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.026 254096 DEBUG nova.compute.provider_tree [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.049 254096 DEBUG nova.scheduler.client.report [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.054 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.082 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.084 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.085 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.085 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.085 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.139 254096 INFO nova.scheduler.client.report [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Deleted allocations for instance 2c261173-944d-4c35-8d16-b066436572bb#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.151 254096 DEBUG nova.compute.manager [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.151 254096 DEBUG oslo_concurrency.lockutils [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.152 254096 DEBUG oslo_concurrency.lockutils [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.152 254096 DEBUG oslo_concurrency.lockutils [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.152 254096 DEBUG nova.compute.manager [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Processing event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.153 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.158 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087948.1584089, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.159 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.160 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.167 254096 INFO nova.virt.libvirt.driver [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance spawned successfully.#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.167 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:25:48 np0005535469 podman[272236]: 2025-11-25 16:25:48.187399562 +0000 UTC m=+0.069708158 container create 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.210 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.219 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:25:48 np0005535469 systemd[1]: Started libpod-conmon-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d.scope.
Nov 25 11:25:48 np0005535469 podman[272236]: 2025-11-25 16:25:48.139115589 +0000 UTC m=+0.021424195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.239 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.250 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.256 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.258 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.258 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.259 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:25:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d56c1463fe42daf52e3cc2c80f46ee13778cefe2ff5f841cf7e2212503eba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.259 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.260 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:25:48 np0005535469 podman[272236]: 2025-11-25 16:25:48.273404863 +0000 UTC m=+0.155713459 container init 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:25:48 np0005535469 podman[272236]: 2025-11-25 16:25:48.279211741 +0000 UTC m=+0.161520327 container start 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:25:48 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : New worker (272275) forked
Nov 25 11:25:48 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : Loading success.
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.321 254096 INFO nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 10.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.322 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.416 254096 INFO nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 12.16 seconds to build instance.#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.430 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1745681765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.620 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.620 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.621 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.781 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.783 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4673MB free_disk=59.94660568237305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 136 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 140 op/s
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.843 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 4b2c6795-15b5-424c-b7c5-4b695a348f41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:25:48 np0005535469 nova_compute[254092]: 2025-11-25 16:25:48.904 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:25:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:25:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731796822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:25:49 np0005535469 nova_compute[254092]: 2025-11-25 16:25:49.350 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:25:49 np0005535469 nova_compute[254092]: 2025-11-25 16:25:49.357 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:25:49 np0005535469 nova_compute[254092]: 2025-11-25 16:25:49.378 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:25:49 np0005535469 nova_compute[254092]: 2025-11-25 16:25:49.425 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:25:49 np0005535469 nova_compute[254092]: 2025-11-25 16:25:49.426 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:50 np0005535469 nova_compute[254092]: 2025-11-25 16:25:50.481 254096 DEBUG nova.compute.manager [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:50 np0005535469 nova_compute[254092]: 2025-11-25 16:25:50.483 254096 DEBUG oslo_concurrency.lockutils [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:25:50 np0005535469 nova_compute[254092]: 2025-11-25 16:25:50.484 254096 DEBUG oslo_concurrency.lockutils [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:25:50 np0005535469 nova_compute[254092]: 2025-11-25 16:25:50.484 254096 DEBUG oslo_concurrency.lockutils [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:25:50 np0005535469 nova_compute[254092]: 2025-11-25 16:25:50.485 254096 DEBUG nova.compute.manager [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] No waiting events found dispatching network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:25:50 np0005535469 nova_compute[254092]: 2025-11-25 16:25:50.485 254096 WARNING nova.compute.manager [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received unexpected event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:25:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 103 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004066625765123397 of space, bias 1.0, pg target 0.1219987729537019 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:25:51 np0005535469 nova_compute[254092]: 2025-11-25 16:25:51.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.426 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:52 np0005535469 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:25:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 204 op/s
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.590 254096 DEBUG nova.compute.manager [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.591 254096 DEBUG nova.compute.manager [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing instance network info cache due to event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.591 254096 DEBUG oslo_concurrency.lockutils [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.592 254096 DEBUG oslo_concurrency.lockutils [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:25:53 np0005535469 nova_compute[254092]: 2025-11-25 16:25:53.592 254096 DEBUG nova.network.neutron [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:25:54 np0005535469 nova_compute[254092]: 2025-11-25 16:25:54.126 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:25:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 25 11:25:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:25:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1942886310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:25:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:25:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1942886310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.245 254096 DEBUG nova.network.neutron [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updated VIF entry in instance network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.246 254096 DEBUG nova.network.neutron [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.269 254096 DEBUG oslo_concurrency.lockutils [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.269 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.269 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.270 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:25:56 np0005535469 nova_compute[254092]: 2025-11-25 16:25:56.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 25 11:25:57 np0005535469 nova_compute[254092]: 2025-11-25 16:25:57.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:25:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:25:58 np0005535469 nova_compute[254092]: 2025-11-25 16:25:58.594 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:25:58 np0005535469 nova_compute[254092]: 2025-11-25 16:25:58.611 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:25:58 np0005535469 nova_compute[254092]: 2025-11-25 16:25:58.612 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:25:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 102 op/s
Nov 25 11:26:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 95 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 923 KiB/s wr, 117 op/s
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.437 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087946.4354992, 2c261173-944d-4c35-8d16-b066436572bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.438 254096 INFO nova.compute.manager [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.487 254096 DEBUG nova.compute.manager [None req-2934ee08-b3a5-41c6-b3e8-5b4df85e7bdb - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.606 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.943 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.944 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:01 np0005535469 nova_compute[254092]: 2025-11-25 16:26:01.959 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.040 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.041 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.049 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.050 254096 INFO nova.compute.claims [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.207 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:02Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:ac:25 10.100.0.6
Nov 25 11:26:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:02Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:ac:25 10.100.0.6
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.263 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.263 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.278 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.346 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519270304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.650 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.655 254096 DEBUG nova.compute.provider_tree [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.678 254096 DEBUG nova.scheduler.client.report [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.707 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.707 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.710 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.717 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.717 254096 INFO nova.compute.claims [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.801 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.802 254096 DEBUG nova.network.neutron [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:26:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 103 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.915 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:26:02 np0005535469 nova_compute[254092]: 2025-11-25 16:26:02.949 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.097 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.172 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.173 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.174 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Creating image(s)
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.192 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.210 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.228 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.231 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.249 254096 DEBUG nova.network.neutron [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.250 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.289 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.290 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.290 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.291 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.308 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.311 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0ce39e6-663b-4ff2-84e5-98aa54955701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301432048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.461 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.466 254096 DEBUG nova.compute.provider_tree [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.488 254096 DEBUG nova.scheduler.client.report [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.551 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.553 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.637 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.637 254096 DEBUG nova.network.neutron [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.671 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.691 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.706 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0ce39e6-663b-4ff2-84e5-98aa54955701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.765 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] resizing rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.853 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.854 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.855 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Creating image(s)
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.877 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.896 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.925 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.930 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.959 254096 DEBUG nova.objects.instance [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'migration_context' on Instance uuid e0ce39e6-663b-4ff2-84e5-98aa54955701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.979 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.980 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Ensure instance console log exists: /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.981 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.981 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.982 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.984 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.989 254096 WARNING nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.992 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.993 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.993 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:03 np0005535469 nova_compute[254092]: 2025-11-25 16:26:03.994 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.020 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.024 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.054 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.055 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.058 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.059 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.060 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.060 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.061 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.062 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.062 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.063 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.063 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.064 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.064 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.065 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.065 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.065 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.071 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539899894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.510 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.537 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.556 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.560 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.608 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] resizing rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:26:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 103 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.866 254096 DEBUG nova.objects.instance [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lazy-loading 'migration_context' on Instance uuid e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.883 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.883 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Ensure instance console log exists: /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.884 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.884 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:04 np0005535469 nova_compute[254092]: 2025-11-25 16:26:04.885 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510083338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.024 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.026 254096 DEBUG nova.objects.instance [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'pci_devices' on Instance uuid e0ce39e6-663b-4ff2-84e5-98aa54955701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.040 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <uuid>e0ce39e6-663b-4ff2-84e5-98aa54955701</uuid>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <name>instance-00000007</name>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1320978474</nova:name>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:26:03</nova:creationTime>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:user uuid="820b9ab982364678b8e75b2c9cc4cfed">tempest-ServersAdminNegativeTestJSON-1866473487-project-member</nova:user>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <nova:project uuid="511bc9af98844c8995c27adbee1a3d4c">tempest-ServersAdminNegativeTestJSON-1866473487</nova:project>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <entry name="serial">e0ce39e6-663b-4ff2-84e5-98aa54955701</entry>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <entry name="uuid">e0ce39e6-663b-4ff2-84e5-98aa54955701</entry>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e0ce39e6-663b-4ff2-84e5-98aa54955701_disk">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/console.log" append="off"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:26:05 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:26:05 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:26:05 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:26:05 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.141 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.142 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.142 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Using config drive#033[00m
Nov 25 11:26:05 np0005535469 nova_compute[254092]: 2025-11-25 16:26:05.161 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.110 254096 DEBUG nova.network.neutron [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.111 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.112 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.116 254096 WARNING nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.121 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.121 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.127 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.129 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000459050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.554 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.572 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.576 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.812 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Creating config drive at /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.817 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprhbazlr9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 215 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 5.7 MiB/s wr, 187 op/s
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.941 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprhbazlr9" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280467783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.972 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.976 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.996 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:06 np0005535469 nova_compute[254092]: 2025-11-25 16:26:06.998 254096 DEBUG nova.objects.instance [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lazy-loading 'pci_devices' on Instance uuid e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.014 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <uuid>e9e96b2e-62f4-4f02-96ec-5306f9e39ca8</uuid>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <name>instance-00000008</name>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1796428729</nova:name>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:26:06</nova:creationTime>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:user uuid="65b0c04bdd69400aa1e7ba74ddab759e">tempest-ServerDiagnosticsNegativeTest-1808094176-project-member</nova:user>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <nova:project uuid="e8bfbfb2d0e640eea61c1a51c8c5f71e">tempest-ServerDiagnosticsNegativeTest-1808094176</nova:project>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <entry name="serial">e9e96b2e-62f4-4f02-96ec-5306f9e39ca8</entry>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <entry name="uuid">e9e96b2e-62f4-4f02-96ec-5306f9e39ca8</entry>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/console.log" append="off"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:26:07 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:26:07 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:26:07 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:26:07 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:07.040 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:26:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:07.042 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.061 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.061 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.062 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Using config drive#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.081 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.131 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.132 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deleting local config drive /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config because it was imported into RBD.#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:07 np0005535469 systemd-machined[216343]: New machine qemu-7-instance-00000007.
Nov 25 11:26:07 np0005535469 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.295 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Creating config drive at /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.303 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ygwn9w9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.431 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ygwn9w9" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.461 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.466 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.554 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087967.5539758, e0ce39e6-663b-4ff2-84e5-98aa54955701 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.555 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.558 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.559 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.562 254096 INFO nova.virt.libvirt.driver [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance spawned successfully.#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.563 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.575 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.584 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.590 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.590 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.591 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.591 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.592 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.593 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.636 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.636 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087967.558507, e0ce39e6-663b-4ff2-84e5-98aa54955701 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.637 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] VM Started (Lifecycle Event)#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.640 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.641 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deleting local config drive /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config because it was imported into RBD.#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.681 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.691 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.697 254096 INFO nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 4.52 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.698 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:07 np0005535469 systemd-machined[216343]: New machine qemu-8-instance-00000008.
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.732 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:07 np0005535469 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.782 254096 INFO nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 5.78 seconds to build instance.
Nov 25 11:26:07 np0005535469 nova_compute[254092]: 2025-11-25 16:26:07.826 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.169 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087968.1687279, e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.169 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] VM Resumed (Lifecycle Event)
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.172 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.173 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.176 254096 INFO nova.virt.libvirt.driver [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance spawned successfully.
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.176 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.187 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.193 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.196 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.197 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.197 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.197 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.198 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.198 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.223 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.224 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087968.1698375, e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.224 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] VM Started (Lifecycle Event)
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.247 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.250 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.280 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.281 254096 INFO nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 4.43 seconds to spawn the instance on the hypervisor.
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.282 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.344 254096 INFO nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 6.02 seconds to build instance.
Nov 25 11:26:08 np0005535469 nova_compute[254092]: 2025-11-25 16:26:08.359 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 215 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 5.7 MiB/s wr, 187 op/s
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.338 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.338 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.339 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.339 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.339 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.340 254096 INFO nova.compute.manager [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Terminating instance
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.341 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "refresh_cache-e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.341 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquired lock "refresh_cache-e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.342 254096 DEBUG nova.network.neutron [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:26:09 np0005535469 nova_compute[254092]: 2025-11-25 16:26:09.874 254096 DEBUG nova.network.neutron [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.217 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.219 254096 INFO nova.compute.manager [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Terminating instance
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.221 254096 DEBUG nova.compute.manager [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.276 254096 DEBUG nova.network.neutron [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.288 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Releasing lock "refresh_cache-e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.289 254096 DEBUG nova.compute.manager [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:26:10 np0005535469 kernel: tap05cb1f2f-ce (unregistering): left promiscuous mode
Nov 25 11:26:10 np0005535469 NetworkManager[48891]: <info>  [1764087970.3112] device (tap05cb1f2f-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:10Z|00040|binding|INFO|Releasing lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 from this chassis (sb_readonly=0)
Nov 25 11:26:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:10Z|00041|binding|INFO|Setting lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 down in Southbound
Nov 25 11:26:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:10Z|00042|binding|INFO|Removing iface tap05cb1f2f-ce ovn-installed in OVS
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.330 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:ac:25 10.100.0.6'], port_security=['fa:16:3e:fb:ac:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b2c6795-15b5-424c-b7c5-4b695a348f41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.331 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d unbound from our chassis
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.332 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd4097f8-dcdf-451c-8fbb-2057e86e375d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.332 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[335251c7-440d-474d-b030-7278a0479d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.333 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace which is not needed anymore
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:10 np0005535469 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 25 11:26:10 np0005535469 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.172s CPU time.
Nov 25 11:26:10 np0005535469 systemd-machined[216343]: Machine qemu-6-instance-00000006 terminated.
Nov 25 11:26:10 np0005535469 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 25 11:26:10 np0005535469 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 2.564s CPU time.
Nov 25 11:26:10 np0005535469 systemd-machined[216343]: Machine qemu-8-instance-00000008 terminated.
Nov 25 11:26:10 np0005535469 NetworkManager[48891]: <info>  [1764087970.4427] manager: (tap05cb1f2f-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.459 254096 INFO nova.virt.libvirt.driver [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance destroyed successfully.
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.461 254096 DEBUG nova.objects.instance [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'resources' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.476 254096 DEBUG nova.virt.libvirt.vif [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-423978472',id=6,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:25:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-236be33o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:25:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=4b2c6795-15b5-424c-b7c5-4b695a348f41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.477 254096 DEBUG nova.network.os_vif_util [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.477 254096 DEBUG nova.network.os_vif_util [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.478 254096 DEBUG os_vif [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05cb1f2f-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.488 254096 INFO os_vif [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce')
Nov 25 11:26:10 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : haproxy version is 2.8.14-c23fe91
Nov 25 11:26:10 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : path to executable is /usr/sbin/haproxy
Nov 25 11:26:10 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [WARNING]  (272273) : Exiting Master process...
Nov 25 11:26:10 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [ALERT]    (272273) : Current worker (272275) exited with code 143 (Terminated)
Nov 25 11:26:10 np0005535469 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [WARNING]  (272273) : All workers exited. Exiting... (0)
Nov 25 11:26:10 np0005535469 systemd[1]: libpod-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d.scope: Deactivated successfully.
Nov 25 11:26:10 np0005535469 podman[273063]: 2025-11-25 16:26:10.511187592 +0000 UTC m=+0.070277443 container died 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.524 254096 INFO nova.virt.libvirt.driver [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance destroyed successfully.#033[00m
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.525 254096 DEBUG nova.objects.instance [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lazy-loading 'resources' on Instance uuid e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d-userdata-shm.mount: Deactivated successfully.
Nov 25 11:26:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-565d56c1463fe42daf52e3cc2c80f46ee13778cefe2ff5f841cf7e2212503eba-merged.mount: Deactivated successfully.
Nov 25 11:26:10 np0005535469 podman[273063]: 2025-11-25 16:26:10.556620818 +0000 UTC m=+0.115710669 container cleanup 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:26:10 np0005535469 systemd[1]: libpod-conmon-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d.scope: Deactivated successfully.
Nov 25 11:26:10 np0005535469 podman[273136]: 2025-11-25 16:26:10.652811077 +0000 UTC m=+0.066858921 container remove 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.662 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8649169c-4b61-4ad2-b9b7-14a4368ed177]: (4, ('Tue Nov 25 04:26:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d)\n7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d\nTue Nov 25 04:26:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d)\n7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.665 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c92dff68-9acc-40f8-91e5-8a505bb8877c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.666 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:10 np0005535469 kernel: tapdd4097f8-d0: left promiscuous mode
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.678 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c1fd12-264f-4f6c-a5f9-5373c6725963]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 nova_compute[254092]: 2025-11-25 16:26:10.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f58de09-0dee-4237-9063-81558d16e37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.693 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfe09fb-4926-4119-9d33-9377cec45126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2043c817-0c18-47cd-91b3-4f305d5212ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440513, 'reachable_time': 26511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273151, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 systemd[1]: run-netns-ovnmeta\x2ddd4097f8\x2ddcdf\x2d451c\x2d8fbb\x2d2057e86e375d.mount: Deactivated successfully.
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.719 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.719 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4a15ec11-9fca-4e2a-81b2-f5b4711721fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 216 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 302 op/s
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.058 254096 INFO nova.virt.libvirt.driver [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deleting instance files /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_del#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.059 254096 INFO nova.virt.libvirt.driver [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deletion of /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_del complete#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.133 254096 INFO nova.compute.manager [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.134 254096 DEBUG oslo.service.loopingcall [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.134 254096 DEBUG nova.compute.manager [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.135 254096 DEBUG nova.network.neutron [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.171 254096 INFO nova.virt.libvirt.driver [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deleting instance files /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41_del#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.172 254096 INFO nova.virt.libvirt.driver [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deletion of /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41_del complete#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.213 254096 DEBUG nova.compute.manager [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-unplugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.213 254096 DEBUG oslo_concurrency.lockutils [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.214 254096 DEBUG oslo_concurrency.lockutils [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.214 254096 DEBUG oslo_concurrency.lockutils [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.214 254096 DEBUG nova.compute.manager [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] No waiting events found dispatching network-vif-unplugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.215 254096 DEBUG nova.compute.manager [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-unplugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.229 254096 INFO nova.compute.manager [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.230 254096 DEBUG oslo.service.loopingcall [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.230 254096 DEBUG nova.compute.manager [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.231 254096 DEBUG nova.network.neutron [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.308 254096 DEBUG nova.network.neutron [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.324 254096 DEBUG nova.network.neutron [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.334 254096 INFO nova.compute.manager [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 0.20 seconds to deallocate network for instance.#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.378 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.379 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.481 254096 DEBUG oslo_concurrency.processutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.856 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "25540ca1-3029-48b2-8ab3-c800a16c8175" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.857 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.886 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:26:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279493066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.933 254096 DEBUG oslo_concurrency.processutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.952 254096 DEBUG nova.compute.provider_tree [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.971 254096 DEBUG nova.scheduler.client.report [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:11 np0005535469 nova_compute[254092]: 2025-11-25 16:26:11.982 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.045 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.049 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.059 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.059 254096 INFO nova.compute.claims [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.080 254096 INFO nova.scheduler.client.report [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Deleted allocations for instance e9e96b2e-62f4-4f02-96ec-5306f9e39ca8#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.170 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.237 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.638 254096 DEBUG nova.network.neutron [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718175817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.664 254096 INFO nova.compute.manager [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 1.43 seconds to deallocate network for instance.#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.678 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.684 254096 DEBUG nova.compute.provider_tree [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.705 254096 DEBUG nova.scheduler.client.report [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.720 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.734 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.735 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.743 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.794 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.796 254096 DEBUG nova.network.neutron [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:26:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.8 MiB/s wr, 323 op/s
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.826 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.853 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:26:12 np0005535469 nova_compute[254092]: 2025-11-25 16:26:12.894 254096 DEBUG oslo_concurrency.processutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.896299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087972896335, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2099, "num_deletes": 251, "total_data_size": 3406324, "memory_usage": 3457280, "flush_reason": "Manual Compaction"}
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087972934026, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3317966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21122, "largest_seqno": 23220, "table_properties": {"data_size": 3308560, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19580, "raw_average_key_size": 20, "raw_value_size": 3289453, "raw_average_value_size": 3394, "num_data_blocks": 266, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087762, "oldest_key_time": 1764087762, "file_creation_time": 1764087972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 37798 microseconds, and 12014 cpu microseconds.
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.934090) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3317966 bytes OK
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.934116) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.938087) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.938143) EVENT_LOG_v1 {"time_micros": 1764087972938130, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.938169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3397464, prev total WAL file size 3397464, number of live WAL files 2.
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.939302) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3240KB)], [50(7616KB)]
Nov 25 11:26:12 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087972939345, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11117191, "oldest_snapshot_seqno": -1}
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.011 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.016 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.017 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Creating image(s)#033[00m
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4889 keys, 9374666 bytes, temperature: kUnknown
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087973020108, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9374666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9339338, "index_size": 22034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 120310, "raw_average_key_size": 24, "raw_value_size": 9248326, "raw_average_value_size": 1891, "num_data_blocks": 926, "num_entries": 4889, "num_filter_entries": 4889, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.020368) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9374666 bytes
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.021596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.5 rd, 115.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5407, records dropped: 518 output_compression: NoCompression
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.021615) EVENT_LOG_v1 {"time_micros": 1764087973021606, "job": 26, "event": "compaction_finished", "compaction_time_micros": 80865, "compaction_time_cpu_micros": 22878, "output_level": 6, "num_output_files": 1, "total_output_size": 9374666, "num_input_records": 5407, "num_output_records": 4889, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087973022443, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087973023951, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.939237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.045 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.070 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.100 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.105 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.182 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.183 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.186 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.186 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.210 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.213 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 25540ca1-3029-48b2-8ab3-c800a16c8175_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005132630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.379 254096 DEBUG oslo_concurrency.processutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.386 254096 DEBUG nova.compute.provider_tree [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.402 254096 DEBUG nova.scheduler.client.report [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.422 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.482 254096 DEBUG nova.network.neutron [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.483 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.485 254096 INFO nova.scheduler.client.report [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Deleted allocations for instance 4b2c6795-15b5-424c-b7c5-4b695a348f41#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.510 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 25540ca1-3029-48b2-8ab3-c800a16c8175_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.585 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] resizing rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:26:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:13.596 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.623 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.678 254096 DEBUG nova.objects.instance [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'migration_context' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.689 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.690 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Ensure instance console log exists: /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.690 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.690 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.691 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.692 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.697 254096 WARNING nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.701 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.702 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.705 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.705 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.706 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.706 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.706 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.708 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.708 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.708 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.709 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.709 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.712 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.739 254096 DEBUG nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.740 254096 DEBUG oslo_concurrency.lockutils [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.740 254096 DEBUG oslo_concurrency.lockutils [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 DEBUG oslo_concurrency.lockutils [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 DEBUG nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] No waiting events found dispatching network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 WARNING nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received unexpected event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:26:13 np0005535469 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 DEBUG nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-deleted-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796005807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.206 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.230 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.234 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767150283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.733 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.735 254096 DEBUG nova.objects.instance [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'pci_devices' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.751 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <uuid>25540ca1-3029-48b2-8ab3-c800a16c8175</uuid>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <name>instance-00000009</name>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-4808240</nova:name>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:26:13</nova:creationTime>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:user uuid="820b9ab982364678b8e75b2c9cc4cfed">tempest-ServersAdminNegativeTestJSON-1866473487-project-member</nova:user>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <nova:project uuid="511bc9af98844c8995c27adbee1a3d4c">tempest-ServersAdminNegativeTestJSON-1866473487</nova:project>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <entry name="serial">25540ca1-3029-48b2-8ab3-c800a16c8175</entry>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <entry name="uuid">25540ca1-3029-48b2-8ab3-c800a16c8175</entry>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/25540ca1-3029-48b2-8ab3-c800a16c8175_disk">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/console.log" append="off"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:26:14 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:26:14 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:26:14 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:26:14 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 316 op/s
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.829 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.829 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.830 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Using config drive#033[00m
Nov 25 11:26:14 np0005535469 nova_compute[254092]: 2025-11-25 16:26:14.871 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:26:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f9496648-21d8-46f8-9a07-7aff6cca85b9 does not exist
Nov 25 11:26:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9b0474e6-86bf-435b-9a50-1471161836ab does not exist
Nov 25 11:26:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 53b3a34d-41e3-44a4-926b-4c4a96c36d4d does not exist
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:26:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:26:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:26:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.192 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Creating config drive at /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.199 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10rbe7lh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.325 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10rbe7lh" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.347 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.349 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.492 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.492 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deleting local config drive /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config because it was imported into RBD.#033[00m
Nov 25 11:26:15 np0005535469 systemd-machined[216343]: New machine qemu-9-instance-00000009.
Nov 25 11:26:15 np0005535469 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.589215109 +0000 UTC m=+0.048621325 container create 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 11:26:15 np0005535469 systemd[1]: Started libpod-conmon-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope.
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.574013575 +0000 UTC m=+0.033419811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:26:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.685492209 +0000 UTC m=+0.144898455 container init 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.6917799 +0000 UTC m=+0.151186116 container start 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.694699379 +0000 UTC m=+0.154105625 container attach 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:26:15 np0005535469 systemd[1]: libpod-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope: Deactivated successfully.
Nov 25 11:26:15 np0005535469 magical_pike[273806]: 167 167
Nov 25 11:26:15 np0005535469 conmon[273806]: conmon 8a3b3d90fd44f2d70bf8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope/container/memory.events
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.698702879 +0000 UTC m=+0.158109095 container died 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:26:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-95a3ad12d5de1a989277333a70de65cb8078a9d27f91cb250dd33e640af23259-merged.mount: Deactivated successfully.
Nov 25 11:26:15 np0005535469 podman[273781]: 2025-11-25 16:26:15.73550963 +0000 UTC m=+0.194915846 container remove 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:26:15 np0005535469 systemd[1]: libpod-conmon-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope: Deactivated successfully.
Nov 25 11:26:15 np0005535469 podman[273870]: 2025-11-25 16:26:15.898612949 +0000 UTC m=+0.039851486 container create 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.901 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087975.9009004, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.902 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.904 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.905 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.908 254096 INFO nova.virt.libvirt.driver [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance spawned successfully.#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.909 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.922 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.932 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.941 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.942 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.942 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.943 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.943 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.943 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:15 np0005535469 systemd[1]: Started libpod-conmon-948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef.scope.
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.968 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.969 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087975.9017322, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:15 np0005535469 nova_compute[254092]: 2025-11-25 16:26:15.969 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Started (Lifecycle Event)#033[00m
Nov 25 11:26:15 np0005535469 podman[273870]: 2025-11-25 16:26:15.882284344 +0000 UTC m=+0.023522911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:26:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.000 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.005 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.007 254096 INFO nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 3.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.007 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:16 np0005535469 podman[273870]: 2025-11-25 16:26:16.015932732 +0000 UTC m=+0.157171279 container init 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:26:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:26:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:26:16 np0005535469 podman[273870]: 2025-11-25 16:26:16.022878641 +0000 UTC m=+0.164117178 container start 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:26:16 np0005535469 podman[273870]: 2025-11-25 16:26:16.02543273 +0000 UTC m=+0.166671267 container attach 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.043 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.091 254096 INFO nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 4.15 seconds to build instance.#033[00m
Nov 25 11:26:16 np0005535469 nova_compute[254092]: 2025-11-25 16:26:16.114 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:16 np0005535469 podman[273894]: 2025-11-25 16:26:16.648950799 +0000 UTC m=+0.057455514 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:26:16 np0005535469 podman[273893]: 2025-11-25 16:26:16.663885105 +0000 UTC m=+0.072420371 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:26:16 np0005535469 podman[273895]: 2025-11-25 16:26:16.68353554 +0000 UTC m=+0.090503774 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:26:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 407 op/s
Nov 25 11:26:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:17.043 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:17 np0005535469 jovial_mendel[273888]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:26:17 np0005535469 jovial_mendel[273888]: --> relative data size: 1.0
Nov 25 11:26:17 np0005535469 jovial_mendel[273888]: --> All data devices are unavailable
Nov 25 11:26:17 np0005535469 systemd[1]: libpod-948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef.scope: Deactivated successfully.
Nov 25 11:26:17 np0005535469 podman[273870]: 2025-11-25 16:26:17.091686678 +0000 UTC m=+1.232925205 container died 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:26:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525-merged.mount: Deactivated successfully.
Nov 25 11:26:17 np0005535469 nova_compute[254092]: 2025-11-25 16:26:17.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:17 np0005535469 podman[273870]: 2025-11-25 16:26:17.150692953 +0000 UTC m=+1.291931490 container remove 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:26:17 np0005535469 systemd[1]: libpod-conmon-948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef.scope: Deactivated successfully.
Nov 25 11:26:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.717209861 +0000 UTC m=+0.036805333 container create 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:26:17 np0005535469 systemd[1]: Started libpod-conmon-30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd.scope.
Nov 25 11:26:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.787956457 +0000 UTC m=+0.107551959 container init 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.796040907 +0000 UTC m=+0.115636379 container start 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.701801952 +0000 UTC m=+0.021397444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.799021458 +0000 UTC m=+0.118616930 container attach 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:26:17 np0005535469 pedantic_chatelet[274144]: 167 167
Nov 25 11:26:17 np0005535469 systemd[1]: libpod-30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd.scope: Deactivated successfully.
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.802763259 +0000 UTC m=+0.122358731 container died 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:26:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-41fa1edb85ed9328c94f361cb3c43dcd9cb90f741a7319f0bc423c598f553c9c-merged.mount: Deactivated successfully.
Nov 25 11:26:17 np0005535469 podman[274128]: 2025-11-25 16:26:17.861086777 +0000 UTC m=+0.180682249 container remove 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:26:17 np0005535469 systemd[1]: libpod-conmon-30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd.scope: Deactivated successfully.
Nov 25 11:26:18 np0005535469 podman[274168]: 2025-11-25 16:26:18.047217242 +0000 UTC m=+0.060745754 container create 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:26:18 np0005535469 systemd[1]: Started libpod-conmon-79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6.scope.
Nov 25 11:26:18 np0005535469 podman[274168]: 2025-11-25 16:26:18.019418825 +0000 UTC m=+0.032947377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:26:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:18 np0005535469 podman[274168]: 2025-11-25 16:26:18.128619997 +0000 UTC m=+0.142148519 container init 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:26:18 np0005535469 podman[274168]: 2025-11-25 16:26:18.136282736 +0000 UTC m=+0.149811248 container start 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 25 11:26:18 np0005535469 podman[274168]: 2025-11-25 16:26:18.139189955 +0000 UTC m=+0.152718457 container attach 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:26:18 np0005535469 nova_compute[254092]: 2025-11-25 16:26:18.291 254096 DEBUG nova.objects.instance [None req-bc560204-8aeb-47da-b23d-3b7879e6f0ba 002b88c8dbb14a3b9516bcf2c1ec67e4 9b9d5cbb2ff14e48aa429ebc506b3a74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:26:18 np0005535469 nova_compute[254092]: 2025-11-25 16:26:18.315 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087978.3094976, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:26:18 np0005535469 nova_compute[254092]: 2025-11-25 16:26:18.315 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Paused (Lifecycle Event)
Nov 25 11:26:18 np0005535469 nova_compute[254092]: 2025-11-25 16:26:18.333 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:18 np0005535469 nova_compute[254092]: 2025-11-25 16:26:18.337 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:26:18 np0005535469 nova_compute[254092]: 2025-11-25 16:26:18.371 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 11:26:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]: {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:    "0": [
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:        {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "devices": [
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "/dev/loop3"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            ],
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_name": "ceph_lv0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_size": "21470642176",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "name": "ceph_lv0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "tags": {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cluster_name": "ceph",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.crush_device_class": "",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.encrypted": "0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osd_id": "0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.type": "block",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.vdo": "0"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            },
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "type": "block",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "vg_name": "ceph_vg0"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:        }
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:    ],
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:    "1": [
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:        {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "devices": [
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "/dev/loop4"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            ],
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_name": "ceph_lv1",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_size": "21470642176",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "name": "ceph_lv1",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "tags": {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cluster_name": "ceph",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.crush_device_class": "",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.encrypted": "0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osd_id": "1",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.type": "block",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.vdo": "0"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            },
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "type": "block",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "vg_name": "ceph_vg1"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:        }
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:    ],
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:    "2": [
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:        {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "devices": [
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "/dev/loop5"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            ],
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_name": "ceph_lv2",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_size": "21470642176",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "name": "ceph_lv2",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "tags": {
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.cluster_name": "ceph",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.crush_device_class": "",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.encrypted": "0",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osd_id": "2",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.type": "block",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:                "ceph.vdo": "0"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            },
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "type": "block",
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:            "vg_name": "ceph_vg2"
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:        }
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]:    ]
Nov 25 11:26:18 np0005535469 zealous_grothendieck[274185]: }
Nov 25 11:26:18 np0005535469 systemd[1]: libpod-79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6.scope: Deactivated successfully.
Nov 25 11:26:18 np0005535469 podman[274168]: 2025-11-25 16:26:18.924058765 +0000 UTC m=+0.937587277 container died 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:26:19 np0005535469 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 11:26:19 np0005535469 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 2.882s CPU time.
Nov 25 11:26:19 np0005535469 systemd-machined[216343]: Machine qemu-9-instance-00000009 terminated.
Nov 25 11:26:19 np0005535469 nova_compute[254092]: 2025-11-25 16:26:19.258 254096 DEBUG nova.compute.manager [None req-bc560204-8aeb-47da-b23d-3b7879e6f0ba 002b88c8dbb14a3b9516bcf2c1ec67e4 9b9d5cbb2ff14e48aa429ebc506b3a74 - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08-merged.mount: Deactivated successfully.
Nov 25 11:26:19 np0005535469 podman[274168]: 2025-11-25 16:26:19.611624007 +0000 UTC m=+1.625152519 container remove 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:26:19 np0005535469 systemd[1]: libpod-conmon-79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6.scope: Deactivated successfully.
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.21880102 +0000 UTC m=+0.045549690 container create 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:26:20 np0005535469 systemd[1]: Started libpod-conmon-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope.
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.192422263 +0000 UTC m=+0.019170963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:26:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.316783937 +0000 UTC m=+0.143532667 container init 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.324766834 +0000 UTC m=+0.151515504 container start 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.327743705 +0000 UTC m=+0.154492365 container attach 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:26:20 np0005535469 tender_allen[274366]: 167 167
Nov 25 11:26:20 np0005535469 systemd[1]: libpod-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope: Deactivated successfully.
Nov 25 11:26:20 np0005535469 conmon[274366]: conmon 151c14e1faeb67f9ec44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope/container/memory.events
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.331682423 +0000 UTC m=+0.158431093 container died 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:26:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1c2e79a6d24c61f4120a499bc58c6468acbd8d0ed6a1fecf4a29a995d55d6346-merged.mount: Deactivated successfully.
Nov 25 11:26:20 np0005535469 podman[274350]: 2025-11-25 16:26:20.364161206 +0000 UTC m=+0.190909876 container remove 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:26:20 np0005535469 systemd[1]: libpod-conmon-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope: Deactivated successfully.
Nov 25 11:26:20 np0005535469 podman[274390]: 2025-11-25 16:26:20.526305619 +0000 UTC m=+0.041968123 container create d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:26:20 np0005535469 nova_compute[254092]: 2025-11-25 16:26:20.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:20 np0005535469 systemd[1]: Started libpod-conmon-d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd.scope.
Nov 25 11:26:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:20 np0005535469 podman[274390]: 2025-11-25 16:26:20.507721364 +0000 UTC m=+0.023383888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:26:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:20 np0005535469 podman[274390]: 2025-11-25 16:26:20.615394844 +0000 UTC m=+0.131057358 container init d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:26:20 np0005535469 podman[274390]: 2025-11-25 16:26:20.621724636 +0000 UTC m=+0.137387140 container start d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:26:20 np0005535469 podman[274390]: 2025-11-25 16:26:20.625732615 +0000 UTC m=+0.141395109 container attach d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:26:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 144 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.0 MiB/s wr, 323 op/s
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]: {
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "osd_id": 1,
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "type": "bluestore"
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:    },
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "osd_id": 2,
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "type": "bluestore"
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:    },
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "osd_id": 0,
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:        "type": "bluestore"
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]:    }
Nov 25 11:26:21 np0005535469 compassionate_mirzakhani[274406]: }
Nov 25 11:26:21 np0005535469 systemd[1]: libpod-d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd.scope: Deactivated successfully.
Nov 25 11:26:21 np0005535469 podman[274390]: 2025-11-25 16:26:21.581002622 +0000 UTC m=+1.096665126 container died d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:26:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e-merged.mount: Deactivated successfully.
Nov 25 11:26:21 np0005535469 podman[274390]: 2025-11-25 16:26:21.634283702 +0000 UTC m=+1.149946206 container remove d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:26:21 np0005535469 systemd[1]: libpod-conmon-d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd.scope: Deactivated successfully.
Nov 25 11:26:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:26:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:26:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:26:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:26:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 15ad8e01-bfac-4eb8-b1b1-eaee2825df45 does not exist
Nov 25 11:26:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6d4df03b-122f-4ae2-9440-2db2ba8163ab does not exist
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.789 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "25540ca1-3029-48b2-8ab3-c800a16c8175" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.790 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.790 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "25540ca1-3029-48b2-8ab3-c800a16c8175-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.791 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.791 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.792 254096 INFO nova.compute.manager [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Terminating instance#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.793 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "refresh_cache-25540ca1-3029-48b2-8ab3-c800a16c8175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.793 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquired lock "refresh_cache-25540ca1-3029-48b2-8ab3-c800a16c8175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.794 254096 DEBUG nova.network.neutron [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:26:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:26:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:26:21 np0005535469 nova_compute[254092]: 2025-11-25 16:26:21.998 254096 DEBUG nova.network.neutron [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:26:22 np0005535469 nova_compute[254092]: 2025-11-25 16:26:22.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:22 np0005535469 nova_compute[254092]: 2025-11-25 16:26:22.463 254096 DEBUG nova.network.neutron [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:22 np0005535469 nova_compute[254092]: 2025-11-25 16:26:22.487 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Releasing lock "refresh_cache-25540ca1-3029-48b2-8ab3-c800a16c8175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:26:22 np0005535469 nova_compute[254092]: 2025-11-25 16:26:22.487 254096 DEBUG nova.compute.manager [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:26:22 np0005535469 nova_compute[254092]: 2025-11-25 16:26:22.493 254096 INFO nova.virt.libvirt.driver [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance destroyed successfully.#033[00m
Nov 25 11:26:22 np0005535469 nova_compute[254092]: 2025-11-25 16:26:22.493 254096 DEBUG nova.objects.instance [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'resources' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 159 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 231 op/s
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.020 254096 INFO nova.virt.libvirt.driver [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deleting instance files /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175_del#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.021 254096 INFO nova.virt.libvirt.driver [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deletion of /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175_del complete#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.090 254096 INFO nova.compute.manager [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.091 254096 DEBUG oslo.service.loopingcall [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.092 254096 DEBUG nova.compute.manager [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.092 254096 DEBUG nova.network.neutron [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.228 254096 DEBUG nova.network.neutron [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.241 254096 DEBUG nova.network.neutron [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.252 254096 INFO nova.compute.manager [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 0.16 seconds to deallocate network for instance.#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.300 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.301 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.382 254096 DEBUG oslo_concurrency.processutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474540718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.853 254096 DEBUG oslo_concurrency.processutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.860 254096 DEBUG nova.compute.provider_tree [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.892 254096 DEBUG nova.scheduler.client.report [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.926 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:23 np0005535469 nova_compute[254092]: 2025-11-25 16:26:23.991 254096 INFO nova.scheduler.client.report [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Deleted allocations for instance 25540ca1-3029-48b2-8ab3-c800a16c8175#033[00m
Nov 25 11:26:24 np0005535469 nova_compute[254092]: 2025-11-25 16:26:24.065 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 159 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 195 op/s
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.456 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087970.4554572, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.456 254096 INFO nova.compute.manager [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.483 254096 DEBUG nova.compute.manager [None req-0a44f5b5-ad17-4f00-91df-95b561184e57 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.520 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087970.5175343, e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.520 254096 INFO nova.compute.manager [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.551 254096 DEBUG nova.compute.manager [None req-0e010ba8-300d-4e71-b4be-31cde3740ee5 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.554 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.555 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.555 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "e0ce39e6-663b-4ff2-84e5-98aa54955701-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.556 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.556 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.557 254096 INFO nova.compute.manager [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Terminating instance
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.559 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "refresh_cache-e0ce39e6-663b-4ff2-84e5-98aa54955701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.559 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquired lock "refresh_cache-e0ce39e6-663b-4ff2-84e5-98aa54955701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.559 254096 DEBUG nova.network.neutron [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:26:25 np0005535469 nova_compute[254092]: 2025-11-25 16:26:25.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:26 np0005535469 nova_compute[254092]: 2025-11-25 16:26:26.810 254096 DEBUG nova.network.neutron [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:26:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 254 op/s
Nov 25 11:26:27 np0005535469 nova_compute[254092]: 2025-11-25 16:26:27.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:27 np0005535469 nova_compute[254092]: 2025-11-25 16:26:27.152 254096 DEBUG nova.network.neutron [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:26:27 np0005535469 nova_compute[254092]: 2025-11-25 16:26:27.166 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Releasing lock "refresh_cache-e0ce39e6-663b-4ff2-84e5-98aa54955701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:26:27 np0005535469 nova_compute[254092]: 2025-11-25 16:26:27.167 254096 DEBUG nova.compute.manager [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:26:27 np0005535469 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 11:26:27 np0005535469 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 13.078s CPU time.
Nov 25 11:26:27 np0005535469 systemd-machined[216343]: Machine qemu-7-instance-00000007 terminated.
Nov 25 11:26:27 np0005535469 nova_compute[254092]: 2025-11-25 16:26:27.399 254096 INFO nova.virt.libvirt.driver [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance destroyed successfully.
Nov 25 11:26:27 np0005535469 nova_compute[254092]: 2025-11-25 16:26:27.400 254096 DEBUG nova.objects.instance [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'resources' on Instance uuid e0ce39e6-663b-4ff2-84e5-98aa54955701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:26:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Nov 25 11:26:28 np0005535469 nova_compute[254092]: 2025-11-25 16:26:28.836 254096 INFO nova.virt.libvirt.driver [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deleting instance files /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701_del
Nov 25 11:26:28 np0005535469 nova_compute[254092]: 2025-11-25 16:26:28.837 254096 INFO nova.virt.libvirt.driver [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deletion of /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701_del complete
Nov 25 11:26:28 np0005535469 nova_compute[254092]: 2025-11-25 16:26:28.961 254096 INFO nova.compute.manager [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 1.79 seconds to destroy the instance on the hypervisor.
Nov 25 11:26:28 np0005535469 nova_compute[254092]: 2025-11-25 16:26:28.961 254096 DEBUG oslo.service.loopingcall [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:26:28 np0005535469 nova_compute[254092]: 2025-11-25 16:26:28.962 254096 DEBUG nova.compute.manager [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:26:28 np0005535469 nova_compute[254092]: 2025-11-25 16:26:28.962 254096 DEBUG nova.network.neutron [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.245 254096 DEBUG nova.network.neutron [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.260 254096 DEBUG nova.network.neutron [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.277 254096 INFO nova.compute.manager [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 0.31 seconds to deallocate network for instance.
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.376 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.376 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.428 254096 DEBUG oslo_concurrency.processutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649592809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.831 254096 DEBUG oslo_concurrency.processutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.836 254096 DEBUG nova.compute.provider_tree [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:26:29 np0005535469 nova_compute[254092]: 2025-11-25 16:26:29.856 254096 DEBUG nova.scheduler.client.report [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:26:30 np0005535469 nova_compute[254092]: 2025-11-25 16:26:30.087 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:30 np0005535469 nova_compute[254092]: 2025-11-25 16:26:30.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 70 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Nov 25 11:26:30 np0005535469 nova_compute[254092]: 2025-11-25 16:26:30.971 254096 INFO nova.scheduler.client.report [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Deleted allocations for instance e0ce39e6-663b-4ff2-84e5-98aa54955701
Nov 25 11:26:31 np0005535469 nova_compute[254092]: 2025-11-25 16:26:31.056 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:32 np0005535469 nova_compute[254092]: 2025-11-25 16:26:32.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 692 KiB/s rd, 967 KiB/s wr, 110 op/s
Nov 25 11:26:34 np0005535469 nova_compute[254092]: 2025-11-25 16:26:34.259 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087979.2575266, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:26:34 np0005535469 nova_compute[254092]: 2025-11-25 16:26:34.259 254096 INFO nova.compute.manager [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Stopped (Lifecycle Event)
Nov 25 11:26:34 np0005535469 nova_compute[254092]: 2025-11-25 16:26:34.279 254096 DEBUG nova.compute.manager [None req-8ba24401-7337-4a14-ab4a-f09f834090bc - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 100 KiB/s wr, 87 op/s
Nov 25 11:26:35 np0005535469 nova_compute[254092]: 2025-11-25 16:26:35.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:35 np0005535469 nova_compute[254092]: 2025-11-25 16:26:35.854 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "cf8226e4-d68b-425a-8419-e273b162e9ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:35 np0005535469 nova_compute[254092]: 2025-11-25 16:26:35.854 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:35 np0005535469 nova_compute[254092]: 2025-11-25 16:26:35.880 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.033 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.034 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.041 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.041 254096 INFO nova.compute.claims [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.197 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904941502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.657 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.662 254096 DEBUG nova.compute.provider_tree [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.679 254096 DEBUG nova.scheduler.client.report [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.699 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.700 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.749 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.766 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.783 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:26:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 100 KiB/s wr, 87 op/s
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.877 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.879 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.879 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Creating image(s)
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.904 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.930 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.949 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:36 np0005535469 nova_compute[254092]: 2025-11-25 16:26:36.952 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.024 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.025 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.025 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.026 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.046 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.050 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cf8226e4-d68b-425a-8419-e273b162e9ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.289 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cf8226e4-d68b-425a-8419-e273b162e9ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.344 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] resizing rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.432 254096 DEBUG nova.objects.instance [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lazy-loading 'migration_context' on Instance uuid cf8226e4-d68b-425a-8419-e273b162e9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:26:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.451 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.451 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Ensure instance console log exists: /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.452 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.452 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.452 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.454 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.459 254096 WARNING nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.463 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.463 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.466 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.466 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.466 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.467 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.467 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.470 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.473 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/471529986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.970 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.994 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:37 np0005535469 nova_compute[254092]: 2025-11-25 16:26:37.998 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019693713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.432 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.434 254096 DEBUG nova.objects.instance [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf8226e4-d68b-425a-8419-e273b162e9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.465 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <uuid>cf8226e4-d68b-425a-8419-e273b162e9ee</uuid>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <name>instance-0000000a</name>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1206401827</nova:name>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:26:37</nova:creationTime>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:user uuid="ef9096d47e8b4fceb4fdb347f45e82ea">tempest-ServerDiagnosticsV248Test-1494605572-project-member</nova:user>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <nova:project uuid="3c8b74363ca84877a8f0a40f07822af8">tempest-ServerDiagnosticsV248Test-1494605572</nova:project>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <entry name="serial">cf8226e4-d68b-425a-8419-e273b162e9ee</entry>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <entry name="uuid">cf8226e4-d68b-425a-8419-e273b162e9ee</entry>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/cf8226e4-d68b-425a-8419-e273b162e9ee_disk">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/console.log" append="off"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:26:38 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:26:38 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:26:38 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:26:38 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.537 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.538 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.538 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Using config drive
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.558 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.943 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Creating config drive at /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config
Nov 25 11:26:38 np0005535469 nova_compute[254092]: 2025-11-25 16:26:38.948 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6kyk9p23 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:39 np0005535469 nova_compute[254092]: 2025-11-25 16:26:39.080 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6kyk9p23" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:39 np0005535469 nova_compute[254092]: 2025-11-25 16:26:39.105 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:26:39 np0005535469 nova_compute[254092]: 2025-11-25 16:26:39.110 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:26:39 np0005535469 nova_compute[254092]: 2025-11-25 16:26:39.271 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:26:39 np0005535469 nova_compute[254092]: 2025-11-25 16:26:39.272 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deleting local config drive /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config because it was imported into RBD.
Nov 25 11:26:39 np0005535469 systemd-machined[216343]: New machine qemu-10-instance-0000000a.
Nov 25 11:26:39 np0005535469 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:26:40
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes', 'backups']
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.112 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088000.112107, cf8226e4-d68b-425a-8419-e273b162e9ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.114 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] VM Resumed (Lifecycle Event)
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.116 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.116 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.119 254096 INFO nova.virt.libvirt.driver [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance spawned successfully.
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.120 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.150 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.154 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.157 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.158 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.158 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.158 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.159 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.159 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.196 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.196 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088000.1132455, cf8226e4-d68b-425a-8419-e273b162e9ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.197 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] VM Started (Lifecycle Event)#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.224 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.227 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.240 254096 INFO nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 3.36 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.241 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.253 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.300 254096 INFO nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 4.30 seconds to build instance.#033[00m
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.319 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:26:40 np0005535469 nova_compute[254092]: 2025-11-25 16:26:40.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 81 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Nov 25 11:26:41 np0005535469 nova_compute[254092]: 2025-11-25 16:26:41.349 254096 DEBUG nova.compute.manager [None req-501c19a4-dd2d-4f7c-a52d-05d7197f4554 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:41 np0005535469 nova_compute[254092]: 2025-11-25 16:26:41.351 254096 INFO nova.compute.manager [None req-501c19a4-dd2d-4f7c-a52d-05d7197f4554 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Retrieving diagnostics#033[00m
Nov 25 11:26:42 np0005535469 nova_compute[254092]: 2025-11-25 16:26:42.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:42 np0005535469 nova_compute[254092]: 2025-11-25 16:26:42.397 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087987.395677, e0ce39e6-663b-4ff2-84e5-98aa54955701 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:42 np0005535469 nova_compute[254092]: 2025-11-25 16:26:42.398 254096 INFO nova.compute.manager [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:26:42 np0005535469 nova_compute[254092]: 2025-11-25 16:26:42.421 254096 DEBUG nova.compute.manager [None req-fd08d703-3f26-4ec9-a35a-49f8cd5ccc9e - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 11:26:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 11:26:44 np0005535469 nova_compute[254092]: 2025-11-25 16:26:44.923 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:44 np0005535469 nova_compute[254092]: 2025-11-25 16:26:44.924 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:44 np0005535469 nova_compute[254092]: 2025-11-25 16:26:44.997 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.102 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.103 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.110 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.111 254096 INFO nova.compute.claims [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.336 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085521449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.821 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.827 254096 DEBUG nova.compute.provider_tree [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.847 254096 DEBUG nova.scheduler.client.report [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.930 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:45 np0005535469 nova_compute[254092]: 2025-11-25 16:26:45.932 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:26:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.118 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.119 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.465 254096 DEBUG nova.policy [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dda8ef18e79e4220b420023d65ccb78a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33fbb668df82403b9f379e45132213fd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:26:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 11:26:46 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.530 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.551 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.652 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.653 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.654 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Creating image(s)#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.683 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.711 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.736 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.740 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.798 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.799 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.800 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.800 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.823 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:46 np0005535469 nova_compute[254092]: 2025-11-25 16:26:46.828 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 138942ff-b720-4101-8dcf-38958751745b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.144 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 138942ff-b720-4101-8dcf-38958751745b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.240 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Successfully created port: f445c9f8-c211-4af8-a66d-21cacc81fdc5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.246 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] resizing rbd image 138942ff-b720-4101-8dcf-38958751745b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.332 254096 DEBUG nova.objects.instance [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lazy-loading 'migration_context' on Instance uuid 138942ff-b720-4101-8dcf-38958751745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.344 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.344 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Ensure instance console log exists: /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.345 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.345 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.346 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:47 np0005535469 podman[275143]: 2025-11-25 16:26:47.645903056 +0000 UTC m=+0.060956720 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 11:26:47 np0005535469 podman[275142]: 2025-11-25 16:26:47.653428651 +0000 UTC m=+0.068431864 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 11:26:47 np0005535469 podman[275144]: 2025-11-25 16:26:47.687791416 +0000 UTC m=+0.089002543 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.900 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Successfully updated port: f445c9f8-c211-4af8-a66d-21cacc81fdc5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.915 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.916 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:26:47 np0005535469 nova_compute[254092]: 2025-11-25 16:26:47.917 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.002 254096 DEBUG nova.compute.manager [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.003 254096 DEBUG nova.compute.manager [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing instance network info cache due to event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.003 254096 DEBUG oslo_concurrency.lockutils [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.071 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:26:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 25 11:26:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 11:26:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 25 11:26:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 221 KiB/s wr, 112 op/s
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.967 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.990 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.990 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance network_info: |[{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.991 254096 DEBUG oslo_concurrency.lockutils [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.992 254096 DEBUG nova.network.neutron [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.995 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start _get_guest_xml network_info=[{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:26:48 np0005535469 nova_compute[254092]: 2025-11-25 16:26:48.999 254096 WARNING nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.004 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.004 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.009 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.010 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.010 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.011 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.011 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.011 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.013 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.013 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.013 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.014 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.016 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867437894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.464 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.484 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.488 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.506 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.533 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.534 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.535 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:26:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188071052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.947 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.950 254096 DEBUG nova.virt.libvirt.vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-964763500',id=11,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33fbb668df82403b9f379e45132213fd',ramdisk_id='',reservation_id='r-7p1lpp0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegative
TestJSON-1669688315',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:26:46Z,user_data=None,user_id='dda8ef18e79e4220b420023d65ccb78a',uuid=138942ff-b720-4101-8dcf-38958751745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.950 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converting VIF {"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.952 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.954 254096 DEBUG nova.objects.instance [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 138942ff-b720-4101-8dcf-38958751745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.969 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <uuid>138942ff-b720-4101-8dcf-38958751745b</uuid>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <name>instance-0000000b</name>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500</nova:name>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:26:49</nova:creationTime>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:user uuid="dda8ef18e79e4220b420023d65ccb78a">tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member</nova:user>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:project uuid="33fbb668df82403b9f379e45132213fd">tempest-FloatingIPsAssociationNegativeTestJSON-1669688315</nova:project>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <nova:port uuid="f445c9f8-c211-4af8-a66d-21cacc81fdc5">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <entry name="serial">138942ff-b720-4101-8dcf-38958751745b</entry>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <entry name="uuid">138942ff-b720-4101-8dcf-38958751745b</entry>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/138942ff-b720-4101-8dcf-38958751745b_disk">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/138942ff-b720-4101-8dcf-38958751745b_disk.config">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f7:94:8b"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <target dev="tapf445c9f8-c2"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/console.log" append="off"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:26:49 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:26:49 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:26:49 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:26:49 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:26:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615863893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.973 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Preparing to wait for external event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.974 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.974 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.974 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.975 254096 DEBUG nova.virt.libvirt.vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-964763500',id=11,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33fbb668df82403b9f379e45132213fd',ramdisk_id='',reservation_id='r-7p1lpp0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:26:46Z,user_data=None,user_id='dda8ef18e79e4220b420023d65ccb78a',uuid=138942ff-b720-4101-8dcf-38958751745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.975 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converting VIF {"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.975 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.976 254096 DEBUG os_vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.977 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.977 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.980 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf445c9f8-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.980 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf445c9f8-c2, col_values=(('external_ids', {'iface-id': 'f445c9f8-c211-4af8-a66d-21cacc81fdc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:94:8b', 'vm-uuid': '138942ff-b720-4101-8dcf-38958751745b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:49 np0005535469 NetworkManager[48891]: <info>  [1764088009.9830] manager: (tapf445c9f8-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.990 254096 INFO os_vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2')#033[00m
Nov 25 11:26:49 np0005535469 nova_compute[254092]: 2025-11-25 16:26:49.992 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.148 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.148 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.149 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] No VIF found with MAC fa:16:3e:f7:94:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.149 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Using config drive#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.165 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.196 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.197 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.202 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.202 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.368 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.368 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4547MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.369 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.369 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cf8226e4-d68b-425a-8419-e273b162e9ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 138942ff-b720-4101-8dcf-38958751745b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.580 254096 DEBUG nova.network.neutron [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updated VIF entry in instance network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.581 254096 DEBUG nova.network.neutron [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.602 254096 DEBUG oslo_concurrency.lockutils [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.664 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Creating config drive at /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.671 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcix2h5r3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.801 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcix2h5r3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.835 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:26:50 np0005535469 nova_compute[254092]: 2025-11-25 16:26:50.838 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config 138942ff-b720-4101-8dcf-38958751745b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 126 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Nov 25 11:26:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315259713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.020 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.028 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006243971600898613 of space, bias 1.0, pg target 0.1873191480269584 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.042 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.163 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.164 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.591 254096 DEBUG nova.compute.manager [None req-6938c30d-9e94-4bb2-ac62-c4a87fe747ce 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.594 254096 INFO nova.compute.manager [None req-6938c30d-9e94-4bb2-ac62-c4a87fe747ce 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Retrieving diagnostics#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.743 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config 138942ff-b720-4101-8dcf-38958751745b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.906s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.744 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deleting local config drive /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:26:51 np0005535469 kernel: tapf445c9f8-c2: entered promiscuous mode
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:51Z|00043|binding|INFO|Claiming lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 for this chassis.
Nov 25 11:26:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:51Z|00044|binding|INFO|f445c9f8-c211-4af8-a66d-21cacc81fdc5: Claiming fa:16:3e:f7:94:8b 10.100.0.3
Nov 25 11:26:51 np0005535469 NetworkManager[48891]: <info>  [1764088011.8121] manager: (tapf445c9f8-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.827 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:94:8b 10.100.0.3'], port_security=['fa:16:3e:f7:94:8b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '138942ff-b720-4101-8dcf-38958751745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33fbb668df82403b9f379e45132213fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '554076c7-aad0-4f65-8aee-4a40c468d6fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8958ae03-9d40-4b3e-bee3-6d4c69009647, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f445c9f8-c211-4af8-a66d-21cacc81fdc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.830 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f445c9f8-c211-4af8-a66d-21cacc81fdc5 in datapath 12698a0a-7c9a-41c0-97e4-92c265b0a639 bound to our chassis#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.831 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12698a0a-7c9a-41c0-97e4-92c265b0a639#033[00m
Nov 25 11:26:51 np0005535469 systemd-udevd[275380]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:26:51 np0005535469 systemd-machined[216343]: New machine qemu-11-instance-0000000b.
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.852 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b603723e-3d49-435b-b324-2146012ffa08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.853 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12698a0a-71 in ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.856 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12698a0a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.856 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d76e2cbe-8859-4c19-9646-8b7078842931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e80c5c3f-93e5-4a79-9701-933bf0715c6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 25 11:26:51 np0005535469 NetworkManager[48891]: <info>  [1764088011.8664] device (tapf445c9f8-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:26:51 np0005535469 NetworkManager[48891]: <info>  [1764088011.8689] device (tapf445c9f8-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.871 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce65b14-ecb1-4110-a2c5-5d099df517d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:51Z|00045|binding|INFO|Setting lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 ovn-installed in OVS
Nov 25 11:26:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:51Z|00046|binding|INFO|Setting lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 up in Southbound
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.896 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8a3912-b0df-41bb-8d4d-aa8aa34c0912]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.935 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1463890e-cbeb-40ee-88e2-9317639666d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 systemd-udevd[275383]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.942 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9eed15f8-e462-4192-b31f-4301975f28d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 NetworkManager[48891]: <info>  [1764088011.9452] manager: (tap12698a0a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.959 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "cf8226e4-d68b-425a-8419-e273b162e9ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.959 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.959 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "cf8226e4-d68b-425a-8419-e273b162e9ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.960 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.960 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.961 254096 INFO nova.compute.manager [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Terminating instance#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.961 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "refresh_cache-cf8226e4-d68b-425a-8419-e273b162e9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.962 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquired lock "refresh_cache-cf8226e4-d68b-425a-8419-e273b162e9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:26:51 np0005535469 nova_compute[254092]: 2025-11-25 16:26:51.962 254096 DEBUG nova.network.neutron [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.976 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4297cf-fd20-4a72-a6f5-56ffc42aaeba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.980 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8e27aa2d-1da2-4bac-9ba3-8874d77ac949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 NetworkManager[48891]: <info>  [1764088012.0023] device (tap12698a0a-70): carrier: link connected
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.007 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c700ef8f-fb93-4263-b301-ca16043b9b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.023 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6b1d56-d018-4dba-aaf7-8d5517486119]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12698a0a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:89:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446959, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275412, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.037 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a86f6cfc-7cc5-4d78-a9ec-0f29c853edb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:89d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446959, 'tstamp': 446959}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275414, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[990e5676-5747-4d5e-b215-bf85ac252c2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12698a0a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:89:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446959, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275415, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e2e5f-4bf1-4a0e-a186-8f5afb464420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b63c14e8-9454-47e1-bb90-a06098414c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.150 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12698a0a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.150 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.151 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12698a0a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:52 np0005535469 NetworkManager[48891]: <info>  [1764088012.1531] manager: (tap12698a0a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.155 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:26:52 np0005535469 kernel: tap12698a0a-70: entered promiscuous mode
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.157 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12698a0a-70, col_values=(('external_ids', {'iface-id': '0d00cd0f-f859-441b-b6f5-82cb8f44f315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:26:52Z|00047|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.176 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.177 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12698a0a-7c9a-41c0-97e4-92c265b0a639.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12698a0a-7c9a-41c0-97e4-92c265b0a639.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da86e0a0-4822-4aeb-a78a-f1d7ab92fb64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.178 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-12698a0a-7c9a-41c0-97e4-92c265b0a639
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/12698a0a-7c9a-41c0-97e4-92c265b0a639.pid.haproxy
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 12698a0a-7c9a-41c0-97e4-92c265b0a639
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:26:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.179 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'env', 'PROCESS_TAG=haproxy-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12698a0a-7c9a-41c0-97e4-92c265b0a639.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.346 254096 DEBUG nova.network.neutron [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:26:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.468 254096 DEBUG nova.compute.manager [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.469 254096 DEBUG oslo_concurrency.lockutils [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.470 254096 DEBUG oslo_concurrency.lockutils [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.471 254096 DEBUG oslo_concurrency.lockutils [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.472 254096 DEBUG nova.compute.manager [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Processing event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.600 254096 DEBUG nova.network.neutron [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.613 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Releasing lock "refresh_cache-cf8226e4-d68b-425a-8419-e273b162e9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.614 254096 DEBUG nova.compute.manager [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:26:52 np0005535469 podman[275447]: 2025-11-25 16:26:52.647108631 +0000 UTC m=+0.074096017 container create 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 11:26:52 np0005535469 podman[275447]: 2025-11-25 16:26:52.603697651 +0000 UTC m=+0.030685077 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:26:52 np0005535469 systemd[1]: Started libpod-conmon-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582.scope.
Nov 25 11:26:52 np0005535469 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 25 11:26:52 np0005535469 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 11.737s CPU time.
Nov 25 11:26:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:26:52 np0005535469 systemd-machined[216343]: Machine qemu-10-instance-0000000a terminated.
Nov 25 11:26:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5912db2bca04cbf22b9662e16eec824478d559ed5d99feb271afa3bc163b746/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:26:52 np0005535469 podman[275447]: 2025-11-25 16:26:52.762160802 +0000 UTC m=+0.189148148 container init 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 11:26:52 np0005535469 podman[275447]: 2025-11-25 16:26:52.768567048 +0000 UTC m=+0.195554394 container start 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:26:52 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : New worker (275468) forked
Nov 25 11:26:52 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : Loading success.
Nov 25 11:26:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.843 254096 INFO nova.virt.libvirt.driver [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance destroyed successfully.#033[00m
Nov 25 11:26:52 np0005535469 nova_compute[254092]: 2025-11-25 16:26:52.844 254096 DEBUG nova.objects.instance [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lazy-loading 'resources' on Instance uuid cf8226e4-d68b-425a-8419-e273b162e9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.295 254096 INFO nova.virt.libvirt.driver [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deleting instance files /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee_del#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.297 254096 INFO nova.virt.libvirt.driver [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deletion of /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee_del complete#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.339 254096 INFO nova.compute.manager [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.341 254096 DEBUG oslo.service.loopingcall [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.341 254096 DEBUG nova.compute.manager [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.341 254096 DEBUG nova.network.neutron [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.579 254096 DEBUG nova.network.neutron [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.594 254096 DEBUG nova.network.neutron [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.608 254096 INFO nova.compute.manager [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 0.27 seconds to deallocate network for instance.#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.650 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.651 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:53 np0005535469 nova_compute[254092]: 2025-11-25 16:26:53.719 254096 DEBUG oslo_concurrency.processutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.184 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.186 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088014.1832747, 138942ff-b720-4101-8dcf-38958751745b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.187 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Started (Lifecycle Event)#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.197 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:26:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:26:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728620109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.205 254096 INFO nova.virt.libvirt.driver [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance spawned successfully.#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.206 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.219 254096 DEBUG oslo_concurrency.processutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.226 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.236 254096 DEBUG nova.compute.provider_tree [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.247 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.248 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.249 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.250 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.251 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.252 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.263 254096 DEBUG nova.scheduler.client.report [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.268 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.269 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088014.1856723, 138942ff-b720-4101-8dcf-38958751745b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.269 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.325 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.334 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.338 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088014.190129, 138942ff-b720-4101-8dcf-38958751745b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.338 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.352 254096 INFO nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 7.70 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.353 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.359 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.361 254096 INFO nova.scheduler.client.report [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Deleted allocations for instance cf8226e4-d68b-425a-8419-e273b162e9ee#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.363 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.429 254096 INFO nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 9.37 seconds to build instance.#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.458 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.461 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.630 254096 DEBUG nova.compute.manager [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.631 254096 DEBUG oslo_concurrency.lockutils [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.631 254096 DEBUG oslo_concurrency.lockutils [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.632 254096 DEBUG oslo_concurrency.lockutils [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.632 254096 DEBUG nova.compute.manager [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] No waiting events found dispatching network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.633 254096 WARNING nova.compute.manager [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received unexpected event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:26:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Nov 25 11:26:54 np0005535469 nova_compute[254092]: 2025-11-25 16:26:54.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:26:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732839102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:26:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:26:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732839102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:26:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Nov 25 11:26:57 np0005535469 nova_compute[254092]: 2025-11-25 16:26:57.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:26:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:26:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 25 11:26:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 11:26:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 25 11:26:58 np0005535469 nova_compute[254092]: 2025-11-25 16:26:58.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:58 np0005535469 nova_compute[254092]: 2025-11-25 16:26:58.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:26:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Nov 25 11:26:59 np0005535469 nova_compute[254092]: 2025-11-25 16:26:59.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.9 MiB/s wr, 181 op/s
Nov 25 11:27:02 np0005535469 nova_compute[254092]: 2025-11-25 16:27:02.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 175 op/s
Nov 25 11:27:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 175 op/s
Nov 25 11:27:04 np0005535469 nova_compute[254092]: 2025-11-25 16:27:04.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:05 np0005535469 NetworkManager[48891]: <info>  [1764088025.2810] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 11:27:05 np0005535469 NetworkManager[48891]: <info>  [1764088025.2821] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:05Z|00048|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.835 254096 DEBUG nova.compute.manager [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.835 254096 DEBUG nova.compute.manager [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing instance network info cache due to event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.835 254096 DEBUG oslo_concurrency.lockutils [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.836 254096 DEBUG oslo_concurrency.lockutils [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:27:05 np0005535469 nova_compute[254092]: 2025-11-25 16:27:05.836 254096 DEBUG nova.network.neutron [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:27:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 95 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 649 KiB/s wr, 66 op/s
Nov 25 11:27:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:07Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:94:8b 10.100.0.3
Nov 25 11:27:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:07Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:94:8b 10.100.0.3
Nov 25 11:27:07 np0005535469 nova_compute[254092]: 2025-11-25 16:27:07.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:07 np0005535469 nova_compute[254092]: 2025-11-25 16:27:07.838 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088012.8369787, cf8226e4-d68b-425a-8419-e273b162e9ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:07 np0005535469 nova_compute[254092]: 2025-11-25 16:27:07.839 254096 INFO nova.compute.manager [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:27:07 np0005535469 nova_compute[254092]: 2025-11-25 16:27:07.866 254096 DEBUG nova.compute.manager [None req-a5c9ebac-bd5a-4ba6-bdda-03ca66c240b9 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.197 254096 DEBUG nova.network.neutron [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updated VIF entry in instance network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.198 254096 DEBUG nova.network.neutron [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.212 254096 DEBUG oslo_concurrency.lockutils [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.756 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.758 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.777 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:27:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 95 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 581 KiB/s wr, 59 op/s
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.849 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.849 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.859 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.860 254096 INFO nova.compute.claims [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:27:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:08.948 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:08.949 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:27:08 np0005535469 nova_compute[254092]: 2025-11-25 16:27:08.997 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100195329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.411 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.419 254096 DEBUG nova.compute.provider_tree [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.433 254096 DEBUG nova.scheduler.client.report [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.459 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.460 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.507 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.520 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.534 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.636 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.638 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.638 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating image(s)#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.657 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.677 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.696 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.699 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.753 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.754 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.754 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.755 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.772 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.775 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.827 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "38463335-bf41-4609-b81c-08bc8da299af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.828 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.845 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.924 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.925 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.937 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.938 254096 INFO nova.compute.claims [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:27:09 np0005535469 nova_compute[254092]: 2025-11-25 16:27:09.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.101 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.173 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.211 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] resizing rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.305 254096 DEBUG nova.objects.instance [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'migration_context' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.320 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.320 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ensure instance console log exists: /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.321 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.321 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.322 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.324 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.328 254096 WARNING nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.333 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.334 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.336 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.337 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.337 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.338 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.338 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.340 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.340 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.340 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.341 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.341 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.344 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548401150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.653 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.661 254096 DEBUG nova.compute.provider_tree [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.677 254096 DEBUG nova.scheduler.client.report [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.693 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.694 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.736 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.736 254096 DEBUG nova.network.neutron [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.750 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:27:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895172410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.765 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.780 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.805 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.811 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 117 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.898 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.899 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.900 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Creating image(s)#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.917 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.934 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.952 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:10 np0005535469 nova_compute[254092]: 2025-11-25 16:27:10.955 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.012 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.013 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.014 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.014 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.035 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.038 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 38463335-bf41-4609-b81c-08bc8da299af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.281 254096 DEBUG nova.network.neutron [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.281 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:27:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411282142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.309 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 38463335-bf41-4609-b81c-08bc8da299af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.332 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.334 254096 DEBUG nova.objects.instance [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.362 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <uuid>5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</uuid>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <name>instance-0000000c</name>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdmin275Test-server-1909956679</nova:name>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:10</nova:creationTime>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:user uuid="f16e272341774153991e7ed856e34188">tempest-ServersAdmin275Test-1636693528-project-member</nova:user>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <nova:project uuid="a9a4c2749782401c89e06948356b0e0a">tempest-ServersAdmin275Test-1636693528</nova:project>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <entry name="serial">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <entry name="uuid">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log" append="off"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:11 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:11 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:11 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:11 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.368 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] resizing rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.442 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.443 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.443 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Using config drive#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.464 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.616 254096 DEBUG nova.objects.instance [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'migration_context' on Instance uuid 38463335-bf41-4609-b81c-08bc8da299af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.630 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.630 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Ensure instance console log exists: /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.631 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.631 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.631 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.632 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.638 254096 WARNING nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.643 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.644 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.646 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.649 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.649 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:27:11 np0005535469 nova_compute[254092]: 2025-11-25 16:27:11.651 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165087591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.147 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.165 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.170 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.193 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating config drive at /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.198 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ra_u8gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.327 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ra_u8gw" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.345 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.348 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945289862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.590 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.592 254096 DEBUG nova.objects.instance [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38463335-bf41-4609-b81c-08bc8da299af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.610 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <uuid>38463335-bf41-4609-b81c-08bc8da299af</uuid>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <name>instance-0000000d</name>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1273486075</nova:name>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:11</nova:creationTime>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:user uuid="6fecc7ec96b94801b693d75b96da5cca">tempest-LiveMigrationNegativeTest-1263337402-project-member</nova:user>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <nova:project uuid="200751433d7c4e9994df0ea449a6cb48">tempest-LiveMigrationNegativeTest-1263337402</nova:project>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <entry name="serial">38463335-bf41-4609-b81c-08bc8da299af</entry>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <entry name="uuid">38463335-bf41-4609-b81c-08bc8da299af</entry>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/38463335-bf41-4609-b81c-08bc8da299af_disk">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/38463335-bf41-4609-b81c-08bc8da299af_disk.config">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/console.log" append="off"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:12 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:12 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:12 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:12 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.710 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.711 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.712 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Using config drive#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.734 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.872 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:12 np0005535469 nova_compute[254092]: 2025-11-25 16:27:12.873 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting local config drive /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config because it was imported into RBD.#033[00m
Nov 25 11:27:12 np0005535469 systemd-machined[216343]: New machine qemu-12-instance-0000000c.
Nov 25 11:27:12 np0005535469 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.162 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Creating config drive at /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.168 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2f_isdis execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.216 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.2164097, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.217 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.220 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.221 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.224 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance spawned successfully.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.225 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.243 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.247 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.247 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.248 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.248 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.249 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.249 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.274 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.274 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.2174766, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Started (Lifecycle Event)#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.295 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2f_isdis" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.317 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.319 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config 38463335-bf41-4609-b81c-08bc8da299af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.343 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.345 254096 INFO nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 3.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.346 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.349 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.379 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.426 254096 INFO nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 4.60 seconds to build instance.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.449 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.481 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config 38463335-bf41-4609-b81c-08bc8da299af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.482 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deleting local config drive /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config because it was imported into RBD.#033[00m
Nov 25 11:27:13 np0005535469 systemd-machined[216343]: New machine qemu-13-instance-0000000d.
Nov 25 11:27:13 np0005535469 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 25 11:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.599 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.599 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.600 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.865 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.8648262, 38463335-bf41-4609-b81c-08bc8da299af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.868 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.870 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.871 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.876 254096 INFO nova.virt.libvirt.driver [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance spawned successfully.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.876 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.889 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.896 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.901 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.902 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.903 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.904 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.904 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.905 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.928 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.929 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.8650072, 38463335-bf41-4609-b81c-08bc8da299af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.930 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] VM Started (Lifecycle Event)#033[00m
Nov 25 11:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.951 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.951 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.956 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.973 254096 INFO nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 3.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.974 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:13 np0005535469 nova_compute[254092]: 2025-11-25 16:27:13.983 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:14 np0005535469 nova_compute[254092]: 2025-11-25 16:27:14.033 254096 INFO nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 4.14 seconds to build instance.#033[00m
Nov 25 11:27:14 np0005535469 nova_compute[254092]: 2025-11-25 16:27:14.048 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 11:27:15 np0005535469 nova_compute[254092]: 2025-11-25 16:27:14.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.7 MiB/s wr, 243 op/s
Nov 25 11:27:17 np0005535469 nova_compute[254092]: 2025-11-25 16:27:17.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.166 254096 INFO nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Rebuilding instance#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.299 254096 DEBUG nova.compute.manager [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.300 254096 DEBUG nova.compute.manager [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing instance network info cache due to event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.301 254096 DEBUG oslo_concurrency.lockutils [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.301 254096 DEBUG oslo_concurrency.lockutils [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.301 254096 DEBUG nova.network.neutron [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.562 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.583 254096 DEBUG nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:18 np0005535469 podman[276298]: 2025-11-25 16:27:18.652318868 +0000 UTC m=+0.061027962 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.654 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'pci_requests' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:18 np0005535469 podman[276297]: 2025-11-25 16:27:18.659103763 +0000 UTC m=+0.068285640 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.666 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.683 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'resources' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:18 np0005535469 podman[276299]: 2025-11-25 16:27:18.691515035 +0000 UTC m=+0.094255576 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.695 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'migration_context' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.715 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:27:18 np0005535469 nova_compute[254092]: 2025-11-25 16:27:18.718 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:27:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 231 op/s
Nov 25 11:27:20 np0005535469 nova_compute[254092]: 2025-11-25 16:27:20.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:20 np0005535469 nova_compute[254092]: 2025-11-25 16:27:20.009 254096 DEBUG nova.network.neutron [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updated VIF entry in instance network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:27:20 np0005535469 nova_compute[254092]: 2025-11-25 16:27:20.010 254096 DEBUG nova.network.neutron [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:20 np0005535469 nova_compute[254092]: 2025-11-25 16:27:20.025 254096 DEBUG oslo_concurrency.lockutils [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:27:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.2 MiB/s wr, 254 op/s
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.345 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "c058218b-7732-4ced-b6a3-bb04203967d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.345 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.372 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.487 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.487 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.495 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.495 254096 INFO nova.compute.claims [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:27:21 np0005535469 nova_compute[254092]: 2025-11-25 16:27:21.725 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332587435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.223 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.231 254096 DEBUG nova.compute.provider_tree [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.261 254096 DEBUG nova.scheduler.client.report [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.291 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.294 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.348 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.349 254096 DEBUG nova.network.neutron [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.370 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.391 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.506 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.508 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.509 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Creating image(s)#033[00m
Nov 25 11:27:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.541 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.565 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.588 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.592 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:22 np0005535469 podman[276582]: 2025-11-25 16:27:22.655935614 +0000 UTC m=+0.075136985 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.669 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.670 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.670 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.671 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.697 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.703 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c058218b-7732-4ced-b6a3-bb04203967d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:22 np0005535469 podman[276582]: 2025-11-25 16:27:22.742212682 +0000 UTC m=+0.161414043 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.795 254096 DEBUG nova.network.neutron [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.796 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:27:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.945 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.945 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.946 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.946 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.946 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.950 254096 INFO nova.compute.manager [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Terminating instance#033[00m
Nov 25 11:27:22 np0005535469 nova_compute[254092]: 2025-11-25 16:27:22.952 254096 DEBUG nova.compute.manager [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:27:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:23Z|00049|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.073 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c058218b-7732-4ced-b6a3-bb04203967d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:23 np0005535469 kernel: tapf445c9f8-c2 (unregistering): left promiscuous mode
Nov 25 11:27:23 np0005535469 NetworkManager[48891]: <info>  [1764088043.0869] device (tapf445c9f8-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:27:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:23Z|00050|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 11:27:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:23Z|00051|binding|INFO|Releasing lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 from this chassis (sb_readonly=0)
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:23Z|00052|binding|INFO|Removing iface tapf445c9f8-c2 ovn-installed in OVS
Nov 25 11:27:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:27:23Z|00053|binding|INFO|Setting lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 down in Southbound
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.128 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:94:8b 10.100.0.3'], port_security=['fa:16:3e:f7:94:8b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '138942ff-b720-4101-8dcf-38958751745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33fbb668df82403b9f379e45132213fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '554076c7-aad0-4f65-8aee-4a40c468d6fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8958ae03-9d40-4b3e-bee3-6d4c69009647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f445c9f8-c211-4af8-a66d-21cacc81fdc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.129 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f445c9f8-c211-4af8-a66d-21cacc81fdc5 in datapath 12698a0a-7c9a-41c0-97e4-92c265b0a639 unbound from our chassis#033[00m
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.129 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12698a0a-7c9a-41c0-97e4-92c265b0a639, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3229cc-74fe-4090-88e2-c5ac1a07adc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.131 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 namespace which is not needed anymore#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:23 np0005535469 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 25 11:27:23 np0005535469 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 14.723s CPU time.
Nov 25 11:27:23 np0005535469 systemd-machined[216343]: Machine qemu-11-instance-0000000b terminated.
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.205 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] resizing rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:23 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : haproxy version is 2.8.14-c23fe91
Nov 25 11:27:23 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : path to executable is /usr/sbin/haproxy
Nov 25 11:27:23 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [WARNING]  (275466) : Exiting Master process...
Nov 25 11:27:23 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [ALERT]    (275466) : Current worker (275468) exited with code 143 (Terminated)
Nov 25 11:27:23 np0005535469 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [WARNING]  (275466) : All workers exited. Exiting... (0)
Nov 25 11:27:23 np0005535469 systemd[1]: libpod-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582.scope: Deactivated successfully.
Nov 25 11:27:23 np0005535469 podman[276827]: 2025-11-25 16:27:23.305882722 +0000 UTC m=+0.066343336 container died 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.327 254096 DEBUG nova.objects.instance [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'migration_context' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.341 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.342 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Ensure instance console log exists: /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.342 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.343 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.343 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.344 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.347 254096 DEBUG nova.compute.manager [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-unplugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.347 254096 DEBUG oslo_concurrency.lockutils [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG oslo_concurrency.lockutils [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG oslo_concurrency.lockutils [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG nova.compute.manager [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] No waiting events found dispatching network-vif-unplugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG nova.compute.manager [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-unplugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:27:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582-userdata-shm.mount: Deactivated successfully.
Nov 25 11:27:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b5912db2bca04cbf22b9662e16eec824478d559ed5d99feb271afa3bc163b746-merged.mount: Deactivated successfully.
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.355 254096 WARNING nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.361 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.362 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.366 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.366 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.367 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.367 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.367 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.372 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:23 np0005535469 podman[276827]: 2025-11-25 16:27:23.382518738 +0000 UTC m=+0.142979352 container cleanup 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:23 np0005535469 systemd[1]: libpod-conmon-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582.scope: Deactivated successfully.
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.412 254096 INFO nova.virt.libvirt.driver [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance destroyed successfully.
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.412 254096 DEBUG nova.objects.instance [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lazy-loading 'resources' on Instance uuid 138942ff-b720-4101-8dcf-38958751745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.427 254096 DEBUG nova.virt.libvirt.vif [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-964763500',id=11,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:26:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33fbb668df82403b9f379e45132213fd',ramdisk_id='',reservation_id='r-7p1lpp0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:26:54Z,user_data=None,user_id='dda8ef18e79e4220b420023d65ccb78a',uuid=138942ff-b720-4101-8dcf-38958751745b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.428 254096 DEBUG nova.network.os_vif_util [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converting VIF {"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.429 254096 DEBUG nova.network.os_vif_util [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.429 254096 DEBUG os_vif [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.432 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf445c9f8-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.440 254096 INFO os_vif [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2')
Nov 25 11:27:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:27:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:27:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:23 np0005535469 podman[276907]: 2025-11-25 16:27:23.485999464 +0000 UTC m=+0.065496574 container remove 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.493 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[540ef775-ff86-45a7-bacc-8f01a308776e]: (4, ('Tue Nov 25 04:27:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 (15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582)\n15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582\nTue Nov 25 04:27:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 (15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582)\n15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.496 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eeddf2-7612-450f-b3b3-71b725ed639d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.497 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12698a0a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:27:23 np0005535469 kernel: tap12698a0a-70: left promiscuous mode
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.515 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9951429-7cd1-4ee8-9cc7-5b688fcf33e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.537 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7849fbc-aab9-4fe8-b422-f8d97210aca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.539 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b1689e-9031-4fe4-8e25-be936af12294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dffe8853-6e40-4936-8469-b8fbaed47c9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446951, 'reachable_time': 44104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276999, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 systemd[1]: run-netns-ovnmeta\x2d12698a0a\x2d7c9a\x2d41c0\x2d97e4\x2d92c265b0a639.mount: Deactivated successfully.
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.560 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:27:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.561 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dee2a3-8ac7-4ae8-b69f-73003447c8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:27:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165044617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.869 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.903 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.919 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.954 254096 INFO nova.virt.libvirt.driver [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deleting instance files /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b_del
Nov 25 11:27:23 np0005535469 nova_compute[254092]: 2025-11-25 16:27:23.955 254096 INFO nova.virt.libvirt.driver [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deletion of /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b_del complete
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.050 254096 INFO nova.compute.manager [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 1.10 seconds to destroy the instance on the hypervisor.
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.051 254096 DEBUG oslo.service.loopingcall [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.051 254096 DEBUG nova.compute.manager [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.051 254096 DEBUG nova.network.neutron [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev de185658-503a-4bf6-8832-8651140d8904 does not exist
Nov 25 11:27:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e7c1a43a-83fa-4d74-8b3b-7f624197e110 does not exist
Nov 25 11:27:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7542f6f8-f3c9-4dbd-88e7-473bb591cb63 does not exist
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603418679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.387 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.388 254096 DEBUG nova.objects.instance [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'pci_devices' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.403 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <uuid>c058218b-7732-4ced-b6a3-bb04203967d4</uuid>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <name>instance-0000000e</name>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:name>tempest-LiveMigrationNegativeTest-server-3347679</nova:name>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:23</nova:creationTime>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:user uuid="6fecc7ec96b94801b693d75b96da5cca">tempest-LiveMigrationNegativeTest-1263337402-project-member</nova:user>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <nova:project uuid="200751433d7c4e9994df0ea449a6cb48">tempest-LiveMigrationNegativeTest-1263337402</nova:project>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <entry name="serial">c058218b-7732-4ced-b6a3-bb04203967d4</entry>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <entry name="uuid">c058218b-7732-4ced-b6a3-bb04203967d4</entry>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c058218b-7732-4ced-b6a3-bb04203967d4_disk">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c058218b-7732-4ced-b6a3-bb04203967d4_disk.config">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/console.log" append="off"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:24 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:24 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:24 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:24 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.497 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.497 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.498 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Using config drive#033[00m
Nov 25 11:27:24 np0005535469 nova_compute[254092]: 2025-11-25 16:27:24.517 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:27:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.860266634 +0000 UTC m=+0.079735341 container create e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:24 np0005535469 systemd[1]: Started libpod-conmon-e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2.scope.
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.814313014 +0000 UTC m=+0.033781741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:27:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.949317157 +0000 UTC m=+0.168785945 container init e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.956300857 +0000 UTC m=+0.175769564 container start e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.960308167 +0000 UTC m=+0.179776964 container attach e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:27:24 np0005535469 naughty_lehmann[277309]: 167 167
Nov 25 11:27:24 np0005535469 systemd[1]: libpod-e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2.scope: Deactivated successfully.
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.962839756 +0000 UTC m=+0.182308453 container died e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:27:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6a928615c3f69d9f80e997e222642240b35ff7882a670b333c0c791bf417cfe0-merged.mount: Deactivated successfully.
Nov 25 11:27:24 np0005535469 podman[277292]: 2025-11-25 16:27:24.995317379 +0000 UTC m=+0.214786096 container remove e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:25 np0005535469 systemd[1]: libpod-conmon-e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2.scope: Deactivated successfully.
Nov 25 11:27:25 np0005535469 podman[277333]: 2025-11-25 16:27:25.196176136 +0000 UTC m=+0.071972340 container create 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:27:25 np0005535469 podman[277333]: 2025-11-25 16:27:25.1797866 +0000 UTC m=+0.055582834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:27:25 np0005535469 systemd[1]: Started libpod-conmon-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope.
Nov 25 11:27:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:27:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:25 np0005535469 podman[277333]: 2025-11-25 16:27:25.302205982 +0000 UTC m=+0.178002206 container init 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:27:25 np0005535469 podman[277333]: 2025-11-25 16:27:25.307834945 +0000 UTC m=+0.183631149 container start 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:27:25 np0005535469 podman[277333]: 2025-11-25 16:27:25.310533329 +0000 UTC m=+0.186329553 container attach 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:27:25 np0005535469 nova_compute[254092]: 2025-11-25 16:27:25.393 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Creating config drive at /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config#033[00m
Nov 25 11:27:25 np0005535469 nova_compute[254092]: 2025-11-25 16:27:25.400 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9iqo5hjb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:25 np0005535469 nova_compute[254092]: 2025-11-25 16:27:25.543 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9iqo5hjb" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:25 np0005535469 nova_compute[254092]: 2025-11-25 16:27:25.646 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:25 np0005535469 nova_compute[254092]: 2025-11-25 16:27:25.649 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config c058218b-7732-4ced-b6a3-bb04203967d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.086 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config c058218b-7732-4ced-b6a3-bb04203967d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.088 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deleting local config drive /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config because it was imported into RBD.#033[00m
Nov 25 11:27:26 np0005535469 systemd-machined[216343]: New machine qemu-14-instance-0000000e.
Nov 25 11:27:26 np0005535469 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.176 254096 DEBUG nova.compute.manager [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.176 254096 DEBUG oslo_concurrency.lockutils [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 DEBUG oslo_concurrency.lockutils [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 DEBUG oslo_concurrency.lockutils [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 DEBUG nova.compute.manager [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] No waiting events found dispatching network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 WARNING nova.compute.manager [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received unexpected event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:27:26 np0005535469 unruffled_heyrovsky[277350]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:27:26 np0005535469 unruffled_heyrovsky[277350]: --> relative data size: 1.0
Nov 25 11:27:26 np0005535469 unruffled_heyrovsky[277350]: --> All data devices are unavailable
Nov 25 11:27:26 np0005535469 systemd[1]: libpod-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope: Deactivated successfully.
Nov 25 11:27:26 np0005535469 systemd[1]: libpod-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope: Consumed 1.010s CPU time.
Nov 25 11:27:26 np0005535469 conmon[277350]: conmon 52be2a0773d208fb3080 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope/container/memory.events
Nov 25 11:27:26 np0005535469 podman[277333]: 2025-11-25 16:27:26.414253006 +0000 UTC m=+1.290049210 container died 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.429 254096 DEBUG nova.network.neutron [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e-merged.mount: Deactivated successfully.
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.464 254096 INFO nova.compute.manager [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 2.41 seconds to deallocate network for instance.#033[00m
Nov 25 11:27:26 np0005535469 podman[277333]: 2025-11-25 16:27:26.476365466 +0000 UTC m=+1.352161670 container remove 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.504 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.504 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:26 np0005535469 systemd[1]: libpod-conmon-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope: Deactivated successfully.
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.512 254096 DEBUG nova.compute.manager [req-52d1e66c-1561-4a55-bf6e-96a0474b88d3 req-3a4f7140-adb9-4e8d-a1be-f86090c77745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-deleted-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.601 254096 DEBUG oslo_concurrency.processutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.750 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088046.7499244, c058218b-7732-4ced-b6a3-bb04203967d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.751 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.756 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.757 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.761 254096 INFO nova.virt.libvirt.driver [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance spawned successfully.#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.762 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.790 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.796 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.809 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.809 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.810 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.810 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.811 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.812 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.821 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.822 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088046.755951, c058218b-7732-4ced-b6a3-bb04203967d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.822 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Started (Lifecycle Event)#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.854 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 206 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.0 MiB/s wr, 272 op/s
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.859 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.898 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.915 254096 INFO nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 4.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.916 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:26 np0005535469 nova_compute[254092]: 2025-11-25 16:27:26.981 254096 INFO nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 5.52 seconds to build instance.#033[00m
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.002 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595038871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.112 254096 DEBUG oslo_concurrency.processutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.113252809 +0000 UTC m=+0.041745477 container create 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.120 254096 DEBUG nova.compute.provider_tree [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.142 254096 DEBUG nova.scheduler.client.report [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.161 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:27 np0005535469 systemd[1]: Started libpod-conmon-85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7.scope.
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.093347757 +0000 UTC m=+0.021840465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:27:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.211 254096 INFO nova.scheduler.client.report [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Deleted allocations for instance 138942ff-b720-4101-8dcf-38958751745b#033[00m
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.218708069 +0000 UTC m=+0.147200757 container init 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.226022938 +0000 UTC m=+0.154515606 container start 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 11:27:27 np0005535469 distracted_dirac[277661]: 167 167
Nov 25 11:27:27 np0005535469 systemd[1]: libpod-85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7.scope: Deactivated successfully.
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.250207916 +0000 UTC m=+0.178700594 container attach 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.250899405 +0000 UTC m=+0.179392073 container died 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0d2094a39dfbda8b681b880ee4017f5b653a163c6362aba76a267c57cddcb640-merged.mount: Deactivated successfully.
Nov 25 11:27:27 np0005535469 podman[277643]: 2025-11-25 16:27:27.298069168 +0000 UTC m=+0.226561836 container remove 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:27 np0005535469 systemd[1]: libpod-conmon-85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7.scope: Deactivated successfully.
Nov 25 11:27:27 np0005535469 nova_compute[254092]: 2025-11-25 16:27:27.313 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:27 np0005535469 podman[277685]: 2025-11-25 16:27:27.496972371 +0000 UTC m=+0.060741103 container create 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:27 np0005535469 systemd[1]: Started libpod-conmon-5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689.scope.
Nov 25 11:27:27 np0005535469 podman[277685]: 2025-11-25 16:27:27.478075378 +0000 UTC m=+0.041844110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:27:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:27:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:27 np0005535469 podman[277685]: 2025-11-25 16:27:27.627066712 +0000 UTC m=+0.190835494 container init 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:27:27 np0005535469 podman[277685]: 2025-11-25 16:27:27.637830195 +0000 UTC m=+0.201598937 container start 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 11:27:27 np0005535469 podman[277685]: 2025-11-25 16:27:27.64207312 +0000 UTC m=+0.205841862 container attach 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:27:28 np0005535469 serene_merkle[277701]: {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:    "0": [
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:        {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "devices": [
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "/dev/loop3"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            ],
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_name": "ceph_lv0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_size": "21470642176",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "name": "ceph_lv0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "tags": {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cluster_name": "ceph",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.crush_device_class": "",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.encrypted": "0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osd_id": "0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.type": "block",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.vdo": "0"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            },
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "type": "block",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "vg_name": "ceph_vg0"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:        }
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:    ],
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:    "1": [
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:        {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "devices": [
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "/dev/loop4"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            ],
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_name": "ceph_lv1",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_size": "21470642176",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "name": "ceph_lv1",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "tags": {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cluster_name": "ceph",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.crush_device_class": "",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.encrypted": "0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osd_id": "1",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.type": "block",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.vdo": "0"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            },
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "type": "block",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "vg_name": "ceph_vg1"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:        }
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:    ],
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:    "2": [
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:        {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "devices": [
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "/dev/loop5"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            ],
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_name": "ceph_lv2",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_size": "21470642176",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "name": "ceph_lv2",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "tags": {
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.cluster_name": "ceph",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.crush_device_class": "",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.encrypted": "0",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osd_id": "2",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.type": "block",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:                "ceph.vdo": "0"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            },
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "type": "block",
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:            "vg_name": "ceph_vg2"
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:        }
Nov 25 11:27:28 np0005535469 serene_merkle[277701]:    ]
Nov 25 11:27:28 np0005535469 serene_merkle[277701]: }
Nov 25 11:27:28 np0005535469 systemd[1]: libpod-5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689.scope: Deactivated successfully.
Nov 25 11:27:28 np0005535469 podman[277685]: 2025-11-25 16:27:28.403555544 +0000 UTC m=+0.967324296 container died 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:27:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854-merged.mount: Deactivated successfully.
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:28 np0005535469 podman[277685]: 2025-11-25 16:27:28.449832484 +0000 UTC m=+1.013601216 container remove 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 11:27:28 np0005535469 systemd[1]: libpod-conmon-5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689.scope: Deactivated successfully.
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.471 254096 DEBUG nova.objects.instance [None req-4733b19c-fa6c-4ae0-86c7-09d9ba1fa2e8 c35aca60808846df92d36162b59d2ba0 e0ca320f9ba34efda13ff2001e7d6cdc - - default default] Lazy-loading 'pci_devices' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.489 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088048.487984, c058218b-7732-4ced-b6a3-bb04203967d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.489 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.503 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.506 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.520 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 11:27:28 np0005535469 nova_compute[254092]: 2025-11-25 16:27:28.792 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:27:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 206 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 794 KiB/s rd, 4.1 MiB/s wr, 108 op/s
Nov 25 11:27:28 np0005535469 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 25 11:27:28 np0005535469 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 2.358s CPU time.
Nov 25 11:27:28 np0005535469 systemd-machined[216343]: Machine qemu-14-instance-0000000e terminated.
Nov 25 11:27:29 np0005535469 nova_compute[254092]: 2025-11-25 16:27:29.082 254096 DEBUG nova.compute.manager [None req-4733b19c-fa6c-4ae0-86c7-09d9ba1fa2e8 c35aca60808846df92d36162b59d2ba0 e0ca320f9ba34efda13ff2001e7d6cdc - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.092804471 +0000 UTC m=+0.054191345 container create 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:27:29 np0005535469 systemd[1]: Started libpod-conmon-0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8.scope.
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.068747586 +0000 UTC m=+0.030134490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:27:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.189573976 +0000 UTC m=+0.150960850 container init 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.197165761 +0000 UTC m=+0.158552665 container start 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.202473106 +0000 UTC m=+0.163859970 container attach 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:27:29 np0005535469 awesome_lamarr[277885]: 167 167
Nov 25 11:27:29 np0005535469 systemd[1]: libpod-0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8.scope: Deactivated successfully.
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.206110706 +0000 UTC m=+0.167497570 container died 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:27:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4c74a36c89d0d601c55a7c55070f438a7f662bf51dec763f711a4a8fea7b1461-merged.mount: Deactivated successfully.
Nov 25 11:27:29 np0005535469 podman[277866]: 2025-11-25 16:27:29.248050956 +0000 UTC m=+0.209437820 container remove 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:27:29 np0005535469 systemd[1]: libpod-conmon-0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8.scope: Deactivated successfully.
Nov 25 11:27:29 np0005535469 podman[277910]: 2025-11-25 16:27:29.450226059 +0000 UTC m=+0.057079525 container create 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:27:29 np0005535469 systemd[1]: Started libpod-conmon-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope.
Nov 25 11:27:29 np0005535469 podman[277910]: 2025-11-25 16:27:29.421353422 +0000 UTC m=+0.028206908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:27:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:27:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:27:29 np0005535469 podman[277910]: 2025-11-25 16:27:29.544843713 +0000 UTC m=+0.151697189 container init 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:27:29 np0005535469 podman[277910]: 2025-11-25 16:27:29.553949222 +0000 UTC m=+0.160802688 container start 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:27:29 np0005535469 podman[277910]: 2025-11-25 16:27:29.556768848 +0000 UTC m=+0.163622314 container attach 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:27:30 np0005535469 clever_joliot[277926]: {
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "osd_id": 1,
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "type": "bluestore"
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:    },
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "osd_id": 2,
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "type": "bluestore"
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:    },
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "osd_id": 0,
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:        "type": "bluestore"
Nov 25 11:27:30 np0005535469 clever_joliot[277926]:    }
Nov 25 11:27:30 np0005535469 clever_joliot[277926]: }
Nov 25 11:27:30 np0005535469 systemd[1]: libpod-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope: Deactivated successfully.
Nov 25 11:27:30 np0005535469 podman[277910]: 2025-11-25 16:27:30.574944887 +0000 UTC m=+1.181798353 container died 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:27:30 np0005535469 systemd[1]: libpod-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope: Consumed 1.021s CPU time.
Nov 25 11:27:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83-merged.mount: Deactivated successfully.
Nov 25 11:27:30 np0005535469 podman[277910]: 2025-11-25 16:27:30.64302789 +0000 UTC m=+1.249881356 container remove 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:27:30 np0005535469 systemd[1]: libpod-conmon-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope: Deactivated successfully.
Nov 25 11:27:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:27:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:27:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 574c18aa-dfe6-4b0c-848e-60314bff8d82 does not exist
Nov 25 11:27:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 86b7a3df-3283-4689-8d2a-ed740b75ef7d does not exist
Nov 25 11:27:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 242 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 236 op/s
Nov 25 11:27:31 np0005535469 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 11:27:31 np0005535469 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.213s CPU time.
Nov 25 11:27:31 np0005535469 systemd-machined[216343]: Machine qemu-12-instance-0000000c terminated.
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.641 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "c058218b-7732-4ced-b6a3-bb04203967d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "c058218b-7732-4ced-b6a3-bb04203967d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.643 254096 INFO nova.compute.manager [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Terminating instance#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.644 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "refresh_cache-c058218b-7732-4ced-b6a3-bb04203967d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.644 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquired lock "refresh_cache-c058218b-7732-4ced-b6a3-bb04203967d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.644 254096 DEBUG nova.network.neutron [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:27:31 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:31 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.811 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.816 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.#033[00m
Nov 25 11:27:31 np0005535469 nova_compute[254092]: 2025-11-25 16:27:31.820 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.161 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting instance files /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.162 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deletion of /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del complete#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.308 254096 DEBUG nova.network.neutron [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.523 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.524 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating image(s)#033[00m
Nov 25 11:27:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.555 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.590 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.630 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.634 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:32 np0005535469 nova_compute[254092]: 2025-11-25 16:27:32.635 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 246 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.072 254096 DEBUG nova.virt.libvirt.imagebackend [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/a4aa3708-bb73-4b5a-b3f3-42153358021e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/a4aa3708-bb73-4b5a-b3f3-42153358021e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.209 254096 DEBUG nova.network.neutron [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.230 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Releasing lock "refresh_cache-c058218b-7732-4ced-b6a3-bb04203967d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.230 254096 DEBUG nova.compute.manager [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.237 254096 INFO nova.virt.libvirt.driver [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance destroyed successfully.#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.237 254096 DEBUG nova.objects.instance [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'resources' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.693 254096 INFO nova.virt.libvirt.driver [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deleting instance files /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4_del#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.693 254096 INFO nova.virt.libvirt.driver [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deletion of /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4_del complete#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.885 254096 INFO nova.compute.manager [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.885 254096 DEBUG oslo.service.loopingcall [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.886 254096 DEBUG nova.compute.manager [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:27:33 np0005535469 nova_compute[254092]: 2025-11-25 16:27:33.886 254096 DEBUG nova.network.neutron [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.188 254096 DEBUG nova.network.neutron [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.200 254096 DEBUG nova.network.neutron [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.211 254096 INFO nova.compute.manager [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 0.32 seconds to deallocate network for instance.#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.249 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.250 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.357 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.377 254096 DEBUG oslo_concurrency.processutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.422 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.424 254096 DEBUG nova.virt.images [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] a4aa3708-bb73-4b5a-b3f3-42153358021e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.425 254096 DEBUG nova.privsep.utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.425 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.713 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.717 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.770 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.772 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.788 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.791 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378797721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.817 254096 DEBUG oslo_concurrency.processutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.823 254096 DEBUG nova.compute.provider_tree [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.840 254096 DEBUG nova.scheduler.client.report [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 246 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.867 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.898 254096 INFO nova.scheduler.client.report [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Deleted allocations for instance c058218b-7732-4ced-b6a3-bb04203967d4#033[00m
Nov 25 11:27:34 np0005535469 nova_compute[254092]: 2025-11-25 16:27:34.973 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.103 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.166 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] resizing rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.255 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.255 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ensure instance console log exists: /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.256 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.256 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.256 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.258 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.260 254096 WARNING nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.269 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.270 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.272 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.273 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.273 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.273 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.274 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.274 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.275 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.275 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.275 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.276 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.276 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.276 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.277 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.277 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.277 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.313 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.499 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "38463335-bf41-4609-b81c-08bc8da299af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.499 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.500 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "38463335-bf41-4609-b81c-08bc8da299af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.500 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.500 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.501 254096 INFO nova.compute.manager [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Terminating instance#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.502 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "refresh_cache-38463335-bf41-4609-b81c-08bc8da299af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.502 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquired lock "refresh_cache-38463335-bf41-4609-b81c-08bc8da299af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.502 254096 DEBUG nova.network.neutron [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:27:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791826067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.731 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.762 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.766 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:35 np0005535469 nova_compute[254092]: 2025-11-25 16:27:35.785 254096 DEBUG nova.network.neutron [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:27:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376679774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.211 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.213 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <uuid>5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</uuid>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <name>instance-0000000c</name>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdmin275Test-server-1909956679</nova:name>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:35</nova:creationTime>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:user uuid="f16e272341774153991e7ed856e34188">tempest-ServersAdmin275Test-1636693528-project-member</nova:user>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <nova:project uuid="a9a4c2749782401c89e06948356b0e0a">tempest-ServersAdmin275Test-1636693528</nova:project>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <entry name="serial">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <entry name="uuid">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log" append="off"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:36 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:36 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:36 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:36 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.259 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.259 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.259 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Using config drive#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.281 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.300 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.375 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'keypairs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.519 254096 DEBUG nova.network.neutron [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.770 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Releasing lock "refresh_cache-38463335-bf41-4609-b81c-08bc8da299af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.771 254096 DEBUG nova.compute.manager [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:27:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 326 op/s
Nov 25 11:27:36 np0005535469 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 25 11:27:36 np0005535469 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.091s CPU time.
Nov 25 11:27:36 np0005535469 systemd-machined[216343]: Machine qemu-13-instance-0000000d terminated.
Nov 25 11:27:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 25 11:27:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 11:27:36 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.900 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating config drive at /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.911 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpseyo0wf2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.993 254096 INFO nova.virt.libvirt.driver [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance destroyed successfully.#033[00m
Nov 25 11:27:36 np0005535469 nova_compute[254092]: 2025-11-25 16:27:36.993 254096 DEBUG nova.objects.instance [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'resources' on Instance uuid 38463335-bf41-4609-b81c-08bc8da299af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.057 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpseyo0wf2" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.078 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.081 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.211 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.212 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting local config drive /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config because it was imported into RBD.#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:37 np0005535469 systemd-machined[216343]: New machine qemu-15-instance-0000000c.
Nov 25 11:27:37 np0005535469 systemd[1]: Started Virtual Machine qemu-15-instance-0000000c.
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.379 254096 INFO nova.virt.libvirt.driver [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deleting instance files /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af_del#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.380 254096 INFO nova.virt.libvirt.driver [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deletion of /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af_del complete#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.424 254096 INFO nova.compute.manager [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.425 254096 DEBUG oslo.service.loopingcall [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.425 254096 DEBUG nova.compute.manager [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.426 254096 DEBUG nova.network.neutron [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:27:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.567 254096 DEBUG nova.network.neutron [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.581 254096 DEBUG nova.network.neutron [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.592 254096 INFO nova.compute.manager [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 0.17 seconds to deallocate network for instance.#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.641 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.642 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.727 254096 DEBUG oslo_concurrency.processutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.849 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.849 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088057.848299, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.850 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.854 254096 DEBUG nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.854 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.859 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance spawned successfully.#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.860 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.875 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.883 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.889 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.890 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.890 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.891 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.891 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.892 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.914 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.914 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088057.8529596, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.914 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Started (Lifecycle Event)#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.933 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.936 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.974 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:27:37 np0005535469 nova_compute[254092]: 2025-11-25 16:27:37.983 254096 DEBUG nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.052 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062311794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.168 254096 DEBUG oslo_concurrency.processutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.172 254096 DEBUG nova.compute.provider_tree [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.189 254096 DEBUG nova.scheduler.client.report [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.204 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.206 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.207 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.236 254096 INFO nova.scheduler.client.report [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Deleted allocations for instance 38463335-bf41-4609-b81c-08bc8da299af#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.269 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.312 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.400 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088043.3983412, 138942ff-b720-4101-8dcf-38958751745b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.400 254096 INFO nova.compute.manager [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.419 254096 DEBUG nova.compute.manager [None req-433e2c21-1419-4e89-b385-481cde54bb28 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:38 np0005535469 nova_compute[254092]: 2025-11-25 16:27:38.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 289 op/s
Nov 25 11:27:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 25 11:27:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 11:27:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.030 254096 INFO nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Rebuilding instance#033[00m
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:27:40
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'images', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log']
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.665 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.683 254096 DEBUG nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.725 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'pci_requests' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.735 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'pci_devices' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.746 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'resources' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.753 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'migration_context' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.773 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:27:40 np0005535469 nova_compute[254092]: 2025-11-25 16:27:40.777 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:27:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 102 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.7 MiB/s wr, 292 op/s
Nov 25 11:27:42 np0005535469 nova_compute[254092]: 2025-11-25 16:27:42.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.548097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062548132, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1393, "num_deletes": 507, "total_data_size": 1527975, "memory_usage": 1563504, "flush_reason": "Manual Compaction"}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062556683, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1318115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23221, "largest_seqno": 24613, "table_properties": {"data_size": 1312367, "index_size": 2503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16401, "raw_average_key_size": 19, "raw_value_size": 1298370, "raw_average_value_size": 1520, "num_data_blocks": 112, "num_entries": 854, "num_filter_entries": 854, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087973, "oldest_key_time": 1764087973, "file_creation_time": 1764088062, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8635 microseconds, and 4440 cpu microseconds.
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.556728) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1318115 bytes OK
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.556749) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558101) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558115) EVENT_LOG_v1 {"time_micros": 1764088062558110, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1520609, prev total WAL file size 1520609, number of live WAL files 2.
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558866) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1287KB)], [53(9154KB)]
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062558906, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10692781, "oldest_snapshot_seqno": -1}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4725 keys, 7478442 bytes, temperature: kUnknown
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062610701, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7478442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7446774, "index_size": 18758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118648, "raw_average_key_size": 25, "raw_value_size": 7361270, "raw_average_value_size": 1557, "num_data_blocks": 779, "num_entries": 4725, "num_filter_entries": 4725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088062, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.610980) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7478442 bytes
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.612739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.0 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(13.8) write-amplify(5.7) OK, records in: 5743, records dropped: 1018 output_compression: NoCompression
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.612767) EVENT_LOG_v1 {"time_micros": 1764088062612754, "job": 28, "event": "compaction_finished", "compaction_time_micros": 51905, "compaction_time_cpu_micros": 20620, "output_level": 6, "num_output_files": 1, "total_output_size": 7478442, "num_input_records": 5743, "num_output_records": 4725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062613172, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062615029, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:27:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:27:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 321 op/s
Nov 25 11:27:43 np0005535469 nova_compute[254092]: 2025-11-25 16:27:43.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:44 np0005535469 nova_compute[254092]: 2025-11-25 16:27:44.085 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088049.0827372, c058218b-7732-4ced-b6a3-bb04203967d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:44 np0005535469 nova_compute[254092]: 2025-11-25 16:27:44.085 254096 INFO nova.compute.manager [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:27:44 np0005535469 nova_compute[254092]: 2025-11-25 16:27:44.105 254096 DEBUG nova.compute.manager [None req-2792c046-fba4-4e57-9936-5be4edc4fec1 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 25 11:27:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 11:27:44 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 25 11:27:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 210 op/s
Nov 25 11:27:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 229 op/s
Nov 25 11:27:47 np0005535469 nova_compute[254092]: 2025-11-25 16:27:47.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.726 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "12043b7f-9853-45a8-b963-ae96713754b4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.727 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.755 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.846 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.846 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.854 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:27:48 np0005535469 nova_compute[254092]: 2025-11-25 16:27:48.855 254096 INFO nova.compute.claims [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:27:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 3.3 KiB/s wr, 43 op/s
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.007 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001850020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.429 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.435 254096 DEBUG nova.compute.provider_tree [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.456 254096 DEBUG nova.scheduler.client.report [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.497 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.498 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.567 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.568 254096 DEBUG nova.network.neutron [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.597 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.618 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:27:49 np0005535469 podman[278509]: 2025-11-25 16:27:49.650534755 +0000 UTC m=+0.065728378 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:27:49 np0005535469 podman[278510]: 2025-11-25 16:27:49.670866618 +0000 UTC m=+0.082812233 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:27:49 np0005535469 podman[278516]: 2025-11-25 16:27:49.689483204 +0000 UTC m=+0.080620003 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.707 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.708 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.709 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Creating image(s)#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.726 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.743 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.759 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.762 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.814 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.817 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.832 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:49 np0005535469 nova_compute[254092]: 2025-11-25 16:27:49.835 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12043b7f-9853-45a8-b963-ae96713754b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.007 254096 DEBUG nova.network.neutron [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.007 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.245 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12043b7f-9853-45a8-b963-ae96713754b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.300 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] resizing rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.376 254096 DEBUG nova.objects.instance [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'migration_context' on Instance uuid 12043b7f-9853-45a8-b963-ae96713754b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.396 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.396 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Ensure instance console log exists: /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.397 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.398 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.398 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.401 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.407 254096 WARNING nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.421 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.422 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.425 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.426 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.426 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.426 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.429 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.431 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.537 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "2174ef15-55fa-4734-8cc2-89064853919b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.537 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.543 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.544 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.545 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.567 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.694 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.695 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.701 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.701 254096 INFO nova.compute.claims [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.848 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665008175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 104 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 499 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.868 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.870 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.890 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.894 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960485916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:50 np0005535469 nova_compute[254092]: 2025-11-25 16:27:50.961 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.023 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.023 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000645763637917562 of space, bias 1.0, pg target 0.19372909137526861 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668757529139806 of space, bias 1.0, pg target 0.20006272587419416 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.160 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.161 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4445MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.161 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831315958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059842282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.332 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.333 254096 DEBUG nova.objects.instance [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12043b7f-9853-45a8-b963-ae96713754b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.337 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.342 254096 DEBUG nova.compute.provider_tree [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.358 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <uuid>12043b7f-9853-45a8-b963-ae96713754b4</uuid>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <name>instance-0000000f</name>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListImageFiltersTestJSON-server-842538226</nova:name>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:50</nova:creationTime>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:user uuid="435d184bc01d4b1b878995bce4319f96">tempest-ListImageFiltersTestJSON-1919749506-project-member</nova:user>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <nova:project uuid="b791fb0a74ad43e9b9270c33338d5556">tempest-ListImageFiltersTestJSON-1919749506</nova:project>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <entry name="serial">12043b7f-9853-45a8-b963-ae96713754b4</entry>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <entry name="uuid">12043b7f-9853-45a8-b963-ae96713754b4</entry>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/12043b7f-9853-45a8-b963-ae96713754b4_disk">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/12043b7f-9853-45a8-b963-ae96713754b4_disk.config">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/console.log" append="off"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:51 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:51 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:51 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:51 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.365 254096 DEBUG nova.scheduler.client.report [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.411 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.412 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.414 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.446 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.446 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.447 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Using config drive
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.463 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.471 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.472 254096 DEBUG nova.network.neutron [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.510 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.527 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 12043b7f-9853-45a8-b963-ae96713754b4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2174ef15-55fa-4734-8cc2-89064853919b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.625 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.645 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.647 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.647 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Creating image(s)
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.667 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.687 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.707 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.710 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.770 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.771 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.771 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.772 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.796 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.800 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2174ef15-55fa-4734-8cc2-89064853919b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.993 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088056.9909072, 38463335-bf41-4609-b81c-08bc8da299af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:27:51 np0005535469 nova_compute[254092]: 2025-11-25 16:27:51.993 254096 INFO nova.compute.manager [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] VM Stopped (Lifecycle Event)
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.035 254096 DEBUG nova.compute.manager [None req-975bc03c-ea7a-4416-bc0c-14cb8d121734 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:27:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:27:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128952328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.059 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.063 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2174ef15-55fa-4734-8cc2-89064853919b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.091 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.121 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.128 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] resizing rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.155 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.211 254096 DEBUG nova.objects.instance [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'migration_context' on Instance uuid 2174ef15-55fa-4734-8cc2-89064853919b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.223 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.223 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Ensure instance console log exists: /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.224 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.224 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.224 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.257 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Creating config drive at /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.261 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2jw8rai execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.384 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2jw8rai" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.408 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.411 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config 12043b7f-9853-45a8-b963-ae96713754b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.431 254096 DEBUG nova.network.neutron [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.432 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.433 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.438 254096 WARNING nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.443 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.443 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.446 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.446 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.447 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.447 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.447 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.450 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.453 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.544 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config 12043b7f-9853-45a8-b963-ae96713754b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.545 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deleting local config drive /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config because it was imported into RBD.#033[00m
Nov 25 11:27:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:27:52 np0005535469 systemd-machined[216343]: New machine qemu-16-instance-0000000f.
Nov 25 11:27:52 np0005535469 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 25 11:27:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 135 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Nov 25 11:27:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489301489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.901 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.924 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:52 np0005535469 nova_compute[254092]: 2025-11-25 16:27:52.928 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:53 np0005535469 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 11:27:53 np0005535469 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000c.scope: Consumed 12.603s CPU time.
Nov 25 11:27:53 np0005535469 systemd-machined[216343]: Machine qemu-15-instance-0000000c terminated.
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.151 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088073.1514132, 12043b7f-9853-45a8-b963-ae96713754b4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.152 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.155 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.155 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.158 254096 INFO nova.virt.libvirt.driver [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance spawned successfully.#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.158 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.172 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.176 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.184 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.184 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.184 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.185 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.185 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.185 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088073.1544726, 12043b7f-9853-45a8-b963-ae96713754b4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] VM Started (Lifecycle Event)#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.230 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.249 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.287 254096 INFO nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 3.58 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.287 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607220910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.362 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.363 254096 DEBUG nova.objects.instance [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2174ef15-55fa-4734-8cc2-89064853919b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.377 254096 INFO nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 4.58 seconds to build instance.#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.385 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <uuid>2174ef15-55fa-4734-8cc2-89064853919b</uuid>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <name>instance-00000010</name>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1252766420</nova:name>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:52</nova:creationTime>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:user uuid="435d184bc01d4b1b878995bce4319f96">tempest-ListImageFiltersTestJSON-1919749506-project-member</nova:user>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <nova:project uuid="b791fb0a74ad43e9b9270c33338d5556">tempest-ListImageFiltersTestJSON-1919749506</nova:project>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <entry name="serial">2174ef15-55fa-4734-8cc2-89064853919b</entry>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <entry name="uuid">2174ef15-55fa-4734-8cc2-89064853919b</entry>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2174ef15-55fa-4734-8cc2-89064853919b_disk">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2174ef15-55fa-4734-8cc2-89064853919b_disk.config">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/console.log" append="off"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:53 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:53 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:53 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:53 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.431 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.471 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.472 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.473 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Using config drive#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.503 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.508 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.794 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Creating config drive at /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.820 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpruwhl31l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.945 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpruwhl31l" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.972 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:53 np0005535469 nova_compute[254092]: 2025-11-25 16:27:53.975 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config 2174ef15-55fa-4734-8cc2-89064853919b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.002 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.008 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.012 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.174 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config 2174ef15-55fa-4734-8cc2-89064853919b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.174 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deleting local config drive /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:27:54 np0005535469 systemd-machined[216343]: New machine qemu-17-instance-00000010.
Nov 25 11:27:54 np0005535469 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.397 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting instance files /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.398 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deletion of /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del complete#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.668 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.669 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating image(s)#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.688 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.708 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.727 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.730 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.766 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088074.7655294, 2174ef15-55fa-4734-8cc2-89064853919b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.778 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.778 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.781 254096 INFO nova.virt.libvirt.driver [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance spawned successfully.#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.782 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.786 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.787 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.788 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.788 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.805 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.808 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.839 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.841 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.842 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.842 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.842 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.843 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.843 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.848 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 135 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 3.5 MiB/s wr, 91 op/s
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.876 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.877 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088074.7779467, 2174ef15-55fa-4734-8cc2-89064853919b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.877 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] VM Started (Lifecycle Event)#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.900 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.902 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.923 254096 INFO nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 3.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.924 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:27:54 np0005535469 nova_compute[254092]: 2025-11-25 16:27:54.926 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.030 254096 INFO nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 4.37 seconds to build instance.#033[00m
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.056 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.164 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:27:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:27:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345017531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:27:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:27:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345017531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.224 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] resizing rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.315 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.316 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ensure instance console log exists: /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.316 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.317 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.317 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.319 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.324 254096 WARNING nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.332 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.333 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.336 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.337 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.337 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.338 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.338 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.338 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.356 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.516 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.517 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.798 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:27:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177493047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.880 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.904 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:55 np0005535469 nova_compute[254092]: 2025-11-25 16:27:55.911 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.256 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.278 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.278 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:27:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:27:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229204161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.395 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.398 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <uuid>5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</uuid>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <name>instance-0000000c</name>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdmin275Test-server-1909956679</nova:name>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:27:55</nova:creationTime>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:user uuid="f16e272341774153991e7ed856e34188">tempest-ServersAdmin275Test-1636693528-project-member</nova:user>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <nova:project uuid="a9a4c2749782401c89e06948356b0e0a">tempest-ServersAdmin275Test-1636693528</nova:project>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <entry name="serial">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <entry name="uuid">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log" append="off"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:27:56 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:27:56 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:27:56 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:27:56 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.462 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.462 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.463 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Using config drive
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.487 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.506 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.543 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'keypairs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:27:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 148 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 MiB/s wr, 268 op/s
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.921 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating config drive at /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config
Nov 25 11:27:56 np0005535469 nova_compute[254092]: 2025-11-25 16:27:56.927 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv21jnxz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.055 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv21jnxz" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.077 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.080 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.152 254096 DEBUG nova.compute.manager [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.199 254096 INFO nova.compute.manager [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] instance snapshotting
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.212 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.213 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting local config drive /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config because it was imported into RBD.
Nov 25 11:27:57 np0005535469 systemd-machined[216343]: New machine qemu-18-instance-0000000c.
Nov 25 11:27:57 np0005535469 systemd[1]: Started Virtual Machine qemu-18-instance-0000000c.
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.705 254096 INFO nova.virt.libvirt.driver [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Beginning live snapshot process
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.871 254096 DEBUG nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.872 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.872 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.873 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088077.8086817, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.873 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Resumed (Lifecycle Event)
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.881 254096 DEBUG nova.virt.libvirt.imagebackend [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.885 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance spawned successfully.
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.886 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.909 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.920 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.923 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.923 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.923 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.924 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.925 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.926 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.947 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.949 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088077.8087618, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.949 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Started (Lifecycle Event)
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.973 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:27:57 np0005535469 nova_compute[254092]: 2025-11-25 16:27:57.978 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:28:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.001 254096 DEBUG nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.002 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.089 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.091 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.092 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.136 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(8d2851002d5d4f4ab95baf095588850d) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.194 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 25 11:27:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 11:27:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.326 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] cloning vms/12043b7f-9853-45a8-b963-ae96713754b4_disk@8d2851002d5d4f4ab95baf095588850d to images/83bd8229-570c-4485-b723-55b6d19bebf1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.449 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] flattening images/83bd8229-570c-4485-b723-55b6d19bebf1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:28:58.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.505 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:27:58 np0005535469 nova_compute[254092]: 2025-11-25 16:27:58.786 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] removing snapshot(8d2851002d5d4f4ab95baf095588850d) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 11:27:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 148 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 8.0 MiB/s wr, 306 op/s
Nov 25 11:27:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 25 11:27:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 11:27:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.299 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(snap) on rbd image(83bd8229-570c-4485-b723-55b6d19bebf1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.414 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.414 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.415 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.415 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.416 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.418 254096 INFO nova.compute.manager [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Terminating instance
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.419 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.419 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquired lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.420 254096 DEBUG nova.network.neutron [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:27:59 np0005535469 nova_compute[254092]: 2025-11-25 16:27:59.636 254096 DEBUG nova.network.neutron [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.241 254096 DEBUG nova.network.neutron [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:28:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.258 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Releasing lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.258 254096 DEBUG nova.compute.manager [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:28:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 11:28:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 25 11:28:00 np0005535469 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 11:28:00 np0005535469 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000c.scope: Consumed 2.937s CPU time.
Nov 25 11:28:00 np0005535469 systemd-machined[216343]: Machine qemu-18-instance-0000000c terminated.
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.490 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.490 254096 DEBUG nova.objects.instance [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'resources' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:28:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 209 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 642 op/s
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.973 254096 INFO nova.virt.libvirt.driver [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting instance files /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del
Nov 25 11:28:00 np0005535469 nova_compute[254092]: 2025-11-25 16:28:00.974 254096 INFO nova.virt.libvirt.driver [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deletion of /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del complete
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.040 254096 INFO nova.compute.manager [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.041 254096 DEBUG oslo.service.loopingcall [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.041 254096 DEBUG nova.compute.manager [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.042 254096 DEBUG nova.network.neutron [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.237 254096 DEBUG nova.network.neutron [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.256 254096 DEBUG nova.network.neutron [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.284 254096 INFO nova.compute.manager [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 0.24 seconds to deallocate network for instance.
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.343 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.344 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.431 254096 DEBUG oslo_concurrency.processutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548129775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.865 254096 DEBUG oslo_concurrency.processutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.870 254096 DEBUG nova.compute.provider_tree [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.886 254096 DEBUG nova.scheduler.client.report [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.910 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:01 np0005535469 nova_compute[254092]: 2025-11-25 16:28:01.953 254096 INFO nova.scheduler.client.report [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Deleted allocations for instance 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2
Nov 25 11:28:02 np0005535469 nova_compute[254092]: 2025-11-25 16:28:02.023 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:02 np0005535469 nova_compute[254092]: 2025-11-25 16:28:02.071 254096 INFO nova.virt.libvirt.driver [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Snapshot image upload complete
Nov 25 11:28:02 np0005535469 nova_compute[254092]: 2025-11-25 16:28:02.071 254096 INFO nova.compute.manager [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 4.87 seconds to snapshot the instance on the hypervisor.
Nov 25 11:28:02 np0005535469 nova_compute[254092]: 2025-11-25 16:28:02.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:28:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 213 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 5.2 MiB/s wr, 395 op/s
Nov 25 11:28:03 np0005535469 nova_compute[254092]: 2025-11-25 16:28:03.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:04 np0005535469 nova_compute[254092]: 2025-11-25 16:28:04.508 254096 DEBUG nova.compute.manager [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:04 np0005535469 nova_compute[254092]: 2025-11-25 16:28:04.570 254096 INFO nova.compute.manager [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] instance snapshotting#033[00m
Nov 25 11:28:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:04Z|00054|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 11:28:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 213 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.7 MiB/s wr, 357 op/s
Nov 25 11:28:05 np0005535469 nova_compute[254092]: 2025-11-25 16:28:05.266 254096 INFO nova.virt.libvirt.driver [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Beginning live snapshot process#033[00m
Nov 25 11:28:05 np0005535469 nova_compute[254092]: 2025-11-25 16:28:05.403 254096 DEBUG nova.virt.libvirt.imagebackend [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:28:05 np0005535469 nova_compute[254092]: 2025-11-25 16:28:05.747 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(12d7e5362aa64c338d0fcf8b1fbee135) on rbd image(2174ef15-55fa-4734-8cc2-89064853919b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:28:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 25 11:28:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 11:28:06 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 25 11:28:06 np0005535469 nova_compute[254092]: 2025-11-25 16:28:06.136 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] cloning vms/2174ef15-55fa-4734-8cc2-89064853919b_disk@12d7e5362aa64c338d0fcf8b1fbee135 to images/55734109-7b58-4129-b190-ab05588f6d0a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:28:06 np0005535469 nova_compute[254092]: 2025-11-25 16:28:06.307 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] flattening images/55734109-7b58-4129-b190-ab05588f6d0a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:28:06 np0005535469 nova_compute[254092]: 2025-11-25 16:28:06.648 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] removing snapshot(12d7e5362aa64c338d0fcf8b1fbee135) on rbd image(2174ef15-55fa-4734-8cc2-89064853919b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:28:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 213 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.5 MiB/s wr, 465 op/s
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 25 11:28:07 np0005535469 nova_compute[254092]: 2025-11-25 16:28:07.100 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(snap) on rbd image(55734109-7b58-4129-b190-ab05588f6d0a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:28:07 np0005535469 nova_compute[254092]: 2025-11-25 16:28:07.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 25 11:28:07 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 25 11:28:08 np0005535469 nova_compute[254092]: 2025-11-25 16:28:08.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 213 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 621 KiB/s rd, 4.3 MiB/s wr, 195 op/s
Nov 25 11:28:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:08.994 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:28:08 np0005535469 nova_compute[254092]: 2025-11-25 16:28:08.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:08.995 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:28:09 np0005535469 nova_compute[254092]: 2025-11-25 16:28:09.650 254096 INFO nova.virt.libvirt.driver [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Snapshot image upload complete#033[00m
Nov 25 11:28:09 np0005535469 nova_compute[254092]: 2025-11-25 16:28:09.651 254096 INFO nova.compute.manager [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 5.08 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:28:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 258 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 8.9 MiB/s wr, 414 op/s
Nov 25 11:28:12 np0005535469 nova_compute[254092]: 2025-11-25 16:28:12.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:12 np0005535469 nova_compute[254092]: 2025-11-25 16:28:12.584 254096 DEBUG nova.compute.manager [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:12 np0005535469 nova_compute[254092]: 2025-11-25 16:28:12.627 254096 INFO nova.compute.manager [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] instance snapshotting#033[00m
Nov 25 11:28:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 292 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.9 MiB/s wr, 250 op/s
Nov 25 11:28:13 np0005535469 nova_compute[254092]: 2025-11-25 16:28:13.063 254096 INFO nova.virt.libvirt.driver [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Beginning live snapshot process#033[00m
Nov 25 11:28:13 np0005535469 nova_compute[254092]: 2025-11-25 16:28:13.407 254096 DEBUG nova.virt.libvirt.imagebackend [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:28:13 np0005535469 nova_compute[254092]: 2025-11-25 16:28:13.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:13.600 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:13 np0005535469 nova_compute[254092]: 2025-11-25 16:28:13.783 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(0bd486026d0d4f82b447ccc4811ee656) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:28:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 25 11:28:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 25 11:28:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 25 11:28:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 292 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.0 MiB/s wr, 219 op/s
Nov 25 11:28:14 np0005535469 nova_compute[254092]: 2025-11-25 16:28:14.970 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] cloning vms/12043b7f-9853-45a8-b963-ae96713754b4_disk@0bd486026d0d4f82b447ccc4811ee656 to images/41202dc9-0782-4f14-be74-5217dd621459 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:28:15 np0005535469 nova_compute[254092]: 2025-11-25 16:28:15.479 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088080.478113, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:15 np0005535469 nova_compute[254092]: 2025-11-25 16:28:15.480 254096 INFO nova.compute.manager [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:28:15 np0005535469 nova_compute[254092]: 2025-11-25 16:28:15.502 254096 DEBUG nova.compute.manager [None req-eccaf6b2-7612-4009-8e39-ce5cbefb96f2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:15 np0005535469 nova_compute[254092]: 2025-11-25 16:28:15.767 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] flattening images/41202dc9-0782-4f14-be74-5217dd621459 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:28:16 np0005535469 nova_compute[254092]: 2025-11-25 16:28:16.143 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] removing snapshot(0bd486026d0d4f82b447ccc4811ee656) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:28:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 25 11:28:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 25 11:28:16 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 25 11:28:16 np0005535469 nova_compute[254092]: 2025-11-25 16:28:16.698 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(snap) on rbd image(41202dc9-0782-4f14-be74-5217dd621459) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:28:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 300 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.6 MiB/s wr, 238 op/s
Nov 25 11:28:17 np0005535469 nova_compute[254092]: 2025-11-25 16:28:17.358 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 25 11:28:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 25 11:28:17 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 25 11:28:18 np0005535469 nova_compute[254092]: 2025-11-25 16:28:18.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 300 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 958 KiB/s wr, 33 op/s
Nov 25 11:28:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:18.997 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:20 np0005535469 nova_compute[254092]: 2025-11-25 16:28:20.490 254096 INFO nova.virt.libvirt.driver [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Snapshot image upload complete#033[00m
Nov 25 11:28:20 np0005535469 nova_compute[254092]: 2025-11-25 16:28:20.491 254096 INFO nova.compute.manager [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 7.86 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:28:20 np0005535469 podman[280160]: 2025-11-25 16:28:20.680072068 +0000 UTC m=+0.084574371 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:28:20 np0005535469 podman[280161]: 2025-11-25 16:28:20.693410001 +0000 UTC m=+0.094397549 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 11:28:20 np0005535469 podman[280162]: 2025-11-25 16:28:20.716983711 +0000 UTC m=+0.122641356 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:28:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 330 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 3.3 MiB/s wr, 109 op/s
Nov 25 11:28:22 np0005535469 nova_compute[254092]: 2025-11-25 16:28:22.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Nov 25 11:28:23 np0005535469 nova_compute[254092]: 2025-11-25 16:28:23.585 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.0 MiB/s wr, 102 op/s
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.332 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.332 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.348 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.428 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.429 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.436 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.437 254096 INFO nova.compute.claims [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:28:25 np0005535469 nova_compute[254092]: 2025-11-25 16:28:25.602 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209411780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.066 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.076 254096 DEBUG nova.compute.provider_tree [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.107 254096 DEBUG nova.scheduler.client.report [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.126 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.127 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.180 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.181 254096 DEBUG nova.network.neutron [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.198 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.220 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.337 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.340 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.341 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Creating image(s)#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.366 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.393 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.418 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.422 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.498 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.499 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.499 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.500 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.516 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.520 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.811 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.862 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] resizing rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:28:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 84 op/s
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.937 254096 DEBUG nova.objects.instance [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lazy-loading 'migration_context' on Instance uuid d5372b02-1d93-4354-8f5c-c4228e8d3ec4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.948 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.948 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Ensure instance console log exists: /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.949 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.949 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:26 np0005535469 nova_compute[254092]: 2025-11-25 16:28:26.949 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.031 254096 DEBUG nova.network.neutron [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.032 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.035 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.041 254096 WARNING nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.046 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.047 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.050 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.051 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.052 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.052 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.053 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.054 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.054 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.055 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.055 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.056 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.056 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.056 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.057 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.057 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.062 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.362 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150345003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.519 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.546 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.549 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039421316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.967 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.970 254096 DEBUG nova.objects.instance [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5372b02-1d93-4354-8f5c-c4228e8d3ec4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:27 np0005535469 nova_compute[254092]: 2025-11-25 16:28:27.995 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <uuid>d5372b02-1d93-4354-8f5c-c4228e8d3ec4</uuid>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <name>instance-00000011</name>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiagnosticsTest-server-1671210126</nova:name>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:28:27</nova:creationTime>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:user uuid="e3ab5ee2932743f4acd4c73f7a5aa7d3">tempest-ServerDiagnosticsTest-1741880264-project-member</nova:user>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <nova:project uuid="3c8a4dd83dcd4ef1a79608794e3620d2">tempest-ServerDiagnosticsTest-1741880264</nova:project>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <entry name="serial">d5372b02-1d93-4354-8f5c-c4228e8d3ec4</entry>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <entry name="uuid">d5372b02-1d93-4354-8f5c-c4228e8d3ec4</entry>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/console.log" append="off"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:28:27 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:27 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:28:28 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:28:28 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:28:28 np0005535469 nova_compute[254092]: </domain>
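The domain XML dumped above can be inspected programmatically, which is handy when correlating `<target dev=...>` entries with the "No BDM found with device name vda/sda" messages that follow. A minimal sketch using Python's stdlib `ElementTree`; the embedded excerpt is a hypothetical literal copied from the logged `<devices>` section, and `rbd_sources` is an illustrative helper, not a Nova API:

```python
import xml.etree.ElementTree as ET

# Hypothetical excerpt mirroring the <devices> section logged above.
DOMAIN_XML = """
<domain>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def rbd_sources(xml_text):
    """Return (guest target dev, RBD image name) pairs for network disks."""
    root = ET.fromstring(xml_text)
    pairs = []
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            pairs.append((tgt.get("dev"), src.get("name")))
    return pairs

print(rbd_sources(DOMAIN_XML))
```

This recovers both RBD-backed devices (the `vda` root disk in the `vms` pool and the `sda` config-drive CD-ROM) without shelling out to `virsh dumpxml`.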
Nov 25 11:28:28 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.051 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.052 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.052 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Using config drive#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.104 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.686 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Creating config drive at /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.691 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydrrzxyy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.838 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydrrzxyy" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.861 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:28 np0005535469 nova_compute[254092]: 2025-11-25 16:28:28.864 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 84 op/s
Nov 25 11:28:29 np0005535469 nova_compute[254092]: 2025-11-25 16:28:29.407 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:29 np0005535469 nova_compute[254092]: 2025-11-25 16:28:29.407 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deleting local config drive /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config because it was imported into RBD.#033[00m
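The config-drive sequence logged above is: build an ISO9660 image locally with `mkisofs`, import it into the Ceph `vms` pool with `rbd import`, then delete the local file. A sketch of the two argv lists as the log shows them; `config_drive_cmds` is a hypothetical helper, and the `-publisher`/`-quiet` flags from the logged invocation are omitted for brevity:

```python
def config_drive_cmds(instance_uuid, pool="vms",
                      base="/var/lib/nova/instances"):
    """Build the two commands the log shows for config-drive creation.

    Hypothetical helper: argument order mirrors the logged invocations.
    """
    iso = f"{base}/{instance_uuid}/disk.config"
    # Step 1: pack the metadata directory into a Joliet/Rock Ridge ISO
    # labelled "config-2" (the label cloud-init looks for).
    mkisofs = ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
               "-allow-multidot", "-l", "-J", "-r", "-V", "config-2"]
    # Step 2: import the ISO as an RBD image named <uuid>_disk.config,
    # matching the cdrom <source> in the domain XML.
    rbd_import = ["rbd", "import", "--pool", pool, iso,
                  f"{instance_uuid}_disk.config", "--image-format=2",
                  "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    return mkisofs, rbd_import

mk, ri = config_drive_cmds("d5372b02-1d93-4354-8f5c-c4228e8d3ec4")
```

Once the import succeeds the local `disk.config` is redundant, which is why the driver deletes it immediately afterwards.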
Nov 25 11:28:29 np0005535469 systemd-machined[216343]: New machine qemu-19-instance-00000011.
Nov 25 11:28:29 np0005535469 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.045 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088110.0448537, d5372b02-1d93-4354-8f5c-c4228e8d3ec4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.046 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.049 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.049 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.053 254096 INFO nova.virt.libvirt.driver [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance spawned successfully.#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.054 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.090 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.090 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.091 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.091 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.092 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.092 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.095 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.132 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.133 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088110.0457306, d5372b02-1d93-4354-8f5c-c4228e8d3ec4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.133 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] VM Started (Lifecycle Event)#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.149 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.152 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.186 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
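The "current DB power_state: 0, VM power_state: 1" comparison in the sync lines above uses Nova's integer power-state constants. A sketch of the mapping; the values are an assumption based on `nova/compute/power_state.py` and `describe_sync` is an illustrative helper, not Nova code:

```python
# Assumed constant values from nova/compute/power_state.py.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def describe_sync(db_state, vm_state):
    """Render the comparison a sync_power_state log line performs."""
    return (POWER_STATES.get(db_state, "?"), POWER_STATES.get(vm_state, "?"))

print(describe_sync(0, 1))
```

Here the DB still records NOSTATE (the row has not been updated yet) while the hypervisor already reports RUNNING; because the instance has a pending `spawning` task, the sync is skipped rather than forcing the states to agree.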
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.222 254096 INFO nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 3.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.222 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.285 254096 INFO nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 4.89 seconds to build instance.#033[00m
Nov 25 11:28:30 np0005535469 nova_compute[254092]: 2025-11-25 16:28:30.319 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 402 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:28:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 61ccdb25-ef7a-4f0b-9b60-e1f2fad8730c does not exist
Nov 25 11:28:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 22b699b9-80aa-43a8-9872-d83cdbe2fe10 does not exist
Nov 25 11:28:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1ca48de6-a556-4f13-a72d-813baf0b05d1 does not exist
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:28:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:28:31 np0005535469 nova_compute[254092]: 2025-11-25 16:28:31.635 254096 DEBUG nova.compute.manager [None req-f60a9db8-2af0-4352-b8fe-ea537001bf50 fc63e752c50f4250ae6c27d066bc5d5d 2d249d5c3f4a443d9512925398689e22 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:31 np0005535469 nova_compute[254092]: 2025-11-25 16:28:31.638 254096 INFO nova.compute.manager [None req-f60a9db8-2af0-4352-b8fe-ea537001bf50 fc63e752c50f4250ae6c27d066bc5d5d 2d249d5c3f4a443d9512925398689e22 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Retrieving diagnostics#033[00m
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:28:32 np0005535469 podman[280860]: 2025-11-25 16:28:32.131267279 +0000 UTC m=+0.022775270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.236 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.237 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.237 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.238 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.238 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.239 254096 INFO nova.compute.manager [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Terminating instance#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.240 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "refresh_cache-d5372b02-1d93-4354-8f5c-c4228e8d3ec4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.240 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquired lock "refresh_cache-d5372b02-1d93-4354-8f5c-c4228e8d3ec4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.240 254096 DEBUG nova.network.neutron [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:28:32 np0005535469 podman[280860]: 2025-11-25 16:28:32.311554662 +0000 UTC m=+0.203062633 container create bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:32 np0005535469 systemd[1]: Started libpod-conmon-bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f.scope.
Nov 25 11:28:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 25 11:28:32 np0005535469 podman[280860]: 2025-11-25 16:28:32.583166179 +0000 UTC m=+0.474674180 container init bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:28:32 np0005535469 podman[280860]: 2025-11-25 16:28:32.594738123 +0000 UTC m=+0.486246094 container start bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:28:32 np0005535469 amazing_mahavira[280877]: 167 167
Nov 25 11:28:32 np0005535469 systemd[1]: libpod-bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f.scope: Deactivated successfully.
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 25 11:28:32 np0005535469 podman[280860]: 2025-11-25 16:28:32.638239917 +0000 UTC m=+0.529747908 container attach bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:28:32 np0005535469 podman[280860]: 2025-11-25 16:28:32.640157158 +0000 UTC m=+0.531665129 container died bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:28:32 np0005535469 nova_compute[254092]: 2025-11-25 16:28:32.640 254096 DEBUG nova.network.neutron [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 418 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.7 MiB/s wr, 68 op/s
Nov 25 11:28:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9c1592cde8910527eeffe88100e252768d08b73e864999e837b827ed08dbbef4-merged.mount: Deactivated successfully.
Nov 25 11:28:33 np0005535469 nova_compute[254092]: 2025-11-25 16:28:33.019 254096 DEBUG nova.network.neutron [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:33 np0005535469 nova_compute[254092]: 2025-11-25 16:28:33.031 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Releasing lock "refresh_cache-d5372b02-1d93-4354-8f5c-c4228e8d3ec4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:28:33 np0005535469 nova_compute[254092]: 2025-11-25 16:28:33.032 254096 DEBUG nova.compute.manager [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:28:33 np0005535469 podman[280860]: 2025-11-25 16:28:33.432400904 +0000 UTC m=+1.323908875 container remove bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:28:33 np0005535469 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 25 11:28:33 np0005535469 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 3.524s CPU time.
Nov 25 11:28:33 np0005535469 systemd-machined[216343]: Machine qemu-19-instance-00000011 terminated.
Nov 25 11:28:33 np0005535469 systemd[1]: libpod-conmon-bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f.scope: Deactivated successfully.
Nov 25 11:28:33 np0005535469 nova_compute[254092]: 2025-11-25 16:28:33.453 254096 INFO nova.virt.libvirt.driver [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance destroyed successfully.#033[00m
Nov 25 11:28:33 np0005535469 nova_compute[254092]: 2025-11-25 16:28:33.453 254096 DEBUG nova.objects.instance [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lazy-loading 'resources' on Instance uuid d5372b02-1d93-4354-8f5c-c4228e8d3ec4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:33 np0005535469 podman[280925]: 2025-11-25 16:28:33.571710583 +0000 UTC m=+0.023754998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:28:33 np0005535469 nova_compute[254092]: 2025-11-25 16:28:33.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:33 np0005535469 podman[280925]: 2025-11-25 16:28:33.766766337 +0000 UTC m=+0.218810732 container create f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:28:33 np0005535469 systemd[1]: Started libpod-conmon-f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978.scope.
Nov 25 11:28:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:34 np0005535469 podman[280925]: 2025-11-25 16:28:34.135835834 +0000 UTC m=+0.587880269 container init f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:28:34 np0005535469 podman[280925]: 2025-11-25 16:28:34.143342269 +0000 UTC m=+0.595386664 container start f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:28:34 np0005535469 podman[280925]: 2025-11-25 16:28:34.149702691 +0000 UTC m=+0.601747106 container attach f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:28:34 np0005535469 nova_compute[254092]: 2025-11-25 16:28:34.462 254096 INFO nova.virt.libvirt.driver [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deleting instance files /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_del#033[00m
Nov 25 11:28:34 np0005535469 nova_compute[254092]: 2025-11-25 16:28:34.465 254096 INFO nova.virt.libvirt.driver [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deletion of /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_del complete#033[00m
Nov 25 11:28:34 np0005535469 nova_compute[254092]: 2025-11-25 16:28:34.526 254096 INFO nova.compute.manager [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 1.49 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:28:34 np0005535469 nova_compute[254092]: 2025-11-25 16:28:34.527 254096 DEBUG oslo.service.loopingcall [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:28:34 np0005535469 nova_compute[254092]: 2025-11-25 16:28:34.527 254096 DEBUG nova.compute.manager [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:28:34 np0005535469 nova_compute[254092]: 2025-11-25 16:28:34.528 254096 DEBUG nova.network.neutron [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:28:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 418 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.7 MiB/s wr, 67 op/s
Nov 25 11:28:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 25 11:28:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 25 11:28:35 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 25 11:28:35 np0005535469 focused_archimedes[280942]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:28:35 np0005535469 focused_archimedes[280942]: --> relative data size: 1.0
Nov 25 11:28:35 np0005535469 focused_archimedes[280942]: --> All data devices are unavailable
Nov 25 11:28:35 np0005535469 systemd[1]: libpod-f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978.scope: Deactivated successfully.
Nov 25 11:28:35 np0005535469 podman[280925]: 2025-11-25 16:28:35.190132777 +0000 UTC m=+1.642177172 container died f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:28:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066-merged.mount: Deactivated successfully.
Nov 25 11:28:35 np0005535469 podman[280925]: 2025-11-25 16:28:35.257721775 +0000 UTC m=+1.709766160 container remove f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:28:35 np0005535469 systemd[1]: libpod-conmon-f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978.scope: Deactivated successfully.
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.322 254096 DEBUG nova.network.neutron [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.339 254096 DEBUG nova.network.neutron [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.356 254096 INFO nova.compute.manager [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 0.83 seconds to deallocate network for instance.#033[00m
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.398 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.399 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.480 254096 DEBUG oslo_concurrency.processutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959057331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.955 254096 DEBUG oslo_concurrency.processutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:35 np0005535469 nova_compute[254092]: 2025-11-25 16:28:35.962 254096 DEBUG nova.compute.provider_tree [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:35.954425372 +0000 UTC m=+0.031316513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:36.059967592 +0000 UTC m=+0.136858733 container create 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:28:36 np0005535469 systemd[1]: Started libpod-conmon-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope.
Nov 25 11:28:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:36.232563517 +0000 UTC m=+0.309454668 container init 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:36.242109086 +0000 UTC m=+0.319000207 container start 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:28:36 np0005535469 systemd[1]: libpod-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope: Deactivated successfully.
Nov 25 11:28:36 np0005535469 great_antonelli[281164]: 167 167
Nov 25 11:28:36 np0005535469 conmon[281164]: conmon 4c3bb87abc91c60861ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope/container/memory.events
Nov 25 11:28:36 np0005535469 nova_compute[254092]: 2025-11-25 16:28:36.254 254096 DEBUG nova.scheduler.client.report [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:36.270467707 +0000 UTC m=+0.347358848 container attach 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:36.270997771 +0000 UTC m=+0.347888892 container died 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 11:28:36 np0005535469 nova_compute[254092]: 2025-11-25 16:28:36.295 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:36 np0005535469 nova_compute[254092]: 2025-11-25 16:28:36.369 254096 INFO nova.scheduler.client.report [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Deleted allocations for instance d5372b02-1d93-4354-8f5c-c4228e8d3ec4#033[00m
Nov 25 11:28:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-08616db84c0cb2b069427b5dbe09086f84bf86a647d8e979628ff42fa94890f6-merged.mount: Deactivated successfully.
Nov 25 11:28:36 np0005535469 nova_compute[254092]: 2025-11-25 16:28:36.430 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:36 np0005535469 podman[281146]: 2025-11-25 16:28:36.461409389 +0000 UTC m=+0.538300510 container remove 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:28:36 np0005535469 systemd[1]: libpod-conmon-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope: Deactivated successfully.
Nov 25 11:28:36 np0005535469 podman[281189]: 2025-11-25 16:28:36.645318441 +0000 UTC m=+0.048409607 container create 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:28:36 np0005535469 systemd[1]: Started libpod-conmon-3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa.scope.
Nov 25 11:28:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:36 np0005535469 podman[281189]: 2025-11-25 16:28:36.620694362 +0000 UTC m=+0.023785548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:28:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:36 np0005535469 podman[281189]: 2025-11-25 16:28:36.732103522 +0000 UTC m=+0.135194728 container init 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:28:36 np0005535469 podman[281189]: 2025-11-25 16:28:36.738093064 +0000 UTC m=+0.141184280 container start 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:28:36 np0005535469 podman[281189]: 2025-11-25 16:28:36.74195714 +0000 UTC m=+0.145048356 container attach 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:28:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 275 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 25 11:28:37 np0005535469 nova_compute[254092]: 2025-11-25 16:28:37.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:37 np0005535469 boring_brattain[281205]: {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:    "0": [
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:        {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "devices": [
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "/dev/loop3"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            ],
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_name": "ceph_lv0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_size": "21470642176",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "name": "ceph_lv0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "tags": {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cluster_name": "ceph",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.crush_device_class": "",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.encrypted": "0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osd_id": "0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.type": "block",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.vdo": "0"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            },
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "type": "block",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "vg_name": "ceph_vg0"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:        }
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:    ],
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:    "1": [
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:        {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "devices": [
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "/dev/loop4"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            ],
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_name": "ceph_lv1",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_size": "21470642176",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "name": "ceph_lv1",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "tags": {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cluster_name": "ceph",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.crush_device_class": "",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.encrypted": "0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osd_id": "1",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.type": "block",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.vdo": "0"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            },
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "type": "block",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "vg_name": "ceph_vg1"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:        }
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:    ],
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:    "2": [
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:        {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "devices": [
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "/dev/loop5"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            ],
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_name": "ceph_lv2",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_size": "21470642176",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "name": "ceph_lv2",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "tags": {
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.cluster_name": "ceph",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.crush_device_class": "",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.encrypted": "0",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osd_id": "2",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.type": "block",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:                "ceph.vdo": "0"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            },
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "type": "block",
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:            "vg_name": "ceph_vg2"
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:        }
Nov 25 11:28:37 np0005535469 boring_brattain[281205]:    ]
Nov 25 11:28:37 np0005535469 boring_brattain[281205]: }
Nov 25 11:28:37 np0005535469 systemd[1]: libpod-3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa.scope: Deactivated successfully.
Nov 25 11:28:37 np0005535469 podman[281189]: 2025-11-25 16:28:37.622334282 +0000 UTC m=+1.025425488 container died 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:28:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63-merged.mount: Deactivated successfully.
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 25 11:28:37 np0005535469 podman[281189]: 2025-11-25 16:28:37.683042092 +0000 UTC m=+1.086133268 container remove 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 25 11:28:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 25 11:28:37 np0005535469 systemd[1]: libpod-conmon-3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa.scope: Deactivated successfully.
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.021 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "2174ef15-55fa-4734-8cc2-89064853919b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "2174ef15-55fa-4734-8cc2-89064853919b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.023 254096 INFO nova.compute.manager [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Terminating instance#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.024 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "refresh_cache-2174ef15-55fa-4734-8cc2-89064853919b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.024 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquired lock "refresh_cache-2174ef15-55fa-4734-8cc2-89064853919b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.024 254096 DEBUG nova.network.neutron [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.295780276 +0000 UTC m=+0.038124287 container create 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:28:38 np0005535469 systemd[1]: Started libpod-conmon-5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906.scope.
Nov 25 11:28:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.367432865 +0000 UTC m=+0.109776916 container init 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.373537761 +0000 UTC m=+0.115881782 container start 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.28043534 +0000 UTC m=+0.022779371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.376976485 +0000 UTC m=+0.119320516 container attach 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:28:38 np0005535469 happy_einstein[281383]: 167 167
Nov 25 11:28:38 np0005535469 systemd[1]: libpod-5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906.scope: Deactivated successfully.
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.37827068 +0000 UTC m=+0.120614691 container died 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.396 254096 DEBUG nova.network.neutron [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e074f30a0705dd4b08b96e5d96e9fa4536faba549c6fd066adbfc813e063569d-merged.mount: Deactivated successfully.
Nov 25 11:28:38 np0005535469 podman[281367]: 2025-11-25 16:28:38.410902287 +0000 UTC m=+0.153246298 container remove 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:28:38 np0005535469 systemd[1]: libpod-conmon-5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906.scope: Deactivated successfully.
Nov 25 11:28:38 np0005535469 podman[281406]: 2025-11-25 16:28:38.59598224 +0000 UTC m=+0.057233017 container create 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:28:38 np0005535469 systemd[1]: Started libpod-conmon-60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4.scope.
Nov 25 11:28:38 np0005535469 podman[281406]: 2025-11-25 16:28:38.569267214 +0000 UTC m=+0.030517991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:28:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:38 np0005535469 podman[281406]: 2025-11-25 16:28:38.696381141 +0000 UTC m=+0.157631958 container init 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:28:38 np0005535469 podman[281406]: 2025-11-25 16:28:38.702715083 +0000 UTC m=+0.163965850 container start 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:28:38 np0005535469 podman[281406]: 2025-11-25 16:28:38.706227629 +0000 UTC m=+0.167478346 container attach 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 11:28:38 np0005535469 nova_compute[254092]: 2025-11-25 16:28:38.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 275 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 KiB/s wr, 228 op/s
Nov 25 11:28:39 np0005535469 nova_compute[254092]: 2025-11-25 16:28:39.167 254096 DEBUG nova.network.neutron [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:39 np0005535469 nova_compute[254092]: 2025-11-25 16:28:39.182 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Releasing lock "refresh_cache-2174ef15-55fa-4734-8cc2-89064853919b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:28:39 np0005535469 nova_compute[254092]: 2025-11-25 16:28:39.183 254096 DEBUG nova.compute.manager [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:28:39 np0005535469 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 11:28:39 np0005535469 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 13.859s CPU time.
Nov 25 11:28:39 np0005535469 systemd-machined[216343]: Machine qemu-17-instance-00000010 terminated.
Nov 25 11:28:39 np0005535469 nova_compute[254092]: 2025-11-25 16:28:39.416 254096 INFO nova.virt.libvirt.driver [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance destroyed successfully.#033[00m
Nov 25 11:28:39 np0005535469 nova_compute[254092]: 2025-11-25 16:28:39.417 254096 DEBUG nova.objects.instance [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'resources' on Instance uuid 2174ef15-55fa-4734-8cc2-89064853919b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]: {
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "osd_id": 1,
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "type": "bluestore"
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:    },
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "osd_id": 2,
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "type": "bluestore"
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:    },
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "osd_id": 0,
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:        "type": "bluestore"
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]:    }
Nov 25 11:28:39 np0005535469 jolly_torvalds[281423]: }
Nov 25 11:28:39 np0005535469 systemd[1]: libpod-60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4.scope: Deactivated successfully.
Nov 25 11:28:39 np0005535469 podman[281406]: 2025-11-25 16:28:39.688486062 +0000 UTC m=+1.149736799 container died 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:28:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b-merged.mount: Deactivated successfully.
Nov 25 11:28:39 np0005535469 podman[281406]: 2025-11-25 16:28:39.801403263 +0000 UTC m=+1.262653990 container remove 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:28:39 np0005535469 systemd[1]: libpod-conmon-60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4.scope: Deactivated successfully.
Nov 25 11:28:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:28:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:28:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:28:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:28:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 49aab584-00ed-4519-b3e7-440900f0dc29 does not exist
Nov 25 11:28:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0bb34bcc-caa2-4855-82c2-6c23440f2c78 does not exist
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:28:40
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'images', '.mgr']
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.068 254096 INFO nova.virt.libvirt.driver [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deleting instance files /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b_del#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.069 254096 INFO nova.virt.libvirt.driver [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deletion of /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b_del complete#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.130 254096 INFO nova.compute.manager [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.131 254096 DEBUG oslo.service.loopingcall [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.131 254096 DEBUG nova.compute.manager [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.131 254096 DEBUG nova.network.neutron [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.292 254096 DEBUG nova.network.neutron [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.302 254096 DEBUG nova.network.neutron [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.315 254096 INFO nova.compute.manager [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 0.18 seconds to deallocate network for instance.#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.364 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.364 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.418 254096 DEBUG oslo_concurrency.processutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.611 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.612 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.631 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.710 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:28:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:28:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610683225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.880 254096 DEBUG oslo_concurrency.processutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 173 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 47 KiB/s wr, 302 op/s
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.888 254096 DEBUG nova.compute.provider_tree [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.912 254096 DEBUG nova.scheduler.client.report [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.931 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.933 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.939 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.939 254096 INFO nova.compute.claims [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:28:40 np0005535469 nova_compute[254092]: 2025-11-25 16:28:40.984 254096 INFO nova.scheduler.client.report [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Deleted allocations for instance 2174ef15-55fa-4734-8cc2-89064853919b#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.053 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.062 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147019412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.527 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.534 254096 DEBUG nova.compute.provider_tree [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.552 254096 DEBUG nova.scheduler.client.report [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.620 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.621 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.825 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.826 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.917 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "12043b7f-9853-45a8-b963-ae96713754b4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.918 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.918 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "12043b7f-9853-45a8-b963-ae96713754b4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.919 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.919 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.920 254096 INFO nova.compute.manager [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Terminating instance#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.921 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "refresh_cache-12043b7f-9853-45a8-b963-ae96713754b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.922 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquired lock "refresh_cache-12043b7f-9853-45a8-b963-ae96713754b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:28:41 np0005535469 nova_compute[254092]: 2025-11-25 16:28:41.922 254096 DEBUG nova.network.neutron [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.002 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.053 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.083 254096 DEBUG nova.policy [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.352 254096 DEBUG nova.network.neutron [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.434 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.436 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.437 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating image(s)#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.475 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.503 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.525 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.528 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.594 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.596 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.596 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.597 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.623 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.628 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 25 11:28:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 25 11:28:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 25 11:28:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 43 KiB/s wr, 108 op/s
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.962 254096 DEBUG nova.network.neutron [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.987 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Releasing lock "refresh_cache-12043b7f-9853-45a8-b963-ae96713754b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:28:42 np0005535469 nova_compute[254092]: 2025-11-25 16:28:42.988 254096 DEBUG nova.compute.manager [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.274 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Successfully created port: d6146886-91a1-4d5f-9234-e1d0154b4230 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:28:43 np0005535469 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 25 11:28:43 np0005535469 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 13.364s CPU time.
Nov 25 11:28:43 np0005535469 systemd-machined[216343]: Machine qemu-16-instance-0000000f terminated.
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.411 254096 INFO nova.virt.libvirt.driver [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance destroyed successfully.#033[00m
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.412 254096 DEBUG nova.objects.instance [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'resources' on Instance uuid 12043b7f-9853-45a8-b963-ae96713754b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.829 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.829 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:43 np0005535469 nova_compute[254092]: 2025-11-25 16:28:43.896 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:28:44 np0005535469 nova_compute[254092]: 2025-11-25 16:28:44.117 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:44 np0005535469 nova_compute[254092]: 2025-11-25 16:28:44.177 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:28:44 np0005535469 nova_compute[254092]: 2025-11-25 16:28:44.352 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:44 np0005535469 nova_compute[254092]: 2025-11-25 16:28:44.352 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:44 np0005535469 nova_compute[254092]: 2025-11-25 16:28:44.359 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:28:44 np0005535469 nova_compute[254092]: 2025-11-25 16:28:44.359 254096 INFO nova.compute.claims [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:28:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 34 KiB/s wr, 85 op/s
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.021 254096 DEBUG nova.objects.instance [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.041 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.042 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ensure instance console log exists: /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.042 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.043 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.043 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.174 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.309 254096 INFO nova.virt.libvirt.driver [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deleting instance files /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4_del#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.310 254096 INFO nova.virt.libvirt.driver [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deletion of /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4_del complete#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.378 254096 INFO nova.compute.manager [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 2.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.379 254096 DEBUG oslo.service.loopingcall [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.379 254096 DEBUG nova.compute.manager [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.379 254096 DEBUG nova.network.neutron [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.382 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Successfully updated port: d6146886-91a1-4d5f-9234-e1d0154b4230 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.400 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.401 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.401 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.513 254096 DEBUG nova.compute.manager [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-changed-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.513 254096 DEBUG nova.compute.manager [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Refreshing instance network info cache due to event network-changed-d6146886-91a1-4d5f-9234-e1d0154b4230. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.514 254096 DEBUG oslo_concurrency.lockutils [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:28:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065165170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.602 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.608 254096 DEBUG nova.compute.provider_tree [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.645 254096 DEBUG nova.network.neutron [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.661 254096 DEBUG nova.scheduler.client.report [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.665 254096 DEBUG nova.network.neutron [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.682 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.700 254096 INFO nova.compute.manager [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 0.32 seconds to deallocate network for instance.#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.706 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.707 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.774 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.775 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.780 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.781 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.802 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.830 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.891 254096 DEBUG oslo_concurrency.processutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.933 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.935 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.936 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Creating image(s)#033[00m
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.964 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:45 np0005535469 nova_compute[254092]: 2025-11-25 16:28:45.988 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.023 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.026 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.083 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.084 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.084 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.084 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.107 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.112 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 090ac2d7-979e-4706-8a01-5e94ab72282d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.139 254096 DEBUG nova.policy [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:28:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801946160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.346 254096 DEBUG oslo_concurrency.processutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.355 254096 DEBUG nova.compute.provider_tree [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.370 254096 DEBUG nova.scheduler.client.report [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.392 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.398 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 090ac2d7-979e-4706-8a01-5e94ab72282d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.433 254096 INFO nova.scheduler.client.report [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Deleted allocations for instance 12043b7f-9853-45a8-b963-ae96713754b4
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.480 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.523 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.613 254096 DEBUG nova.objects.instance [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 090ac2d7-979e-4706-8a01-5e94ab72282d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.627 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.628 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Ensure instance console log exists: /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.628 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.628 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:46 np0005535469 nova_compute[254092]: 2025-11-25 16:28:46.629 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 134 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 2.3 MiB/s wr, 142 op/s
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.091 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.116 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.117 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance network_info: |[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.118 254096 DEBUG oslo_concurrency.lockutils [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.119 254096 DEBUG nova.network.neutron [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Refreshing network info cache for port d6146886-91a1-4d5f-9234-e1d0154b4230 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.124 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start _get_guest_xml network_info=[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.132 254096 WARNING nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.139 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.140 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.145 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.145 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.147 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.147 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.149 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.149 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.150 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.150 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.151 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.151 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.153 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.385 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.533 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.534 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.552 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Successfully created port: 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.557 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588178945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.618 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.642 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.647 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 25 11:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 25 11:28:47 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.810 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.812 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.825 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:28:47 np0005535469 nova_compute[254092]: 2025-11-25 16:28:47.825 254096 INFO nova.compute.claims [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.144 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618127445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.179 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.180 254096 DEBUG nova.virt.libvirt.vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:42Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.181 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.181 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.183 254096 DEBUG nova.objects.instance [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.196 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <uuid>3375e096-321c-459b-8b6a-e085bb62872f</uuid>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <name>instance-00000012</name>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminTestJSON-server-1705426121</nova:name>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:28:47</nova:creationTime>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <nova:port uuid="d6146886-91a1-4d5f-9234-e1d0154b4230">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <entry name="serial">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <entry name="uuid">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk.config">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:dd:a2:8e"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <target dev="tapd6146886-91"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log" append="off"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:28:48 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:28:48 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:28:48 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:28:48 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.197 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Preparing to wait for external event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.197 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.198 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.198 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.198 254096 DEBUG nova.virt.libvirt.vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:42Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.199 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.199 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.200 254096 DEBUG os_vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.204 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6146886-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.205 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6146886-91, col_values=(('external_ids', {'iface-id': 'd6146886-91a1-4d5f-9234-e1d0154b4230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:a2:8e', 'vm-uuid': '3375e096-321c-459b-8b6a-e085bb62872f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:48 np0005535469 NetworkManager[48891]: <info>  [1764088128.2075] manager: (tapd6146886-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.215 254096 INFO os_vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.288 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.289 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.289 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:dd:a2:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.290 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Using config drive#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.314 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.459 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088113.4494846, d5372b02-1d93-4354-8f5c-c4228e8d3ec4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.459 254096 INFO nova.compute.manager [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.480 254096 DEBUG nova.compute.manager [None req-8339a880-b614-4487-a680-90e3683d573c - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251603989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.587 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.593 254096 DEBUG nova.compute.provider_tree [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.605 254096 DEBUG nova.scheduler.client.report [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.609 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Successfully updated port: 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.626 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.627 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.629 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.630 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.630 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.685 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.686 254096 DEBUG nova.network.neutron [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.707 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.728 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.842 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.845 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.845 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Creating image(s)
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.871 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 134 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 MiB/s wr, 108 op/s
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.898 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.926 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.930 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.959 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.965 254096 DEBUG nova.network.neutron [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updated VIF entry in instance network info cache for port d6146886-91a1-4d5f-9234-e1d0154b4230. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.966 254096 DEBUG nova.network.neutron [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.974 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating config drive at /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config
Nov 25 11:28:48 np0005535469 nova_compute[254092]: 2025-11-25 16:28:48.979 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu2rbk3_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 25 11:28:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.019 254096 DEBUG nova.compute.manager [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-changed-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.020 254096 DEBUG nova.compute.manager [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Refreshing instance network info cache due to event network-changed-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.020 254096 DEBUG oslo_concurrency.lockutils [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.021 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.023 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.023 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.024 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.085 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.092 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.125 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu2rbk3_" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.127 254096 DEBUG oslo_concurrency.lockutils [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.154 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.158 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.195 254096 DEBUG nova.network.neutron [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.196 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.396 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.435 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.437 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting local config drive /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config because it was imported into RBD.
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.483 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] resizing rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:28:49 np0005535469 kernel: tapd6146886-91: entered promiscuous mode
Nov 25 11:28:49 np0005535469 NetworkManager[48891]: <info>  [1764088129.4888] manager: (tapd6146886-91): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 11:28:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:49Z|00055|binding|INFO|Claiming lport d6146886-91a1-4d5f-9234-e1d0154b4230 for this chassis.
Nov 25 11:28:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:49Z|00056|binding|INFO|d6146886-91a1-4d5f-9234-e1d0154b4230: Claiming fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.512 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.513 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.515 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 11:28:49 np0005535469 systemd-udevd[282287]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:28:49 np0005535469 systemd-machined[216343]: New machine qemu-20-instance-00000012.
Nov 25 11:28:49 np0005535469 NetworkManager[48891]: <info>  [1764088129.5321] device (tapd6146886-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.531 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb04495-1861-4502-abc8-1268ee1fa644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 NetworkManager[48891]: <info>  [1764088129.5342] device (tapd6146886-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.534 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3403825e-11 in ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.539 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3403825e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.539 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc8d25d-c0d5-49c3-b789-3541786e7454]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.541 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b734f031-1255-41c7-b4b6-78076a8daaf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.558 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[0da5b2c2-1af9-4e8e-8b2d-4f8c47d8ec3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:49Z|00057|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 ovn-installed in OVS
Nov 25 11:28:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:49Z|00058|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 up in Southbound
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.588 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72275494-5be3-4260-b3e0-0ba070847a44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.626 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c1439f47-463f-4880-a584-268247f180b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 NetworkManager[48891]: <info>  [1764088129.6348] manager: (tap3403825e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.635 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d039af9f-ab38-429c-8db6-80ac33bd637d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.642 254096 DEBUG nova.objects.instance [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lazy-loading 'migration_context' on Instance uuid 9440e9b4-329e-44cf-a489-5a0634a8aa30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.661 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.661 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Ensure instance console log exists: /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.662 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.662 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.662 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.663 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.669 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d52f58-b542-43a0-9943-534643773076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.671 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e02d8bc0-b192-4800-9377-7a82c3def111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.672 254096 WARNING nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.678 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.679 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.682 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.682 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.685 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.687 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:49 np0005535469 NetworkManager[48891]: <info>  [1764088129.6958] device (tap3403825e-10): carrier: link connected
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.701 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[22c74b09-e1dc-4718-9367-106b2fcd143b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c7f5f5-0f36-4008-95fd-8b26b413e462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282343, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.732 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73c7e927-d04b-489a-8e59-3b9013342651]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:44ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458728, 'tstamp': 458728}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282344, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.749 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae2168b-061e-4e7d-87a7-85fef4bd5e6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282345, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5dce62-87b6-4168-86db-5587922bd4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc824152-268b-48c4-bf44-a25fa685fcb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.841 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.841 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.842 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:49 np0005535469 kernel: tap3403825e-10: entered promiscuous mode
Nov 25 11:28:49 np0005535469 NetworkManager[48891]: <info>  [1764088129.8449] manager: (tap3403825e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.846 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:49Z|00059|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 11:28:49 np0005535469 nova_compute[254092]: 2025-11-25 16:28:49.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.866 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3403825e-13ff-43e0-80c4-b59cf23ed30b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3403825e-13ff-43e0-80c4-b59cf23ed30b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.867 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7040131c-8455-4880-a3bb-74cd656f2633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.868 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/3403825e-13ff-43e0-80c4-b59cf23ed30b.pid.haproxy
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:28:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.868 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'env', 'PROCESS_TAG=haproxy-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3403825e-13ff-43e0-80c4-b59cf23ed30b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:28:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77643635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.146 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.176 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.181 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.209 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.236 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.236 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance network_info: |[{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.237 254096 DEBUG oslo_concurrency.lockutils [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.238 254096 DEBUG nova.network.neutron [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Refreshing network info cache for port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.241 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start _get_guest_xml network_info=[{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.248 254096 WARNING nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:28:50 np0005535469 podman[282412]: 2025-11-25 16:28:50.251361218 +0000 UTC m=+0.067200358 container create a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.252 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.254 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.266 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.267 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.268 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.268 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.269 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.269 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.270 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.270 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.270 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.272 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.275 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:50 np0005535469 podman[282412]: 2025-11-25 16:28:50.217932689 +0000 UTC m=+0.033771849 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:28:50 np0005535469 systemd[1]: Started libpod-conmon-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb.scope.
Nov 25 11:28:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:28:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493ef27e1af43f842f739b06d394ae37970231d4264741926f2efe0d296ab9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:28:50 np0005535469 podman[282412]: 2025-11-25 16:28:50.381987241 +0000 UTC m=+0.197826381 container init a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 11:28:50 np0005535469 podman[282412]: 2025-11-25 16:28:50.395856188 +0000 UTC m=+0.211695328 container start a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:28:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : New worker (282492) forked
Nov 25 11:28:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : Loading success.
Nov 25 11:28:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821084128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.633 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.636 254096 DEBUG nova.objects.instance [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lazy-loading 'pci_devices' on Instance uuid 9440e9b4-329e-44cf-a489-5a0634a8aa30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.662 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <uuid>9440e9b4-329e-44cf-a489-5a0634a8aa30</uuid>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <name>instance-00000014</name>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:name>tempest-TenantUsagesTestJSON-server-39150975</nova:name>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:28:49</nova:creationTime>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:user uuid="06ffced0e9004a60b3fe2f455857f494">tempest-TenantUsagesTestJSON-707754235-project-member</nova:user>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <nova:project uuid="99f704e0b6c04f648538e8070f335bdc">tempest-TenantUsagesTestJSON-707754235</nova:project>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <entry name="serial">9440e9b4-329e-44cf-a489-5a0634a8aa30</entry>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <entry name="uuid">9440e9b4-329e-44cf-a489-5a0634a8aa30</entry>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9440e9b4-329e-44cf-a489-5a0634a8aa30_disk">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/console.log" append="off"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:28:50 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:28:50 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:28:50 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:28:50 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.666 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088130.6634674, 3375e096-321c-459b-8b6a-e085bb62872f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.667 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.686 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.691 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088130.6637552, 3375e096-321c-459b-8b6a-e085bb62872f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.691 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.715 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494545252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.732 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.733 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.733 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Using config drive#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.753 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.758 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.780 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.784 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:50 np0005535469 nova_compute[254092]: 2025-11-25 16:28:50.811 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 159 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 6.6 MiB/s wr, 166 op/s
Nov 25 11:28:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010402803892361789 of space, bias 1.0, pg target 0.3120841167708537 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:28:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 25 11:28:51 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 25 11:28:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:28:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258874472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.238 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.241 254096 DEBUG nova.virt.libvirt.vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-391611256',display_name='tempest-ServersAdminTestJSON-server-391611256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-391611256',id=19,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-6weq4dsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-20452
27182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:45Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=090ac2d7-979e-4706-8a01-5e94ab72282d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.241 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.242 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.244 254096 DEBUG nova.objects.instance [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 090ac2d7-979e-4706-8a01-5e94ab72282d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.270 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <uuid>090ac2d7-979e-4706-8a01-5e94ab72282d</uuid>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <name>instance-00000013</name>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminTestJSON-server-391611256</nova:name>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:28:50</nova:creationTime>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <nova:port uuid="7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <entry name="serial">090ac2d7-979e-4706-8a01-5e94ab72282d</entry>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <entry name="uuid">090ac2d7-979e-4706-8a01-5e94ab72282d</entry>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/090ac2d7-979e-4706-8a01-5e94ab72282d_disk">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d8:5a:58"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <target dev="tap7c4d5f4d-36"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/console.log" append="off"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:28:51 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:28:51 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:28:51 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:28:51 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.272 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Preparing to wait for external event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.272 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.273 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.273 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.274 254096 DEBUG nova.virt.libvirt.vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-391611256',display_name='tempest-ServersAdminTestJSON-server-391611256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-391611256',id=19,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-6weq4dsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTest
JSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:45Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=090ac2d7-979e-4706-8a01-5e94ab72282d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.274 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.275 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.275 254096 DEBUG os_vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.276 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.277 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.281 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c4d5f4d-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.281 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c4d5f4d-36, col_values=(('external_ids', {'iface-id': '7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:5a:58', 'vm-uuid': '090ac2d7-979e-4706-8a01-5e94ab72282d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:51 np0005535469 NetworkManager[48891]: <info>  [1764088131.2841] manager: (tap7c4d5f4d-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.285 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.292 254096 INFO os_vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36')#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.333 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.334 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.334 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:d8:5a:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.335 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Using config drive#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.358 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.371 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Creating config drive at /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.376 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqz72wcln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.518 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqz72wcln" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.549 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.554 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.587 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:51 np0005535469 podman[282634]: 2025-11-25 16:28:51.65496924 +0000 UTC m=+0.058282666 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:28:51 np0005535469 podman[282632]: 2025-11-25 16:28:51.658628339 +0000 UTC m=+0.068271187 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:28:51 np0005535469 podman[282635]: 2025-11-25 16:28:51.692752927 +0000 UTC m=+0.096525005 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.714 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:51 np0005535469 nova_compute[254092]: 2025-11-25 16:28:51.715 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deleting local config drive /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config because it was imported into RBD.#033[00m
Nov 25 11:28:51 np0005535469 systemd-machined[216343]: New machine qemu-21-instance-00000014.
Nov 25 11:28:51 np0005535469 systemd[1]: Started Virtual Machine qemu-21-instance-00000014.
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.050 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Creating config drive at /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.054 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi78vhy9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.094 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088132.0935538, 9440e9b4-329e-44cf-a489-5a0634a8aa30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.094 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.097 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.097 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.101 254096 INFO nova.virt.libvirt.driver [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance spawned successfully.#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.101 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.123 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.123 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.124 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.124 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.124 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.125 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.128 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.153 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.153 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088132.096536, 9440e9b4-329e-44cf-a489-5a0634a8aa30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.154 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] VM Started (Lifecycle Event)#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.186 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.188 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi78vhy9" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.215 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.218 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.253 254096 INFO nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 3.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.253 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.255 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.285 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.332 254096 INFO nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 4.72 seconds to build instance.#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.359 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.361 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.361 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deleting local config drive /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config because it was imported into RBD.#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:52 np0005535469 kernel: tap7c4d5f4d-36: entered promiscuous mode
Nov 25 11:28:52 np0005535469 NetworkManager[48891]: <info>  [1764088132.4100] manager: (tap7c4d5f4d-36): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 11:28:52 np0005535469 systemd-udevd[282331]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:52Z|00060|binding|INFO|Claiming lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for this chassis.
Nov 25 11:28:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:52Z|00061|binding|INFO|7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c: Claiming fa:16:3e:d8:5a:58 10.100.0.11
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.422 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:5a:58 10.100.0.11'], port_security=['fa:16:3e:d8:5a:58 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '090ac2d7-979e-4706-8a01-5e94ab72282d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.423 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis#033[00m
Nov 25 11:28:52 np0005535469 NetworkManager[48891]: <info>  [1764088132.4253] device (tap7c4d5f4d-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:28:52 np0005535469 NetworkManager[48891]: <info>  [1764088132.4276] device (tap7c4d5f4d-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:52Z|00062|binding|INFO|Setting lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c ovn-installed in OVS
Nov 25 11:28:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:28:52Z|00063|binding|INFO|Setting lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c up in Southbound
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bedb7ad4-1cb5-4a8d-89b4-5088e9610ffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:52 np0005535469 systemd-machined[216343]: New machine qemu-22-instance-00000013.
Nov 25 11:28:52 np0005535469 systemd[1]: Started Virtual Machine qemu-22-instance-00000013.
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.473 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4830f569-e897-47ad-abe8-172384ab0823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.476 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[799224ce-6d31-49fa-985b-50a4e28acf43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.503 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccbec04-46d1-4547-87df-8669c2dfc08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.532 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af423c4d-a367-4d6c-976b-01ea5f4ec789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282832, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.543 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31df6ad6-0216-4c59-9dd3-9ff22112e042]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282835, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282835, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.544 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:28:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:28:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 180 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.1 MiB/s wr, 227 op/s
Nov 25 11:28:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148694826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:52 np0005535469 nova_compute[254092]: 2025-11-25 16:28:52.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.060 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.060 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.065 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.065 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.070 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088133.0703099, 090ac2d7-979e-4706-8a01-5e94ab72282d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.071 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Started (Lifecycle Event)#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.073 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.100 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.104 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088133.0717514, 090ac2d7-979e-4706-8a01-5e94ab72282d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.104 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.122 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.125 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.142 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.277 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=59.93641662597656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.279 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.279 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 3375e096-321c-459b-8b6a-e085bb62872f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 090ac2d7-979e-4706-8a01-5e94ab72282d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 9440e9b4-329e-44cf-a489-5a0634a8aa30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.477 254096 DEBUG nova.network.neutron [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updated VIF entry in instance network info cache for port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.478 254096 DEBUG nova.network.neutron [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.481 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.511 254096 DEBUG oslo_concurrency.lockutils [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:28:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:28:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753621258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.955 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.960 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.975 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.996 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:28:53 np0005535469 nova_compute[254092]: 2025-11-25 16:28:53.997 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.415 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088119.4138906, 2174ef15-55fa-4734-8cc2-89064853919b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.416 254096 INFO nova.compute.manager [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.419 254096 DEBUG nova.compute.manager [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.420 254096 DEBUG oslo_concurrency.lockutils [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.420 254096 DEBUG oslo_concurrency.lockutils [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.421 254096 DEBUG oslo_concurrency.lockutils [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.421 254096 DEBUG nova.compute.manager [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Processing event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.422 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.425 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088134.4256034, 090ac2d7-979e-4706-8a01-5e94ab72282d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.426 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.428 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.431 254096 INFO nova.virt.libvirt.driver [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance spawned successfully.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.431 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.450 254096 DEBUG nova.compute.manager [None req-d89895fa-1222-4643-88e2-6124354a414e - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.459 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.462 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.468 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.468 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.469 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.469 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.469 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.470 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.498 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.501 254096 DEBUG nova.compute.manager [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.501 254096 DEBUG oslo_concurrency.lockutils [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.502 254096 DEBUG oslo_concurrency.lockutils [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.502 254096 DEBUG oslo_concurrency.lockutils [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.502 254096 DEBUG nova.compute.manager [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Processing event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.503 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.506 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.507 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088134.506741, 3375e096-321c-459b-8b6a-e085bb62872f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.507 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.515 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance spawned successfully.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.515 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.527 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.531 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.535 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.535 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.536 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.536 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.537 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.537 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.641 254096 INFO nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 8.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.641 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.673 254096 INFO nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 12.24 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.674 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.737 254096 INFO nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 10.40 seconds to build instance.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.817 254096 INFO nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 14.13 seconds to build instance.#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.841 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 180 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 6.0 MiB/s wr, 190 op/s
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.891 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.996 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:54 np0005535469 nova_compute[254092]: 2025-11-25 16:28:54.998 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:28:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:28:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457479105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:28:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:28:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457479105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.499 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.500 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.500 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9440e9b4-329e-44cf-a489-5a0634a8aa30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.501 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.501 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.503 254096 INFO nova.compute.manager [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Terminating instance#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.504 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "refresh_cache-9440e9b4-329e-44cf-a489-5a0634a8aa30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.504 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquired lock "refresh_cache-9440e9b4-329e-44cf-a489-5a0634a8aa30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.505 254096 DEBUG nova.network.neutron [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:28:56 np0005535469 nova_compute[254092]: 2025-11-25 16:28:56.617 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:28:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 387 op/s
Nov 25 11:28:57 np0005535469 nova_compute[254092]: 2025-11-25 16:28:57.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:28:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:28:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 25 11:28:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 25 11:28:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 25 11:28:58 np0005535469 nova_compute[254092]: 2025-11-25 16:28:58.409 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088123.4073539, 12043b7f-9853-45a8-b963-ae96713754b4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:28:58 np0005535469 nova_compute[254092]: 2025-11-25 16:28:58.409 254096 INFO nova.compute.manager [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:28:58 np0005535469 nova_compute[254092]: 2025-11-25 16:28:58.425 254096 DEBUG nova.compute.manager [None req-8ded2092-1658-4b4b-b675-8ed3080575fe - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:28:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.4 MiB/s wr, 303 op/s
Nov 25 11:28:59 np0005535469 nova_compute[254092]: 2025-11-25 16:28:59.611 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:28:59 np0005535469 nova_compute[254092]: 2025-11-25 16:28:59.612 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.330 254096 DEBUG nova.network.neutron [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.449 254096 DEBUG nova.compute.manager [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.451 254096 DEBUG oslo_concurrency.lockutils [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.451 254096 DEBUG oslo_concurrency.lockutils [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.452 254096 DEBUG oslo_concurrency.lockutils [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.452 254096 DEBUG nova.compute.manager [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] No waiting events found dispatching network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.452 254096 WARNING nova.compute.manager [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received unexpected event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for instance with vm_state active and task_state None.#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.560 254096 DEBUG nova.compute.manager [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.561 254096 DEBUG oslo_concurrency.lockutils [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.563 254096 DEBUG oslo_concurrency.lockutils [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.564 254096 DEBUG oslo_concurrency.lockutils [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.564 254096 DEBUG nova.compute.manager [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.565 254096 WARNING nova.compute.manager [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.720 254096 DEBUG nova.network.neutron [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.736 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Releasing lock "refresh_cache-9440e9b4-329e-44cf-a489-5a0634a8aa30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.737 254096 DEBUG nova.compute.manager [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:29:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 32 KiB/s wr, 288 op/s
Nov 25 11:29:00 np0005535469 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 25 11:29:00 np0005535469 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Consumed 9.024s CPU time.
Nov 25 11:29:00 np0005535469 systemd-machined[216343]: Machine qemu-21-instance-00000014 terminated.
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.954 254096 INFO nova.virt.libvirt.driver [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance destroyed successfully.#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.955 254096 DEBUG nova.objects.instance [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lazy-loading 'resources' on Instance uuid 9440e9b4-329e-44cf-a489-5a0634a8aa30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.992 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:00 np0005535469 nova_compute[254092]: 2025-11-25 16:29:00.994 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.018 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.275 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.277 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.283 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.284 254096 INFO nova.compute.claims [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:01 np0005535469 nova_compute[254092]: 2025-11-25 16:29:01.579 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694543479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.052 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.062 254096 DEBUG nova.compute.provider_tree [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.077 254096 DEBUG nova.scheduler.client.report [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.121 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.122 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.192 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.193 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.219 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.246 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.390 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.392 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.395 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Creating image(s)#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.465 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.494 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.514 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.517 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.543 254096 DEBUG nova.policy [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09afd60d3afd4a57a14e7e93a66275f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f99f039b80564f5684a91f3bc27c2249', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.577 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.578 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.579 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.579 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.608 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:02 np0005535469 nova_compute[254092]: 2025-11-25 16:29:02.616 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 25 11:29:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 25 11:29:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 25 11:29:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 39 KiB/s wr, 357 op/s
Nov 25 11:29:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 140 op/s
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.322 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Successfully created port: ab0cfddf-69e0-4494-a106-e603168444a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.383 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.455 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] resizing rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.509 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.509 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.525 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.667 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.668 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:05 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.719 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.722 254096 INFO nova.compute.claims [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:29:05 np0005535469 nova_compute[254092]: 2025-11-25 16:29:05.905 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.016 254096 DEBUG nova.objects.instance [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b9f60af-05f0-43c7-bce7-227cb54ec793 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.032 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.032 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Ensure instance console log exists: /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.033 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.033 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.033 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020884996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.349 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.361 254096 INFO nova.virt.libvirt.driver [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deleting instance files /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30_del#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.364 254096 INFO nova.virt.libvirt.driver [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deletion of /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30_del complete#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.371 254096 DEBUG nova.compute.provider_tree [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.395 254096 DEBUG nova.scheduler.client.report [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.431 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.432 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.437 254096 INFO nova.compute.manager [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 5.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.438 254096 DEBUG oslo.service.loopingcall [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.438 254096 DEBUG nova.compute.manager [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.438 254096 DEBUG nova.network.neutron [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.495 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.496 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.529 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.556 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.685 254096 DEBUG nova.network.neutron [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.706 254096 DEBUG nova.network.neutron [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.721 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.722 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.722 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Creating image(s)#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.740 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.759 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.785 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.788 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.823 254096 DEBUG nova.policy [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.829 254096 INFO nova.compute.manager [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 0.39 seconds to deallocate network for instance.#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.867 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.868 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.868 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.869 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 172 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.892 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.898 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.926 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:06 np0005535469 nova_compute[254092]: 2025-11-25 16:29:06.927 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.070 254096 DEBUG oslo_concurrency.processutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.100 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Successfully updated port: ab0cfddf-69e0-4494-a106-e603168444a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.186 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.186 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquired lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.187 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.400 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826807844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.520 254096 DEBUG nova.compute.manager [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.520 254096 DEBUG nova.compute.manager [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing instance network info cache due to event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.521 254096 DEBUG oslo_concurrency.lockutils [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.531 254096 DEBUG oslo_concurrency.processutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.537 254096 DEBUG nova.compute.provider_tree [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.549 254096 DEBUG nova.scheduler.client.report [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.593 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.732 254096 INFO nova.scheduler.client.report [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Deleted allocations for instance 9440e9b4-329e-44cf-a489-5a0634a8aa30#033[00m
Nov 25 11:29:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:07 np0005535469 nova_compute[254092]: 2025-11-25 16:29:07.892 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:08Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:29:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:08Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:29:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:08Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:5a:58 10.100.0.11
Nov 25 11:29:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:08Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:5a:58 10.100.0.11
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.406 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Successfully created port: 0d1cf86d-6639-47eb-8de1-718476d1c006 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.758 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.799 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Releasing lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.800 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance network_info: |[{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.800 254096 DEBUG oslo_concurrency.lockutils [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.800 254096 DEBUG nova.network.neutron [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.803 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start _get_guest_xml network_info=[{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.807 254096 WARNING nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.811 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.812 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.818 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.818 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:29:08 np0005535469 nova_compute[254092]: 2025-11-25 16:29:08.824 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:29:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 172 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 159 op/s
Nov 25 11:29:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318660158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.235 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.254 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.257 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.468 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.521 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:29:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394803419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.792 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.796 254096 DEBUG nova.virt.libvirt.vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1649655527',display_name='tempest-ServersTestJSON-server-1649655527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1649655527',id=21,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCW+Ktdde8brSi3dDjuPJuQcQZxoIsAM7EI886G65qnVc9QFdS/VNcpSvi1Y/e9z6GKqL8cPDahYUZN5KOZSYOR8WETlyE2X3Pf8M2fEr9LePpk/dgU5OnSDx/LsY9Zvng==',key_name='tempest-keypair-749925212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f99f039b80564f5684a91f3bc27c2249',ramdisk_id='',reservation_id='r-00qo170f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-841201839',owner_user_name='tempest-ServersTestJSON-841201839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09afd60d3afd4a57a14e7e93a66275f9',uuid=7b9f60af-05f0-43c7-bce7-227cb54ec793,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.797 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converting VIF {"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.797 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.798 254096 DEBUG nova.objects.instance [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b9f60af-05f0-43c7-bce7-227cb54ec793 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.820 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <uuid>7b9f60af-05f0-43c7-bce7-227cb54ec793</uuid>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <name>instance-00000015</name>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-1649655527</nova:name>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:29:08</nova:creationTime>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:user uuid="09afd60d3afd4a57a14e7e93a66275f9">tempest-ServersTestJSON-841201839-project-member</nova:user>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:project uuid="f99f039b80564f5684a91f3bc27c2249">tempest-ServersTestJSON-841201839</nova:project>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <nova:port uuid="ab0cfddf-69e0-4494-a106-e603168444a4">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <entry name="serial">7b9f60af-05f0-43c7-bce7-227cb54ec793</entry>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <entry name="uuid">7b9f60af-05f0-43c7-bce7-227cb54ec793</entry>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7b9f60af-05f0-43c7-bce7-227cb54ec793_disk">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:7d:e6:95"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <target dev="tapab0cfddf-69"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/console.log" append="off"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:29:09 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:29:09 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:29:09 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:29:09 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.820 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Preparing to wait for external event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.820 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.821 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.821 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.822 254096 DEBUG nova.virt.libvirt.vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1649655527',display_name='tempest-ServersTestJSON-server-1649655527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1649655527',id=21,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCW+Ktdde8brSi3dDjuPJuQcQZxoIsAM7EI886G65qnVc9QFdS/VNcpSvi1Y/e9z6GKqL8cPDahYUZN5KOZSYOR8WETlyE2X3Pf8M2fEr9LePpk/dgU5OnSDx/LsY9Zvng==',key_name='tempest-keypair-749925212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f99f039b80564f5684a91f3bc27c2249',ramdisk_id='',reservation_id='r-00qo170f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-841201839',owner_user_name='tempest-ServersTestJSON-841201839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09afd60d3afd4a57a14e7e93a66275f9',uuid=7b9f60af-05f0-43c7-bce7-227cb54ec793,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.822 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converting VIF {"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.822 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.823 254096 DEBUG os_vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab0cfddf-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab0cfddf-69, col_values=(('external_ids', {'iface-id': 'ab0cfddf-69e0-4494-a106-e603168444a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:e6:95', 'vm-uuid': '7b9f60af-05f0-43c7-bce7-227cb54ec793'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:09 np0005535469 NetworkManager[48891]: <info>  [1764088149.8290] manager: (tapab0cfddf-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.834 254096 INFO os_vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69')#033[00m
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.942 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.943 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.943 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] No VIF found with MAC fa:16:3e:7d:e6:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:29:09 np0005535469 nova_compute[254092]: 2025-11-25 16:29:09.944 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Using config drive#033[00m
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.061 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.236 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Successfully updated port: 0d1cf86d-6639-47eb-8de1-718476d1c006 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.259 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.259 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.259 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.311 254096 DEBUG nova.objects.instance [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.322 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.322 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Ensure instance console log exists: /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.323 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.323 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.323 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.326 254096 DEBUG nova.compute.manager [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-changed-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.326 254096 DEBUG nova.compute.manager [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Refreshing instance network info cache due to event network-changed-0d1cf86d-6639-47eb-8de1-718476d1c006. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.326 254096 DEBUG oslo_concurrency.lockutils [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.481 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.563 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Creating config drive at /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.578 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ztfi_ea execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.611 254096 DEBUG nova.network.neutron [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updated VIF entry in instance network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.612 254096 DEBUG nova.network.neutron [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.636 254096 DEBUG oslo_concurrency.lockutils [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.716 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ztfi_ea" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.740 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:10 np0005535469 nova_compute[254092]: 2025-11-25 16:29:10.744 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 248 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 667 KiB/s rd, 7.4 MiB/s wr, 180 op/s
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.511 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.767s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.512 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deleting local config drive /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config because it was imported into RBD.#033[00m
Nov 25 11:29:11 np0005535469 kernel: tapab0cfddf-69: entered promiscuous mode
Nov 25 11:29:11 np0005535469 NetworkManager[48891]: <info>  [1764088151.5659] manager: (tapab0cfddf-69): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:11Z|00064|binding|INFO|Claiming lport ab0cfddf-69e0-4494-a106-e603168444a4 for this chassis.
Nov 25 11:29:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:11Z|00065|binding|INFO|ab0cfddf-69e0-4494-a106-e603168444a4: Claiming fa:16:3e:7d:e6:95 10.100.0.13
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 systemd-udevd[283480]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:29:11 np0005535469 systemd-machined[216343]: New machine qemu-23-instance-00000015.
Nov 25 11:29:11 np0005535469 NetworkManager[48891]: <info>  [1764088151.6122] device (tapab0cfddf-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:29:11 np0005535469 NetworkManager[48891]: <info>  [1764088151.6135] device (tapab0cfddf-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.619 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e6:95 10.100.0.13'], port_security=['fa:16:3e:7d:e6:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b9f60af-05f0-43c7-bce7-227cb54ec793', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f99f039b80564f5684a91f3bc27c2249', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a14e6fc5-327e-44fa-8134-4f62c2b97373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16d598ad-25ce-4f41-98a7-2a9985da8936, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ab0cfddf-69e0-4494-a106-e603168444a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.621 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ab0cfddf-69e0-4494-a106-e603168444a4 in datapath 09313f5b-a3fb-41e8-87c2-c636c3ed13c6 bound to our chassis#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.622 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09313f5b-a3fb-41e8-87c2-c636c3ed13c6#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.635 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c349ab9-6c06-4b6a-a3fd-eef8dfca9d99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.636 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09313f5b-a1 in ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.638 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09313f5b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.638 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bae544b-2166-484d-8c7a-29f68b9d280c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48bf4560-4ac6-4ac5-91a7-a27adfd8c40e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:11Z|00066|binding|INFO|Setting lport ab0cfddf-69e0-4494-a106-e603168444a4 ovn-installed in OVS
Nov 25 11:29:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:11Z|00067|binding|INFO|Setting lport ab0cfddf-69e0-4494-a106-e603168444a4 up in Southbound
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.655 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6783e455-2f6e-434c-9a2e-478530e61a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.673 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e7960a-bc06-4533-8722-024d5a012860]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.710 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[afc3a34e-8a92-48e3-87d6-e363a52add1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 systemd-udevd[283483]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:29:11 np0005535469 NetworkManager[48891]: <info>  [1764088151.7178] manager: (tap09313f5b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7bf524-85bc-467a-9878-9dccf1598662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.752 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7aab9342-fd03-4f98-abce-fa0efc999ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.757 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3f67fd5f-fdfd-45f7-ad0b-0779cecd05fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 NetworkManager[48891]: <info>  [1764088151.7811] device (tap09313f5b-a0): carrier: link connected
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.786 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[535b145f-f1cf-4b9e-bf3d-7b7b10faa817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4dd850-64de-467d-b68e-72b0357d7f06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09313f5b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:45:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460937, 'reachable_time': 34619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283514, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.815 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[692e761a-4e82-4e44-bf02-3fc4cc7041d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:454c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460937, 'tstamp': 460937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283515, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.834 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bc4195-ad31-4af8-a06b-a1830aa1fe5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09313f5b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:45:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460937, 'reachable_time': 34619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283516, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.865 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8650a1-8847-46b6-b325-30e48789e286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bb13e6-7d55-4f1d-a593-5df9eccc8bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.934 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09313f5b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.934 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.935 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09313f5b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:11 np0005535469 NetworkManager[48891]: <info>  [1764088151.9380] manager: (tap09313f5b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 11:29:11 np0005535469 kernel: tap09313f5b-a0: entered promiscuous mode
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09313f5b-a0, col_values=(('external_ids', {'iface-id': '491a5ecd-1693-49b3-bd97-98ff227e2ff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:11Z|00068|binding|INFO|Releasing lport 491a5ecd-1693-49b3-bd97-98ff227e2ff8 from this chassis (sb_readonly=0)
Nov 25 11:29:11 np0005535469 nova_compute[254092]: 2025-11-25 16:29:11.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.976 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02daecc3-2d06-4104-9c41-c648fce37797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.977 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-09313f5b-a3fb-41e8-87c2-c636c3ed13c6
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.pid.haproxy
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 09313f5b-a3fb-41e8-87c2-c636c3ed13c6
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:29:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.978 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'env', 'PROCESS_TAG=haproxy-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.074 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088152.0715392, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.075 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Started (Lifecycle Event)#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.101 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.106 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088152.0724978, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.106 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.127 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.132 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.152 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:12 np0005535469 podman[283591]: 2025-11-25 16:29:12.358770182 +0000 UTC m=+0.025790843 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:29:12 np0005535469 podman[283591]: 2025-11-25 16:29:12.612555043 +0000 UTC m=+0.279575694 container create 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 11:29:12 np0005535469 systemd[1]: Started libpod-conmon-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0.scope.
Nov 25 11:29:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeccf6ecdea4aa475a47ca66b66b3f98184438b7398c8f4397fb73ed72c289ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:12.759 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:12 np0005535469 podman[283591]: 2025-11-25 16:29:12.811661948 +0000 UTC m=+0.478682629 container init 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:29:12 np0005535469 podman[283591]: 2025-11-25 16:29:12.817084426 +0000 UTC m=+0.484105057 container start 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:29:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:12 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : New worker (283612) forked
Nov 25 11:29:12 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : Loading success.
Nov 25 11:29:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 282 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 806 KiB/s rd, 9.3 MiB/s wr, 223 op/s
Nov 25 11:29:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:12.943 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.957 254096 DEBUG nova.compute.manager [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.957 254096 DEBUG oslo_concurrency.lockutils [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.958 254096 DEBUG oslo_concurrency.lockutils [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.958 254096 DEBUG oslo_concurrency.lockutils [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.958 254096 DEBUG nova.compute.manager [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Processing event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.959 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.962 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088152.9625418, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.963 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.964 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.968 254096 INFO nova.virt.libvirt.driver [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance spawned successfully.#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.968 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:29:12 np0005535469 nova_compute[254092]: 2025-11-25 16:29:12.994 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.002 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.007 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.008 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.008 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.009 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.009 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.010 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.039 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.232 254096 INFO nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 10.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.232 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.319 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updating instance_info_cache with network_info: [{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.390 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.390 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance network_info: |[{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.391 254096 INFO nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 12.15 seconds to build instance.#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.392 254096 DEBUG oslo_concurrency.lockutils [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.392 254096 DEBUG nova.network.neutron [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Refreshing network info cache for port 0d1cf86d-6639-47eb-8de1-718476d1c006 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.394 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start _get_guest_xml network_info=[{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.398 254096 WARNING nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.403 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.404 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.412 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.457 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:13.602 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124175984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.926 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.946 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:13 np0005535469 nova_compute[254092]: 2025-11-25 16:29:13.950 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385547933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.418 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.422 254096 DEBUG nova.virt.libvirt.vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2013800485',display_name='tempest-ServersAdminTestJSON-server-2013800485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2013800485',id=22,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-p696wseq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-20
45227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:06Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=f0cb83d8-c2a3-49d1-8c01-b9be9922abd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.423 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.425 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.429 254096 DEBUG nova.objects.instance [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.445 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <uuid>f0cb83d8-c2a3-49d1-8c01-b9be9922abd1</uuid>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <name>instance-00000016</name>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminTestJSON-server-2013800485</nova:name>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:29:13</nova:creationTime>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <nova:port uuid="0d1cf86d-6639-47eb-8de1-718476d1c006">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <entry name="serial">f0cb83d8-c2a3-49d1-8c01-b9be9922abd1</entry>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <entry name="uuid">f0cb83d8-c2a3-49d1-8c01-b9be9922abd1</entry>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:78:52:60"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <target dev="tap0d1cf86d-66"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/console.log" append="off"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:29:14 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:29:14 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:29:14 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:29:14 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.446 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Preparing to wait for external event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.446 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.447 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.448 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.449 254096 DEBUG nova.virt.libvirt.vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2013800485',display_name='tempest-ServersAdminTestJSON-server-2013800485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2013800485',id=22,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-p696wseq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminT
estJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:06Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=f0cb83d8-c2a3-49d1-8c01-b9be9922abd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.450 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.451 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.452 254096 DEBUG os_vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.454 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.460 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d1cf86d-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.461 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d1cf86d-66, col_values=(('external_ids', {'iface-id': '0d1cf86d-6639-47eb-8de1-718476d1c006', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:52:60', 'vm-uuid': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:14 np0005535469 NetworkManager[48891]: <info>  [1764088154.4647] manager: (tap0d1cf86d-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.477 254096 INFO os_vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66')#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.581 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.582 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.583 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:78:52:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.584 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Using config drive#033[00m
Nov 25 11:29:14 np0005535469 nova_compute[254092]: 2025-11-25 16:29:14.619 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 282 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 673 KiB/s rd, 7.8 MiB/s wr, 186 op/s
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG nova.compute.manager [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG oslo_concurrency.lockutils [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG oslo_concurrency.lockutils [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG oslo_concurrency.lockutils [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.090 254096 DEBUG nova.compute.manager [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] No waiting events found dispatching network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.090 254096 WARNING nova.compute.manager [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received unexpected event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.239826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155239860, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1351, "num_deletes": 260, "total_data_size": 1763308, "memory_usage": 1791824, "flush_reason": "Manual Compaction"}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155350734, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1740717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24614, "largest_seqno": 25964, "table_properties": {"data_size": 1734268, "index_size": 3588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14575, "raw_average_key_size": 20, "raw_value_size": 1721027, "raw_average_value_size": 2465, "num_data_blocks": 158, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088063, "oldest_key_time": 1764088063, "file_creation_time": 1764088155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 110949 microseconds, and 4864 cpu microseconds.
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.350774) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1740717 bytes OK
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.350791) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429333) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429371) EVENT_LOG_v1 {"time_micros": 1764088155429363, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429391) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1757035, prev total WAL file size 1757035, number of live WAL files 2.
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.430033) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1699KB)], [56(7303KB)]
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155430077, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9219159, "oldest_snapshot_seqno": -1}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4894 keys, 7474213 bytes, temperature: kUnknown
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155514919, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7474213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7441323, "index_size": 19532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123171, "raw_average_key_size": 25, "raw_value_size": 7352752, "raw_average_value_size": 1502, "num_data_blocks": 803, "num_entries": 4894, "num_filter_entries": 4894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.515124) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7474213 bytes
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.520539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.6 rd, 88.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.3) OK, records in: 5423, records dropped: 529 output_compression: NoCompression
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.520561) EVENT_LOG_v1 {"time_micros": 1764088155520550, "job": 30, "event": "compaction_finished", "compaction_time_micros": 84902, "compaction_time_cpu_micros": 16673, "output_level": 6, "num_output_files": 1, "total_output_size": 7474213, "num_input_records": 5423, "num_output_records": 4894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155520992, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155522060, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:29:15 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.602 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Creating config drive at /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.608 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3lhzrr9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.753 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3lhzrr9x" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.791 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.799 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.954 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088140.952928, 9440e9b4-329e-44cf-a489-5a0634a8aa30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.955 254096 INFO nova.compute.manager [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.976 254096 DEBUG nova.compute.manager [None req-4a4aff5c-d5cf-4f0d-98d0-d9a7369e37c6 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.991 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:15 np0005535469 nova_compute[254092]: 2025-11-25 16:29:15.992 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deleting local config drive /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config because it was imported into RBD.#033[00m
Nov 25 11:29:16 np0005535469 kernel: tap0d1cf86d-66: entered promiscuous mode
Nov 25 11:29:16 np0005535469 NetworkManager[48891]: <info>  [1764088156.0502] manager: (tap0d1cf86d-66): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:16Z|00069|binding|INFO|Claiming lport 0d1cf86d-6639-47eb-8de1-718476d1c006 for this chassis.
Nov 25 11:29:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:16Z|00070|binding|INFO|0d1cf86d-6639-47eb-8de1-718476d1c006: Claiming fa:16:3e:78:52:60 10.100.0.10
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.063 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.064 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.066 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.082 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[955fbd64-f76a-41d0-a5ce-6c1e5bd37fcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:16Z|00071|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 ovn-installed in OVS
Nov 25 11:29:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:16Z|00072|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 up in Southbound
Nov 25 11:29:16 np0005535469 systemd-udevd[283757]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:29:16 np0005535469 systemd-machined[216343]: New machine qemu-24-instance-00000016.
Nov 25 11:29:16 np0005535469 NetworkManager[48891]: <info>  [1764088156.1058] device (tap0d1cf86d-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:29:16 np0005535469 NetworkManager[48891]: <info>  [1764088156.1069] device (tap0d1cf86d-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:29:16 np0005535469 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.115 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[af42980d-0716-46f0-ac1f-7051e19b1a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.120 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1c43d1-4fc5-4e7e-8f72-e127fcedf265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.150 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f63e23a3-243c-4d0e-af7b-cb8ca4ba7c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01c95fb8-b3ee-4196-9f8e-121da0b79816]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283770, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.199 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2a68e5-83a7-4bc8-aa54-9118fea50f84]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283772, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283772, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.206 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.207 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.207 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.208 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.490 254096 DEBUG nova.network.neutron [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updated VIF entry in instance network info cache for port 0d1cf86d-6639-47eb-8de1-718476d1c006. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.490 254096 DEBUG nova.network.neutron [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updating instance_info_cache with network_info: [{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.519 254096 DEBUG oslo_concurrency.lockutils [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.658 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088156.6577435, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.658 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Started (Lifecycle Event)#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.675 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.678 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088156.6578999, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.678 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.699 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.703 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:16 np0005535469 nova_compute[254092]: 2025-11-25 16:29:16.723 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 293 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 279 op/s
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:17 np0005535469 NetworkManager[48891]: <info>  [1764088157.0058] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 11:29:17 np0005535469 NetworkManager[48891]: <info>  [1764088157.0067] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 11:29:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:17Z|00073|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 11:29:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:17Z|00074|binding|INFO|Releasing lport 491a5ecd-1693-49b3-bd97-98ff227e2ff8 from this chassis (sb_readonly=0)
Nov 25 11:29:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:17Z|00075|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 11:29:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:17Z|00076|binding|INFO|Releasing lport 491a5ecd-1693-49b3-bd97-98ff227e2ff8 from this chassis (sb_readonly=0)
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.233 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Processing event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] No waiting events found dispatching network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.236 254096 WARNING nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received unexpected event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.236 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.239 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088157.23915, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.239 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.241 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.246 254096 INFO nova.virt.libvirt.driver [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance spawned successfully.#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.247 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.266 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.272 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.275 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.276 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.276 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.277 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.277 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.278 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.310 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.393 254096 INFO nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 10.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.393 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.473 254096 INFO nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 11.83 seconds to build instance.#033[00m
Nov 25 11:29:17 np0005535469 nova_compute[254092]: 2025-11-25 16:29:17.512 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 293 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 240 op/s
Nov 25 11:29:19 np0005535469 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG nova.compute.manager [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:19 np0005535469 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG nova.compute.manager [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing instance network info cache due to event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:29:19 np0005535469 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG oslo_concurrency.lockutils [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:19 np0005535469 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG oslo_concurrency.lockutils [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:19 np0005535469 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG nova.network.neutron [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:29:19 np0005535469 nova_compute[254092]: 2025-11-25 16:29:19.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:20 np0005535469 nova_compute[254092]: 2025-11-25 16:29:20.868 254096 DEBUG nova.network.neutron [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updated VIF entry in instance network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:29:20 np0005535469 nova_compute[254092]: 2025-11-25 16:29:20.869 254096 DEBUG nova.network.neutron [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:20 np0005535469 nova_compute[254092]: 2025-11-25 16:29:20.891 254096 DEBUG oslo_concurrency.lockutils [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 285 op/s
Nov 25 11:29:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:21.945 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:22 np0005535469 nova_compute[254092]: 2025-11-25 16:29:22.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:22 np0005535469 podman[283816]: 2025-11-25 16:29:22.664468562 +0000 UTC m=+0.076293016 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:29:22 np0005535469 podman[283815]: 2025-11-25 16:29:22.666621021 +0000 UTC m=+0.070590151 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 11:29:22 np0005535469 podman[283817]: 2025-11-25 16:29:22.689438231 +0000 UTC m=+0.098040067 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:29:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Nov 25 11:29:22 np0005535469 nova_compute[254092]: 2025-11-25 16:29:22.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.769 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.769 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.797 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.904 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.905 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.911 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:29:23 np0005535469 nova_compute[254092]: 2025-11-25 16:29:23.912 254096 INFO nova.compute.claims [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.080 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541345696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.510 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.517 254096 DEBUG nova.compute.provider_tree [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.546 254096 DEBUG nova.scheduler.client.report [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.572 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.573 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.660 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.661 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.682 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.704 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.835 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.837 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.837 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Creating image(s)#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.862 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.888 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 96 KiB/s wr, 167 op/s
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.914 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.918 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.977 254096 DEBUG nova.policy [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.986 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.986 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.987 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:24 np0005535469 nova_compute[254092]: 2025-11-25 16:29:24.987 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.010 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.014 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7777dd86-925e-4f98-bd68-e38ac540d97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.332 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7777dd86-925e-4f98-bd68-e38ac540d97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.411 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.529 254096 DEBUG nova.objects.instance [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 7777dd86-925e-4f98-bd68-e38ac540d97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 25 11:29:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 25 11:29:25 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.938 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.938 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Ensure instance console log exists: /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.939 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.939 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:25 np0005535469 nova_compute[254092]: 2025-11-25 16:29:25.939 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:26Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:e6:95 10.100.0.13
Nov 25 11:29:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:26Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:e6:95 10.100.0.13
Nov 25 11:29:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 339 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Nov 25 11:29:27 np0005535469 nova_compute[254092]: 2025-11-25 16:29:27.438 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Successfully created port: 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:29:27 np0005535469 nova_compute[254092]: 2025-11-25 16:29:27.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:27 np0005535469 nova_compute[254092]: 2025-11-25 16:29:27.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.773 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Successfully updated port: 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.831 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.831 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.831 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:29:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 339 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.986 254096 DEBUG nova.compute.manager [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-changed-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.987 254096 DEBUG nova.compute.manager [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Refreshing instance network info cache due to event network-changed-0f27a287-0c09-4767-a6cf-a7f4f8870ea1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:29:28 np0005535469 nova_compute[254092]: 2025-11-25 16:29:28.987 254096 DEBUG oslo_concurrency.lockutils [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:29:29 np0005535469 nova_compute[254092]: 2025-11-25 16:29:29.044 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:29:29 np0005535469 nova_compute[254092]: 2025-11-25 16:29:29.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:30Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:52:60 10.100.0.10
Nov 25 11:29:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:30Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:52:60 10.100.0.10
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.574 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.608 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.609 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance network_info: |[{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.610 254096 DEBUG oslo_concurrency.lockutils [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.610 254096 DEBUG nova.network.neutron [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Refreshing network info cache for port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.613 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start _get_guest_xml network_info=[{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.618 254096 WARNING nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.626 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.627 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.638 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.639 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.639 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.640 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.640 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:29:30 np0005535469 nova_compute[254092]: 2025-11-25 16:29:30.646 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:29:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 382 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.4 MiB/s wr, 193 op/s
Nov 25 11:29:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635119000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.130 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.159 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.164 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:29:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333427469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.610 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.611 254096 DEBUG nova.virt.libvirt.vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-890496249',display_name='tempest-ServersAdminTestJSON-server-890496249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-890496249',id=23,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-pzomq4ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-20452
27182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:24Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=7777dd86-925e-4f98-bd68-e38ac540d97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.612 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.613 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.614 254096 DEBUG nova.objects.instance [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7777dd86-925e-4f98-bd68-e38ac540d97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.628 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <uuid>7777dd86-925e-4f98-bd68-e38ac540d97b</uuid>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <name>instance-00000017</name>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminTestJSON-server-890496249</nova:name>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:29:30</nova:creationTime>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <nova:port uuid="0f27a287-0c09-4767-a6cf-a7f4f8870ea1">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <entry name="serial">7777dd86-925e-4f98-bd68-e38ac540d97b</entry>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <entry name="uuid">7777dd86-925e-4f98-bd68-e38ac540d97b</entry>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7777dd86-925e-4f98-bd68-e38ac540d97b_disk">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:a8:bd:39"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <target dev="tap0f27a287-0c"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/console.log" append="off"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:29:31 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:29:31 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:29:31 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:29:31 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.628 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Preparing to wait for external event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.628 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.629 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.629 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.629 254096 DEBUG nova.virt.libvirt.vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-890496249',display_name='tempest-ServersAdminTestJSON-server-890496249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-890496249',id=23,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-pzomq4ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTest
JSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:24Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=7777dd86-925e-4f98-bd68-e38ac540d97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.630 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.630 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.630 254096 DEBUG os_vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.635 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f27a287-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.635 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f27a287-0c, col_values=(('external_ids', {'iface-id': '0f27a287-0c09-4767-a6cf-a7f4f8870ea1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:bd:39', 'vm-uuid': '7777dd86-925e-4f98-bd68-e38ac540d97b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:31 np0005535469 NetworkManager[48891]: <info>  [1764088171.6384] manager: (tap0f27a287-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.644 254096 INFO os_vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c')#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.744 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.744 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.745 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:a8:bd:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.746 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Using config drive#033[00m
Nov 25 11:29:31 np0005535469 nova_compute[254092]: 2025-11-25 16:29:31.762 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:32 np0005535469 nova_compute[254092]: 2025-11-25 16:29:32.293 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Creating config drive at /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config#033[00m
Nov 25 11:29:32 np0005535469 nova_compute[254092]: 2025-11-25 16:29:32.299 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4lb3ica execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:32 np0005535469 nova_compute[254092]: 2025-11-25 16:29:32.430 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4lb3ica" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:32 np0005535469 nova_compute[254092]: 2025-11-25 16:29:32.454 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:32 np0005535469 nova_compute[254092]: 2025-11-25 16:29:32.457 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:32 np0005535469 nova_compute[254092]: 2025-11-25 16:29:32.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 401 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 7.1 MiB/s wr, 181 op/s
Nov 25 11:29:33 np0005535469 nova_compute[254092]: 2025-11-25 16:29:33.514 254096 DEBUG nova.network.neutron [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updated VIF entry in instance network info cache for port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:29:33 np0005535469 nova_compute[254092]: 2025-11-25 16:29:33.515 254096 DEBUG nova.network.neutron [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:33 np0005535469 nova_compute[254092]: 2025-11-25 16:29:33.533 254096 DEBUG oslo_concurrency.lockutils [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.336 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.337 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deleting local config drive /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:29:34 np0005535469 kernel: tap0f27a287-0c: entered promiscuous mode
Nov 25 11:29:34 np0005535469 NetworkManager[48891]: <info>  [1764088174.3922] manager: (tap0f27a287-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 25 11:29:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:34Z|00077|binding|INFO|Claiming lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for this chassis.
Nov 25 11:29:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:34Z|00078|binding|INFO|0f27a287-0c09-4767-a6cf-a7f4f8870ea1: Claiming fa:16:3e:a8:bd:39 10.100.0.9
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:34Z|00079|binding|INFO|Setting lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 ovn-installed in OVS
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.422 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:bd:39 10.100.0.9'], port_security=['fa:16:3e:a8:bd:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7777dd86-925e-4f98-bd68-e38ac540d97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0f27a287-0c09-4767-a6cf-a7f4f8870ea1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:34Z|00080|binding|INFO|Setting lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 up in Southbound
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.424 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:29:34 np0005535469 systemd-machined[216343]: New machine qemu-25-instance-00000017.
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[810ff814-9c24-4a39-9abe-29f3f556afcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:34 np0005535469 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Nov 25 11:29:34 np0005535469 systemd-udevd[284204]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:29:34 np0005535469 NetworkManager[48891]: <info>  [1764088174.4644] device (tap0f27a287-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:29:34 np0005535469 NetworkManager[48891]: <info>  [1764088174.4654] device (tap0f27a287-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.478 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3f268f27-656b-4d7a-95fd-816f0415cfc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.481 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3212d5a0-1472-4628-8f4e-f0f5e18b78ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.509 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff44a7-4b16-4f54-bef1-b2b53d37f22e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.525 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a401624-5dba-4774-ae71-5084881324fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284216, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.542 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62b0e16b-b206-4b35-82ae-ec58a216644b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284217, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284217, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.543 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:34 np0005535469 nova_compute[254092]: 2025-11-25 16:29:34.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.546 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 401 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 7.1 MiB/s wr, 181 op/s
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.333 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088175.3331664, 7777dd86-925e-4f98-bd68-e38ac540d97b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.334 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Started (Lifecycle Event)#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.353 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.358 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088175.3333547, 7777dd86-925e-4f98-bd68-e38ac540d97b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.380 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.386 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:35 np0005535469 nova_compute[254092]: 2025-11-25 16:29:35.410 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.284 254096 DEBUG nova.compute.manager [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.285 254096 DEBUG oslo_concurrency.lockutils [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.285 254096 DEBUG oslo_concurrency.lockutils [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.286 254096 DEBUG oslo_concurrency.lockutils [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.286 254096 DEBUG nova.compute.manager [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Processing event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.286 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.290 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088176.2896876, 7777dd86-925e-4f98-bd68-e38ac540d97b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.291 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.293 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.297 254096 INFO nova.virt.libvirt.driver [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance spawned successfully.#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.298 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.313 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.321 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.325 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.326 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.327 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.327 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.328 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.329 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.354 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.774 254096 INFO nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 11.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.775 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 5.4 MiB/s wr, 164 op/s
Nov 25 11:29:36 np0005535469 nova_compute[254092]: 2025-11-25 16:29:36.981 254096 INFO nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 13.11 seconds to build instance.#033[00m
Nov 25 11:29:37 np0005535469 nova_compute[254092]: 2025-11-25 16:29:37.310 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:37 np0005535469 nova_compute[254092]: 2025-11-25 16:29:37.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:38 np0005535469 nova_compute[254092]: 2025-11-25 16:29:38.698 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:38 np0005535469 nova_compute[254092]: 2025-11-25 16:29:38.699 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:38 np0005535469 nova_compute[254092]: 2025-11-25 16:29:38.766 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:29:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 555 KiB/s rd, 3.4 MiB/s wr, 123 op/s
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.006 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.007 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.018 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.018 254096 INFO nova.compute.claims [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.247 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.255 254096 DEBUG nova.compute.manager [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.256 254096 DEBUG oslo_concurrency.lockutils [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.256 254096 DEBUG oslo_concurrency.lockutils [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.257 254096 DEBUG oslo_concurrency.lockutils [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.257 254096 DEBUG nova.compute.manager [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] No waiting events found dispatching network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.257 254096 WARNING nova.compute.manager [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received unexpected event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.268 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.269 254096 DEBUG nova.compute.provider_tree [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.286 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.294 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.294 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.295 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.295 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.295 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.296 254096 INFO nova.compute.manager [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Terminating instance#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.297 254096 DEBUG nova.compute.manager [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.319 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 11:29:39 np0005535469 nova_compute[254092]: 2025-11-25 16:29:39.515 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:29:40
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'backups', '.rgw.root', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:29:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209716092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.086 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.097 254096 DEBUG nova.compute.provider_tree [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.111 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.157 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.158 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:29:40 np0005535469 kernel: tapab0cfddf-69 (unregistering): left promiscuous mode
Nov 25 11:29:40 np0005535469 NetworkManager[48891]: <info>  [1764088180.1872] device (tapab0cfddf-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:40Z|00081|binding|INFO|Releasing lport ab0cfddf-69e0-4494-a106-e603168444a4 from this chassis (sb_readonly=0)
Nov 25 11:29:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:40Z|00082|binding|INFO|Setting lport ab0cfddf-69e0-4494-a106-e603168444a4 down in Southbound
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:29:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:40Z|00083|binding|INFO|Removing iface tapab0cfddf-69 ovn-installed in OVS
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 25 11:29:40 np0005535469 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 13.670s CPU time.
Nov 25 11:29:40 np0005535469 systemd-machined[216343]: Machine qemu-23-instance-00000015 terminated.
Nov 25 11:29:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.324 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e6:95 10.100.0.13'], port_security=['fa:16:3e:7d:e6:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b9f60af-05f0-43c7-bce7-227cb54ec793', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f99f039b80564f5684a91f3bc27c2249', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a14e6fc5-327e-44fa-8134-4f62c2b97373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16d598ad-25ce-4f41-98a7-2a9985da8936, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ab0cfddf-69e0-4494-a106-e603168444a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.326 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ab0cfddf-69e0-4494-a106-e603168444a4 in datapath 09313f5b-a3fb-41e8-87c2-c636c3ed13c6 unbound from our chassis#033[00m
Nov 25 11:29:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.327 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09313f5b-a3fb-41e8-87c2-c636c3ed13c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:29:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f052e01-d820-4e80-9097-8c7307bfb045]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.328 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 namespace which is not needed anymore#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.334 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.335 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.439 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.516 254096 DEBUG nova.policy [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.538 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.552 254096 INFO nova.virt.libvirt.driver [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance destroyed successfully.#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.552 254096 DEBUG nova.objects.instance [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lazy-loading 'resources' on Instance uuid 7b9f60af-05f0-43c7-bce7-227cb54ec793 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.568 254096 DEBUG nova.virt.libvirt.vif [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1649655527',display_name='tempest-ServersTestJSON-server-1649655527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1649655527',id=21,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCW+Ktdde8brSi3dDjuPJuQcQZxoIsAM7EI886G65qnVc9QFdS/VNcpSvi1Y/e9z6GKqL8cPDahYUZN5KOZSYOR8WETlyE2X3Pf8M2fEr9LePpk/dgU5OnSDx/LsY9Zvng==',key_name='tempest-keypair-749925212',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f99f039b80564f5684a91f3bc27c2249',ramdisk_id='',reservation_id='r-00qo170f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-841201839',owner_user_name='tempest-ServersTestJSON-841201839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09afd60d3afd4a57a14e7e93a66275f9',uuid=7b9f60af-05f0-43c7-bce7-227cb54ec793,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.569 254096 DEBUG nova.network.os_vif_util [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converting VIF {"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.569 254096 DEBUG nova.network.os_vif_util [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.570 254096 DEBUG os_vif [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.572 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab0cfddf-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.583 254096 INFO os_vif [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69')#033[00m
Nov 25 11:29:40 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : haproxy version is 2.8.14-c23fe91
Nov 25 11:29:40 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : path to executable is /usr/sbin/haproxy
Nov 25 11:29:40 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [WARNING]  (283610) : Exiting Master process...
Nov 25 11:29:40 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [ALERT]    (283610) : Current worker (283612) exited with code 143 (Terminated)
Nov 25 11:29:40 np0005535469 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [WARNING]  (283610) : All workers exited. Exiting... (0)
Nov 25 11:29:40 np0005535469 systemd[1]: libpod-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0.scope: Deactivated successfully.
Nov 25 11:29:40 np0005535469 podman[284406]: 2025-11-25 16:29:40.79381193 +0000 UTC m=+0.351471060 container died 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.897 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.899 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.899 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Creating image(s)#033[00m
Nov 25 11:29:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.920 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.941 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.972 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:40 np0005535469 nova_compute[254092]: 2025-11-25 16:29:40.975 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.034 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.035 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.036 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.036 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.061 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.064 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 df0db130-3ffa-4a60-8f7d-fb285a797631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:29:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev df56d931-08e0-419a-b65d-a62d57567bc0 does not exist
Nov 25 11:29:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5afbea57-d205-43fa-8ca9-d0c165bbff8d does not exist
Nov 25 11:29:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7f77ce50-39b3-46a9-91c2-1439c6aa063e does not exist
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:29:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0-userdata-shm.mount: Deactivated successfully.
Nov 25 11:29:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-eeccf6ecdea4aa475a47ca66b66b3f98184438b7398c8f4397fb73ed72c289ab-merged.mount: Deactivated successfully.
Nov 25 11:29:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:29:41 np0005535469 podman[284406]: 2025-11-25 16:29:41.647533478 +0000 UTC m=+1.205192608 container cleanup 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:29:41 np0005535469 systemd[1]: libpod-conmon-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0.scope: Deactivated successfully.
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.768 254096 DEBUG oslo_concurrency.lockutils [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] Acquiring lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.768 254096 DEBUG oslo_concurrency.lockutils [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] Acquired lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.768 254096 DEBUG nova.network.neutron [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.838 254096 DEBUG nova.compute.manager [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-unplugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.839 254096 DEBUG oslo_concurrency.lockutils [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.839 254096 DEBUG oslo_concurrency.lockutils [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.840 254096 DEBUG oslo_concurrency.lockutils [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.840 254096 DEBUG nova.compute.manager [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] No waiting events found dispatching network-vif-unplugged-ab0cfddf-69e0-4494-a106-e603168444a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.840 254096 DEBUG nova.compute.manager [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-unplugged-ab0cfddf-69e0-4494-a106-e603168444a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:29:41 np0005535469 podman[284631]: 2025-11-25 16:29:41.88256184 +0000 UTC m=+0.194652446 container remove 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.888 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[596414f9-3c37-4c7d-84f0-8c7c7aebf0cf]: (4, ('Tue Nov 25 04:29:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 (6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0)\n6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0\nTue Nov 25 04:29:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 (6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0)\n6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.889 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e20b6b07-150c-4d5a-acc4-7a359ce6e48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.890 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09313f5b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:41 np0005535469 kernel: tap09313f5b-a0: left promiscuous mode
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.919 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[959376ce-d578-4e25-800c-4cc6fd479db0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:41 np0005535469 nova_compute[254092]: 2025-11-25 16:29:41.932 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 df0db130-3ffa-4a60-8f7d-fb285a797631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.868s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.935 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ace72e8-6200-419c-b28f-d09c38e7d01e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.939 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1646ea-41c9-4d7b-9cf5-f8f5f742f5e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b9d92e-f039-4893-a8cd-3a5a922fcaaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460929, 'reachable_time': 18911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284708, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:41 np0005535469 systemd[1]: run-netns-ovnmeta\x2d09313f5b\x2da3fb\x2d41e8\x2d87c2\x2dc636c3ed13c6.mount: Deactivated successfully.
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.962 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:29:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.962 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[16ef63a0-8510-49ea-b3da-4414f638da00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.018 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.059 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully created port: 02104fc6-3780-400d-a6c2-577082384680 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.159 254096 DEBUG nova.objects.instance [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.178839486 +0000 UTC m=+0.053226418 container create 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.179 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.179 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Ensure instance console log exists: /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.181 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.182 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.182 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:42 np0005535469 systemd[1]: Started libpod-conmon-6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca.scope.
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.14773002 +0000 UTC m=+0.022116982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:29:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.277204731 +0000 UTC m=+0.151591683 container init 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.286255738 +0000 UTC m=+0.160642670 container start 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.29074159 +0000 UTC m=+0.165128542 container attach 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:29:42 np0005535469 crazy_antonelli[284828]: 167 167
Nov 25 11:29:42 np0005535469 systemd[1]: libpod-6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca.scope: Deactivated successfully.
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.294866812 +0000 UTC m=+0.169253744 container died 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:29:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-23cf44d45d8860db81dcd4274e00625732c539e99ddd759a14f2b0f30493b4e3-merged.mount: Deactivated successfully.
Nov 25 11:29:42 np0005535469 podman[284793]: 2025-11-25 16:29:42.348012708 +0000 UTC m=+0.222399630 container remove 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:29:42 np0005535469 systemd[1]: libpod-conmon-6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca.scope: Deactivated successfully.
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.386 254096 INFO nova.virt.libvirt.driver [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deleting instance files /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793_del#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.388 254096 INFO nova.virt.libvirt.driver [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deletion of /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793_del complete#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.454 254096 INFO nova.compute.manager [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 3.16 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.455 254096 DEBUG oslo.service.loopingcall [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.455 254096 DEBUG nova.compute.manager [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:29:42 np0005535469 nova_compute[254092]: 2025-11-25 16:29:42.455 254096 DEBUG nova.network.neutron [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:29:42 np0005535469 podman[284851]: 2025-11-25 16:29:42.540294496 +0000 UTC m=+0.036293128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:29:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:42 np0005535469 podman[284851]: 2025-11-25 16:29:42.83570109 +0000 UTC m=+0.331699672 container create 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 11:29:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:29:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:29:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 797 KiB/s wr, 108 op/s
Nov 25 11:29:42 np0005535469 systemd[1]: Started libpod-conmon-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope.
Nov 25 11:29:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:43 np0005535469 podman[284851]: 2025-11-25 16:29:43.234040563 +0000 UTC m=+0.730039205 container init 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:29:43 np0005535469 podman[284851]: 2025-11-25 16:29:43.24161331 +0000 UTC m=+0.737611902 container start 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 11:29:43 np0005535469 podman[284851]: 2025-11-25 16:29:43.382970893 +0000 UTC m=+0.878969495 container attach 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.482 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully updated port: 02104fc6-3780-400d-a6c2-577082384680 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.520 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.520 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.521 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.624 254096 DEBUG nova.network.neutron [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.722 254096 DEBUG oslo_concurrency.lockutils [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] Releasing lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.723 254096 DEBUG nova.compute.manager [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.723 254096 DEBUG nova.compute.manager [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] network_info to inject: |[{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 25 11:29:43 np0005535469 nova_compute[254092]: 2025-11-25 16:29:43.824 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.247 254096 DEBUG nova.compute.manager [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-changed-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.248 254096 DEBUG nova.compute.manager [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing instance network info cache due to event network-changed-02104fc6-3780-400d-a6c2-577082384680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.249 254096 DEBUG oslo_concurrency.lockutils [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.250 254096 DEBUG nova.network.neutron [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.293 254096 INFO nova.compute.manager [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 1.84 seconds to deallocate network for instance.#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.328 254096 DEBUG nova.compute.manager [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.328 254096 DEBUG oslo_concurrency.lockutils [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 DEBUG oslo_concurrency.lockutils [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 DEBUG oslo_concurrency.lockutils [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 DEBUG nova.compute.manager [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] No waiting events found dispatching network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 WARNING nova.compute.manager [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received unexpected event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.380 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.380 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:44 np0005535469 nice_mclean[284869]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:29:44 np0005535469 nice_mclean[284869]: --> relative data size: 1.0
Nov 25 11:29:44 np0005535469 nice_mclean[284869]: --> All data devices are unavailable
Nov 25 11:29:44 np0005535469 systemd[1]: libpod-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope: Deactivated successfully.
Nov 25 11:29:44 np0005535469 systemd[1]: libpod-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope: Consumed 1.114s CPU time.
Nov 25 11:29:44 np0005535469 podman[284898]: 2025-11-25 16:29:44.494569144 +0000 UTC m=+0.024496947 container died 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:29:44 np0005535469 nova_compute[254092]: 2025-11-25 16:29:44.530 254096 DEBUG oslo_concurrency.processutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9-merged.mount: Deactivated successfully.
Nov 25 11:29:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 89 op/s
Nov 25 11:29:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237505822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.022 254096 DEBUG oslo_concurrency.processutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:45 np0005535469 podman[284898]: 2025-11-25 16:29:45.027181039 +0000 UTC m=+0.557108822 container remove 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.032 254096 DEBUG nova.compute.provider_tree [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:45 np0005535469 systemd[1]: libpod-conmon-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope: Deactivated successfully.
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.069 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.072 254096 DEBUG nova.scheduler.client.report [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.126 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.188 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.189 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance network_info: |[{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.190 254096 DEBUG oslo_concurrency.lockutils [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.190 254096 DEBUG nova.network.neutron [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing network info cache for port 02104fc6-3780-400d-a6c2-577082384680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.193 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start _get_guest_xml network_info=[{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.198 254096 WARNING nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.204 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.207 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.216 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.216 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.217 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.217 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.219 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.219 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.219 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.220 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.220 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.220 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.223 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.247 254096 INFO nova.scheduler.client.report [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Deleted allocations for instance 7b9f60af-05f0-43c7-bce7-227cb54ec793#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.365 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538862028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.663 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.683 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:45 np0005535469 nova_compute[254092]: 2025-11-25 16:29:45.688 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:45 np0005535469 podman[285098]: 2025-11-25 16:29:45.695951297 +0000 UTC m=+0.054108534 container create 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:29:45 np0005535469 podman[285098]: 2025-11-25 16:29:45.662970949 +0000 UTC m=+0.021128206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:29:45 np0005535469 systemd[1]: Started libpod-conmon-5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992.scope.
Nov 25 11:29:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:46 np0005535469 podman[285098]: 2025-11-25 16:29:46.047556218 +0000 UTC m=+0.405713465 container init 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:29:46 np0005535469 podman[285098]: 2025-11-25 16:29:46.058147196 +0000 UTC m=+0.416304433 container start 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 11:29:46 np0005535469 jolly_lalande[285135]: 167 167
Nov 25 11:29:46 np0005535469 systemd[1]: libpod-5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992.scope: Deactivated successfully.
Nov 25 11:29:46 np0005535469 podman[285098]: 2025-11-25 16:29:46.085085879 +0000 UTC m=+0.443243136 container attach 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:29:46 np0005535469 podman[285098]: 2025-11-25 16:29:46.087284628 +0000 UTC m=+0.445441865 container died 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:29:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493919523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.188 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.190 254096 DEBUG nova.virt.libvirt.vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.191 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.192 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.193 254096 DEBUG nova.objects.instance [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.240 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <name>instance-00000018</name>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:29:45</nova:creationTime>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <entry name="serial">df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <entry name="uuid">df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ff:1b:ad"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <target dev="tap02104fc6-37"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log" append="off"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:29:46 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:29:46 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:29:46 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:29:46 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Preparing to wait for external event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.248 254096 DEBUG nova.virt.libvirt.vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.249 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.250 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.250 254096 DEBUG os_vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.255 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.255 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.259 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02104fc6-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.260 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02104fc6-37, col_values=(('external_ids', {'iface-id': '02104fc6-3780-400d-a6c2-577082384680', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:1b:ad', 'vm-uuid': 'df0db130-3ffa-4a60-8f7d-fb285a797631'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:29:46 np0005535469 NetworkManager[48891]: <info>  [1764088186.2647] manager: (tap02104fc6-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.272 254096 INFO os_vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37')
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.381 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.382 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.382 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:ff:1b:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.383 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Using config drive
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.405 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:29:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-023e75c84ae4ca7d6c292fc67163766d3be802501ec03bf569e41b6f9faa2664-merged.mount: Deactivated successfully.
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.685 254096 DEBUG nova.network.neutron [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updated VIF entry in instance network info cache for port 02104fc6-3780-400d-a6c2-577082384680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.685 254096 DEBUG nova.network.neutron [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.699 254096 DEBUG oslo_concurrency.lockutils [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:29:46 np0005535469 podman[285098]: 2025-11-25 16:29:46.763083508 +0000 UTC m=+1.121240745 container remove 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.813 254096 DEBUG nova.compute.manager [req-05470c1b-c0f0-40bd-880e-68f0b2656644 req-a1d1d69b-7d12-45f0-bdfd-d9646528fc9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-deleted-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.846 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Creating config drive at /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.852 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcybppv7r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:29:46 np0005535469 systemd[1]: libpod-conmon-5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992.scope: Deactivated successfully.
Nov 25 11:29:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Nov 25 11:29:46 np0005535469 podman[285205]: 2025-11-25 16:29:46.973038448 +0000 UTC m=+0.050791302 container create 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:29:46 np0005535469 nova_compute[254092]: 2025-11-25 16:29:46.993 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcybppv7r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:47 np0005535469 systemd[1]: Started libpod-conmon-38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7.scope.
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.031 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.036 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:29:47 np0005535469 podman[285205]: 2025-11-25 16:29:46.949843046 +0000 UTC m=+0.027595930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:29:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:47 np0005535469 podman[285205]: 2025-11-25 16:29:47.08894785 +0000 UTC m=+0.166700704 container init 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:29:47 np0005535469 podman[285205]: 2025-11-25 16:29:47.097021819 +0000 UTC m=+0.174774673 container start 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:29:47 np0005535469 podman[285205]: 2025-11-25 16:29:47.11395499 +0000 UTC m=+0.191707864 container attach 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.248 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.250 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deleting local config drive /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config because it was imported into RBD.
Nov 25 11:29:47 np0005535469 kernel: tap02104fc6-37: entered promiscuous mode
Nov 25 11:29:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:47Z|00084|binding|INFO|Claiming lport 02104fc6-3780-400d-a6c2-577082384680 for this chassis.
Nov 25 11:29:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:47Z|00085|binding|INFO|02104fc6-3780-400d-a6c2-577082384680: Claiming fa:16:3e:ff:1b:ad 10.100.0.14
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:29:47 np0005535469 NetworkManager[48891]: <info>  [1764088187.3148] manager: (tap02104fc6-37): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.317 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:1b:ad 10.100.0.14'], port_security=['fa:16:3e:ff:1b:ad 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '55a7690b-4aae-4eb8-9614-a3e59161db74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=02104fc6-3780-400d-a6c2-577082384680) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.319 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 02104fc6-3780-400d-a6c2-577082384680 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.321 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:29:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:47Z|00086|binding|INFO|Setting lport 02104fc6-3780-400d-a6c2-577082384680 ovn-installed in OVS
Nov 25 11:29:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:47Z|00087|binding|INFO|Setting lport 02104fc6-3780-400d-a6c2-577082384680 up in Southbound
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.343 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4894ab-fc3c-4f8b-96dc-40b2a7640c23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.344 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52e7d5b9-01 in ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.346 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52e7d5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.346 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a9e466-dc3c-42b7-9e43-0b4e9a03fad4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.347 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13e6bdc1-dc56-48e4-8e3c-1f9a991d0e8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 systemd-udevd[285277]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:29:47 np0005535469 systemd-machined[216343]: New machine qemu-26-instance-00000018.
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.361 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[7da459b5-45f2-491d-a47a-ab041135dfbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 NetworkManager[48891]: <info>  [1764088187.3694] device (tap02104fc6-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:29:47 np0005535469 NetworkManager[48891]: <info>  [1764088187.3705] device (tap02104fc6-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:29:47 np0005535469 systemd[1]: Started Virtual Machine qemu-26-instance-00000018.
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.376 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15bd11f2-2af3-487d-ad65-dde434c5b70c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.414 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd3ce42-4a31-41e1-90b7-5034ac9a3c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 NetworkManager[48891]: <info>  [1764088187.4212] manager: (tap52e7d5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.420 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07f69700-b104-4a5b-a9d6-a109358698c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.458 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[06a580b6-44f7-4920-b529-784214bbe5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.461 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd22f08-7a36-4701-b578-b8891f64d1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 NetworkManager[48891]: <info>  [1764088187.4895] device (tap52e7d5b9-00): carrier: link connected
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.496 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f7981a-389d-4914-b758-70ef5acfeafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.520 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[74b48dd8-d9ff-4025-af59-bfffc4cee7dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285309, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.544 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd308b1-f017-4e7b-907c-d9b5dddfd141]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:97ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464508, 'tstamp': 464508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285310, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c8b727-6264-4cdf-a74d-ee8de4cbc369]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285311, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.607 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a949216-a5b2-422f-9559-62eb0c3650b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.678 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a4ebc1-ec17-4c4c-a5fa-82c0cbb91574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.679 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.679 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.679 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:47 np0005535469 NetworkManager[48891]: <info>  [1764088187.6821] manager: (tap52e7d5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 25 11:29:47 np0005535469 kernel: tap52e7d5b9-00: entered promiscuous mode
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.685 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:47Z|00088|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.689 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.704 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00d7e7c0-4ced-4079-87f1-55f312bce78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:47 np0005535469 nova_compute[254092]: 2025-11-25 16:29:47.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.706 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:29:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.706 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'env', 'PROCESS_TAG=haproxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:29:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]: {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:    "0": [
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:        {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "devices": [
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "/dev/loop3"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            ],
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_name": "ceph_lv0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_size": "21470642176",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "name": "ceph_lv0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "tags": {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cluster_name": "ceph",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.crush_device_class": "",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.encrypted": "0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osd_id": "0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.type": "block",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.vdo": "0"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            },
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "type": "block",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "vg_name": "ceph_vg0"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:        }
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:    ],
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:    "1": [
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:        {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "devices": [
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "/dev/loop4"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            ],
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_name": "ceph_lv1",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_size": "21470642176",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "name": "ceph_lv1",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "tags": {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cluster_name": "ceph",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.crush_device_class": "",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.encrypted": "0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osd_id": "1",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.type": "block",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.vdo": "0"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            },
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "type": "block",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "vg_name": "ceph_vg1"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:        }
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:    ],
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:    "2": [
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:        {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "devices": [
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "/dev/loop5"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            ],
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_name": "ceph_lv2",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_size": "21470642176",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "name": "ceph_lv2",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "tags": {
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.cluster_name": "ceph",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.crush_device_class": "",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.encrypted": "0",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osd_id": "2",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.type": "block",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:                "ceph.vdo": "0"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            },
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "type": "block",
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:            "vg_name": "ceph_vg2"
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:        }
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]:    ]
Nov 25 11:29:47 np0005535469 flamboyant_leakey[285237]: }
Nov 25 11:29:47 np0005535469 systemd[1]: libpod-38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7.scope: Deactivated successfully.
Nov 25 11:29:47 np0005535469 podman[285327]: 2025-11-25 16:29:47.990111168 +0000 UTC m=+0.023698046 container died 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:29:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2-merged.mount: Deactivated successfully.
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.030 254096 DEBUG nova.compute.manager [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.030 254096 DEBUG oslo_concurrency.lockutils [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.030 254096 DEBUG oslo_concurrency.lockutils [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.031 254096 DEBUG oslo_concurrency.lockutils [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.031 254096 DEBUG nova.compute.manager [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Processing event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:29:48 np0005535469 podman[285327]: 2025-11-25 16:29:48.063512094 +0000 UTC m=+0.097098932 container remove 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:29:48 np0005535469 systemd[1]: libpod-conmon-38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7.scope: Deactivated successfully.
Nov 25 11:29:48 np0005535469 podman[285358]: 2025-11-25 16:29:48.111868999 +0000 UTC m=+0.075926946 container create 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:29:48 np0005535469 podman[285358]: 2025-11-25 16:29:48.060612935 +0000 UTC m=+0.024670912 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:29:48 np0005535469 systemd[1]: Started libpod-conmon-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d.scope.
Nov 25 11:29:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe048199884e7d28c39687c78a39d5bf8b12cab03e5bd84735091209e7c9cff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:48 np0005535469 podman[285358]: 2025-11-25 16:29:48.200746926 +0000 UTC m=+0.164804923 container init 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:29:48 np0005535469 podman[285358]: 2025-11-25 16:29:48.208050185 +0000 UTC m=+0.172108142 container start 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:29:48 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : New worker (285428) forked
Nov 25 11:29:48 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : Loading success.
Nov 25 11:29:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 25 11:29:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 25 11:29:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.518 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088188.51757, df0db130-3ffa-4a60-8f7d-fb285a797631 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.520 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Started (Lifecycle Event)#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.523 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.527 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.530 254096 INFO nova.virt.libvirt.driver [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance spawned successfully.#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.530 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.548 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.555 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.561 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.562 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.562 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.562 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.563 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.563 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.596 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.596 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088188.519018, df0db130-3ffa-4a60-8f7d-fb285a797631 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.597 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.630 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.639 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088188.5262735, df0db130-3ffa-4a60-8f7d-fb285a797631 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.640 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.647 254096 INFO nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 7.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.648 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.665 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.707 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.736 254096 INFO nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 9.77 seconds to build instance.#033[00m
Nov 25 11:29:48 np0005535469 nova_compute[254092]: 2025-11-25 16:29:48.755 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.773167023 +0000 UTC m=+0.047707079 container create 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 11:29:48 np0005535469 systemd[1]: Started libpod-conmon-0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d.scope.
Nov 25 11:29:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.749707685 +0000 UTC m=+0.024247771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.849125538 +0000 UTC m=+0.123665614 container init 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.857149467 +0000 UTC m=+0.131689523 container start 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.860957671 +0000 UTC m=+0.135497727 container attach 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:29:48 np0005535469 condescending_hamilton[285586]: 167 167
Nov 25 11:29:48 np0005535469 systemd[1]: libpod-0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d.scope: Deactivated successfully.
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.869779431 +0000 UTC m=+0.144319477 container died 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:29:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6fafadcdae995182c2a9d2d610769ba4d474fcfbe818019a6842b5a040835035-merged.mount: Deactivated successfully.
Nov 25 11:29:48 np0005535469 podman[285569]: 2025-11-25 16:29:48.909187162 +0000 UTC m=+0.183727208 container remove 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:29:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Nov 25 11:29:48 np0005535469 systemd[1]: libpod-conmon-0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d.scope: Deactivated successfully.
Nov 25 11:29:49 np0005535469 podman[285610]: 2025-11-25 16:29:49.124482848 +0000 UTC m=+0.053468005 container create 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:29:49 np0005535469 systemd[1]: Started libpod-conmon-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope.
Nov 25 11:29:49 np0005535469 podman[285610]: 2025-11-25 16:29:49.104545836 +0000 UTC m=+0.033531013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:29:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:29:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:29:49 np0005535469 podman[285610]: 2025-11-25 16:29:49.223073869 +0000 UTC m=+0.152059046 container init 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:29:49 np0005535469 podman[285610]: 2025-11-25 16:29:49.230527632 +0000 UTC m=+0.159512789 container start 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:29:49 np0005535469 podman[285610]: 2025-11-25 16:29:49.249100146 +0000 UTC m=+0.178085333 container attach 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:29:49 np0005535469 nova_compute[254092]: 2025-11-25 16:29:49.994 254096 INFO nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Rebuilding instance#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.116 254096 DEBUG nova.compute.manager [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.117 254096 DEBUG oslo_concurrency.lockutils [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.117 254096 DEBUG oslo_concurrency.lockutils [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.117 254096 DEBUG oslo_concurrency.lockutils [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.118 254096 DEBUG nova.compute.manager [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.118 254096 WARNING nova.compute.manager [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.249 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.268 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.331 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.351 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.360 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.373 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.383 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:29:50 np0005535469 nova_compute[254092]: 2025-11-25 16:29:50.387 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]: {
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "osd_id": 1,
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "type": "bluestore"
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:    },
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "osd_id": 2,
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "type": "bluestore"
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:    },
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "osd_id": 0,
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:        "type": "bluestore"
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]:    }
Nov 25 11:29:50 np0005535469 optimistic_agnesi[285626]: }
Nov 25 11:29:50 np0005535469 systemd[1]: libpod-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope: Deactivated successfully.
Nov 25 11:29:50 np0005535469 systemd[1]: libpod-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope: Consumed 1.146s CPU time.
Nov 25 11:29:50 np0005535469 podman[285610]: 2025-11-25 16:29:50.473228397 +0000 UTC m=+1.402213544 container died 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:29:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7-merged.mount: Deactivated successfully.
Nov 25 11:29:50 np0005535469 podman[285610]: 2025-11-25 16:29:50.542783289 +0000 UTC m=+1.471768446 container remove 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:29:50 np0005535469 systemd[1]: libpod-conmon-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope: Deactivated successfully.
Nov 25 11:29:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:29:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:29:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:29:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:29:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0d25c212-69c7-4736-93ea-7611483c2ab9 does not exist
Nov 25 11:29:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7a10dc5e-ecd1-4b57-9baf-50be3615dda9 does not exist
Nov 25 11:29:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 372 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029724204556554014 of space, bias 1.0, pg target 0.8917261366966204 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:29:51 np0005535469 nova_compute[254092]: 2025-11-25 16:29:51.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:29:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:bd:39 10.100.0.9
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:bd:39 10.100.0.9
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.518 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:52 np0005535469 kernel: tapd6146886-91 (unregistering): left promiscuous mode
Nov 25 11:29:52 np0005535469 NetworkManager[48891]: <info>  [1764088192.7339] device (tapd6146886-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00089|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00090|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00091|binding|INFO|Releasing lport d6146886-91a1-4d5f-9234-e1d0154b4230 from this chassis (sb_readonly=0)
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00092|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 down in Southbound
Nov 25 11:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:52Z|00093|binding|INFO|Removing iface tapd6146886-91 ovn-installed in OVS
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.773 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.775 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.777 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:29:52 np0005535469 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 11:29:52 np0005535469 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.101s CPU time.
Nov 25 11:29:52 np0005535469 systemd-machined[216343]: Machine qemu-20-instance-00000012 terminated.
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65edb48e-7c3c-48b1-b3e3-e99510d183b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.857 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[acd890ea-0754-4278-a1a9-f703e554c5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.861 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[39e79783-915a-4349-a45b-13ce4fd65adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:52 np0005535469 podman[285739]: 2025-11-25 16:29:52.867663195 +0000 UTC m=+0.103718761 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 25 11:29:52 np0005535469 podman[285742]: 2025-11-25 16:29:52.894187896 +0000 UTC m=+0.129986095 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:29:52 np0005535469 podman[285743]: 2025-11-25 16:29:52.894498565 +0000 UTC m=+0.131225089 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.898 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8d0d07-240a-4a3f-a032-7b49e984e6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.909 254096 DEBUG nova.compute.manager [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-changed-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.909 254096 DEBUG nova.compute.manager [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing instance network info cache due to event network-changed-02104fc6-3780-400d-a6c2-577082384680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.909 254096 DEBUG oslo_concurrency.lockutils [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.910 254096 DEBUG oslo_concurrency.lockutils [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.910 254096 DEBUG nova.network.neutron [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing network info cache for port 02104fc6-3780-400d-a6c2-577082384680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:29:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 375 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 206 op/s
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.921 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4211641-6a85-4de1-b8c5-ee3e7a56fc17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285808, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.940 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f51ad295-85dc-43f6-9898-c4934718fb39]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285809, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285809, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:52 np0005535469 nova_compute[254092]: 2025-11-25 16:29:52.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.950 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.950 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.951 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065714878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.098 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.099 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.102 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.102 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.105 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.105 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.108 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.108 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.112 254096 DEBUG nova.compute.manager [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.112 254096 DEBUG oslo_concurrency.lockutils [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.112 254096 DEBUG oslo_concurrency.lockutils [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.113 254096 DEBUG oslo_concurrency.lockutils [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.113 254096 DEBUG nova.compute.manager [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.113 254096 WARNING nova.compute.manager [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state error and task_state rebuilding.#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.307 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.309 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3748MB free_disk=59.804080963134766GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.415 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 3375e096-321c-459b-8b6a-e085bb62872f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 090ac2d7-979e-4706-8a01-5e94ab72282d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7777dd86-925e-4f98-bd68-e38ac540d97b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance df0db130-3ffa-4a60-8f7d-fb285a797631 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.417 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.417 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.421 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.427 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.431 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.432 254096 DEBUG nova.virt.libvirt.vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,
task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:49Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.433 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.433 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.434 254096 DEBUG os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.436 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6146886-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.444 254096 INFO os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.547 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.910 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting instance files /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.910 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deletion of /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del complete#033[00m
Nov 25 11:29:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:29:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028882549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:53 np0005535469 nova_compute[254092]: 2025-11-25 16:29:53.987 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.013 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.095 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.096 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.170 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.171 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating image(s)#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.195 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.220 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.250 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.256 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.326 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.327 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.327 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.327 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.349 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.354 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.622 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.689 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.768 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.768 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ensure instance console log exists: /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.769 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.769 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.769 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.772 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start _get_guest_xml network_info=[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.779 254096 WARNING nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.783 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.783 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.787 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.788 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.789 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.790 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.793 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.793 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:54 np0005535469 nova_compute[254092]: 2025-11-25 16:29:54.811 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 370 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 249 op/s
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.096 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.097 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.097 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.097 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869273612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869273612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946544503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.292 254096 DEBUG nova.compute.manager [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.292 254096 DEBUG oslo_concurrency.lockutils [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 DEBUG oslo_concurrency.lockutils [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 DEBUG oslo_concurrency.lockutils [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 DEBUG nova.compute.manager [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 WARNING nova.compute.manager [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.294 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.315 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.320 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.543 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088180.5394893, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.544 254096 INFO nova.compute.manager [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.576 254096 DEBUG nova.compute.manager [None req-14dfef4a-30f9-4049-904d-48d0a625224c - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:29:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292593944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.785 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.786 254096 DEBUG nova.virt.libvirt.vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:54Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.786 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.787 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.789 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <uuid>3375e096-321c-459b-8b6a-e085bb62872f</uuid>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <name>instance-00000012</name>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminTestJSON-server-1705426121</nova:name>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:29:54</nova:creationTime>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <nova:port uuid="d6146886-91a1-4d5f-9234-e1d0154b4230">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <entry name="serial">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <entry name="uuid">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk.config">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:dd:a2:8e"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <target dev="tapd6146886-91"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log" append="off"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:29:55 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:29:55 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:29:55 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:29:55 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.789 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Preparing to wait for external event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG nova.virt.libvirt.vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:54Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.791 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.791 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.791 254096 DEBUG os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.792 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.792 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.795 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6146886-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.795 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6146886-91, col_values=(('external_ids', {'iface-id': 'd6146886-91a1-4d5f-9234-e1d0154b4230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:a2:8e', 'vm-uuid': '3375e096-321c-459b-8b6a-e085bb62872f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:55 np0005535469 NetworkManager[48891]: <info>  [1764088195.7975] manager: (tapd6146886-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.806 254096 INFO os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.858 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.858 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.858 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:dd:a2:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.859 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Using config drive#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.877 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.896 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:55 np0005535469 nova_compute[254092]: 2025-11-25 16:29:55.927 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'keypairs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.040 254096 DEBUG nova.network.neutron [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updated VIF entry in instance network info cache for port 02104fc6-3780-400d-a6c2-577082384680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.041 254096 DEBUG nova.network.neutron [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.062 254096 DEBUG oslo_concurrency.lockutils [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.775 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating config drive at /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.780 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6do501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.910 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6do501s" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 372 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 254 op/s
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.934 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:29:56 np0005535469 nova_compute[254092]: 2025-11-25 16:29:56.937 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.097 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.098 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting local config drive /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:29:57 np0005535469 kernel: tapd6146886-91: entered promiscuous mode
Nov 25 11:29:57 np0005535469 NetworkManager[48891]: <info>  [1764088197.1499] manager: (tapd6146886-91): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:57Z|00094|binding|INFO|Claiming lport d6146886-91a1-4d5f-9234-e1d0154b4230 for this chassis.
Nov 25 11:29:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:57Z|00095|binding|INFO|d6146886-91a1-4d5f-9234-e1d0154b4230: Claiming fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.158 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.159 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.161 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:29:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:57Z|00096|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 ovn-installed in OVS
Nov 25 11:29:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:29:57Z|00097|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 up in Southbound
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c85dbce-bbc2-4bcf-a6da-b79d501f04d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:57 np0005535469 systemd-udevd[286168]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:57 np0005535469 systemd-machined[216343]: New machine qemu-27-instance-00000012.
Nov 25 11:29:57 np0005535469 NetworkManager[48891]: <info>  [1764088197.1987] device (tapd6146886-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:29:57 np0005535469 NetworkManager[48891]: <info>  [1764088197.1994] device (tapd6146886-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:29:57 np0005535469 systemd[1]: Started Virtual Machine qemu-27-instance-00000012.
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.215 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8780976d-c3eb-4096-b593-0a899ab59f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.219 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a5a588-e425-4b62-ad2f-ca485b1e7434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.251 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da3c93b3-e28d-4145-890a-4a2443eea37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.267 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4e3d10-9d3c-4273-8f10-140d18b144c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286179, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.283 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9a837a12-72c9-4db9-9983-766307c24b4f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286181, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286181, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.284 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:29:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.288 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.555 254096 DEBUG nova.compute.manager [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.555 254096 DEBUG oslo_concurrency.lockutils [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.555 254096 DEBUG oslo_concurrency.lockutils [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.556 254096 DEBUG oslo_concurrency.lockutils [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.556 254096 DEBUG nova.compute.manager [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Processing event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.772 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 3375e096-321c-459b-8b6a-e085bb62872f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.773 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088197.7721932, 3375e096-321c-459b-8b6a-e085bb62872f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.774 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.776 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.779 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.783 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance spawned successfully.#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.783 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.812 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.821 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.829 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.829 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.830 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.831 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.832 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.833 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 25 11:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 25 11:29:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.859 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.859 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088197.7731946, 3375e096-321c-459b-8b6a-e085bb62872f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.860 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.888 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.893 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088197.7790942, 3375e096-321c-459b-8b6a-e085bb62872f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.894 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.916 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.919 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.930 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.956 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.989 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.989 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:29:57 np0005535469 nova_compute[254092]: 2025-11-25 16:29:57.989 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.052 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.784 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.785 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.785 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:29:58 np0005535469 nova_compute[254092]: 2025-11-25 16:29:58.786 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:29:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 372 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 254 op/s
Nov 25 11:29:59 np0005535469 nova_compute[254092]: 2025-11-25 16:29:59.965 254096 DEBUG nova.compute.manager [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:29:59 np0005535469 nova_compute[254092]: 2025-11-25 16:29:59.965 254096 DEBUG oslo_concurrency.lockutils [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:29:59 np0005535469 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 DEBUG oslo_concurrency.lockutils [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:29:59 np0005535469 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 DEBUG oslo_concurrency.lockutils [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:29:59 np0005535469 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 DEBUG nova.compute.manager [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:29:59 np0005535469 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 WARNING nova.compute.manager [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.
Nov 25 11:30:00 np0005535469 nova_compute[254092]: 2025-11-25 16:30:00.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 372 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 214 op/s
Nov 25 11:30:01 np0005535469 nova_compute[254092]: 2025-11-25 16:30:01.027 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:30:01 np0005535469 nova_compute[254092]: 2025-11-25 16:30:01.049 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:30:01 np0005535469 nova_compute[254092]: 2025-11-25 16:30:01.049 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:30:01 np0005535469 nova_compute[254092]: 2025-11-25 16:30:01.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:02Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:1b:ad 10.100.0.14
Nov 25 11:30:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:02Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:1b:ad 10.100.0.14
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.518 254096 INFO nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Rebuilding instance
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.815 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.835 254096 DEBUG nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:30:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.889 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.907 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 211 op/s
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.916 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.922 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.930 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 11:30:02 np0005535469 nova_compute[254092]: 2025-11-25 16:30:02.933 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 11:30:03 np0005535469 nova_compute[254092]: 2025-11-25 16:30:03.044 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:30:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 388 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Nov 25 11:30:05 np0005535469 nova_compute[254092]: 2025-11-25 16:30:05.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:05 np0005535469 nova_compute[254092]: 2025-11-25 16:30:05.797 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 405 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 168 op/s
Nov 25 11:30:07 np0005535469 nova_compute[254092]: 2025-11-25 16:30:07.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:07 np0005535469 nova_compute[254092]: 2025-11-25 16:30:07.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 405 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 152 op/s
Nov 25 11:30:09 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:30:10 np0005535469 nova_compute[254092]: 2025-11-25 16:30:10.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 422 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 25 11:30:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:11Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:30:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:11Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.445 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.445 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.462 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.488 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.489 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.490 254096 DEBUG nova.objects.instance [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.566 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.567 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.575 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.576 254096 INFO nova.compute.claims [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:30:11 np0005535469 nova_compute[254092]: 2025-11-25 16:30:11.790 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.048 254096 DEBUG nova.objects.instance [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.061 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:30:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.228 254096 DEBUG nova.policy [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:30:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972223262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.246 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.253 254096 DEBUG nova.compute.provider_tree [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.272 254096 DEBUG nova.scheduler.client.report [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.339 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.339 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.438 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.438 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.472 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.494 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.653 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.654 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.655 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Creating image(s)
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.673 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.694 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.720 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.727 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.752 254096 DEBUG nova.policy [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb8bd106d2264d719b9ebd9f83f19c5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f2f26334db2f4e2cadc5664efd73eb67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.794 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.795 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.796 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.796 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.818 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.821 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 433 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 134 op/s
Nov 25 11:30:12 np0005535469 nova_compute[254092]: 2025-11-25 16:30:12.975 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:30:13 np0005535469 nova_compute[254092]: 2025-11-25 16:30:13.243 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully created port: 09e835b8-70c9-4cb4-bbc2-63fab5f2592e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:30:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:13.603 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:13.603 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:13.604 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 456 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 557 KiB/s rd, 5.0 MiB/s wr, 111 op/s
Nov 25 11:30:14 np0005535469 nova_compute[254092]: 2025-11-25 16:30:14.971 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.200 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] resizing rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.653 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Successfully created port: fb46dd7a-52d4-44cb-b99e-81d7d653885c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.842 254096 DEBUG nova.objects.instance [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lazy-loading 'migration_context' on Instance uuid 33b19faf-57e1-463b-8b4a-b50479a0ef0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.862 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.862 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Ensure instance console log exists: /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.862 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.863 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:15 np0005535469 nova_compute[254092]: 2025-11-25 16:30:15.863 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.764 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.764 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.867 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully updated port: 09e835b8-70c9-4cb4-bbc2-63fab5f2592e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.870 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.904 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.905 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:16 np0005535469 nova_compute[254092]: 2025-11-25 16:30:16.905 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:30:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 484 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 5.0 MiB/s wr, 122 op/s
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.023 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.024 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.030 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.031 254096 INFO nova.compute.claims [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.093 254096 WARNING nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.296 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:17 np0005535469 kernel: tapd6146886-91 (unregistering): left promiscuous mode
Nov 25 11:30:17 np0005535469 NetworkManager[48891]: <info>  [1764088217.4373] device (tapd6146886-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:17Z|00098|binding|INFO|Releasing lport d6146886-91a1-4d5f-9234-e1d0154b4230 from this chassis (sb_readonly=0)
Nov 25 11:30:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:17Z|00099|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 down in Southbound
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:17Z|00100|binding|INFO|Removing iface tapd6146886-91 ovn-installed in OVS
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 11:30:17 np0005535469 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000012.scope: Consumed 13.602s CPU time.
Nov 25 11:30:17 np0005535469 systemd-machined[216343]: Machine qemu-27-instance-00000012 terminated.
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.527 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.528 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.529 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.545 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0ea887-62a4-4a06-8cd7-c46065e58b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.573 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[770cc85e-8aa1-4f8c-a8b5-d911472d7f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.576 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba034cf9-5535-4ce1-96c2-197ad9baf092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.602 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf47656-542a-4882-bba7-06adce53572c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.622 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88725e88-28a7-4904-9c0a-05c5a435ae9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286445, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54095571-a149-4f05-865c-837b93c44418]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286446, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286446, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.641 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.648 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.648 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712337094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.781 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.787 254096 DEBUG nova.compute.provider_tree [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.801 254096 DEBUG nova.scheduler.client.report [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.865 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.865 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.980 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:30:17 np0005535469 nova_compute[254092]: 2025-11-25 16:30:17.981 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.012 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.046 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.077 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Successfully updated port: fb46dd7a-52d4-44cb-b99e-81d7d653885c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.170 254096 DEBUG nova.compute.manager [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-changed-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.170 254096 DEBUG nova.compute.manager [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing instance network info cache due to event network-changed-09e835b8-70c9-4cb4-bbc2-63fab5f2592e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.170 254096 DEBUG oslo_concurrency.lockutils [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.174 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.174 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquired lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.174 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.205 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance shutdown successfully after 15 seconds.#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.210 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.218 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.219 254096 DEBUG nova.virt.libvirt.vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:01Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.219 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.220 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.220 254096 DEBUG os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.221 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6146886-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.226 254096 INFO os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.246 254096 DEBUG nova.policy [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01171e7ab3a4447497eacf11bf89be63', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e1bcf74bb1148a3a0f388525c96c919', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.287 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.288 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.288 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Creating image(s)#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.307 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.329 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.348 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.352 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.414 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.415 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.415 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.415 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.432 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.436 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 484 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.971 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:30:18 np0005535469 nova_compute[254092]: 2025-11-25 16:30:18.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:18.986 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:18.987 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:30:19 np0005535469 nova_compute[254092]: 2025-11-25 16:30:19.167 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Successfully created port: 8daa55ae-6950-4c2f-8121-ce02930ab1d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:30:19 np0005535469 nova_compute[254092]: 2025-11-25 16:30:19.534 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:19 np0005535469 nova_compute[254092]: 2025-11-25 16:30:19.602 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] resizing rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.236 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.316 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Releasing lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.317 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance network_info: |[{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.319 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start _get_guest_xml network_info=[{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.323 254096 WARNING nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.328 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.329 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.333 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.334 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.334 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.335 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.335 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.338 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.341 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615290058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.792 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.812 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:20 np0005535469 nova_compute[254092]: 2025-11-25 16:30:20.815 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 505 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 5.3 MiB/s wr, 109 op/s
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.047 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.068 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Successfully updated port: 8daa55ae-6950-4c2f-8121-ce02930ab1d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.136 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.136 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing instance network info cache due to event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.137 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.137 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.137 254096 DEBUG nova.network.neutron [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.339 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.340 254096 DEBUG oslo_concurrency.lockutils [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.340 254096 DEBUG nova.network.neutron [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing network info cache for port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.343 254096 DEBUG nova.virt.libvirt.vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.344 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.344 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.345 254096 DEBUG os_vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.345 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.346 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.349 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09e835b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.349 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09e835b8-70, col_values=(('external_ids', {'iface-id': '09e835b8-70c9-4cb4-bbc2-63fab5f2592e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:4a:54', 'vm-uuid': 'df0db130-3ffa-4a60-8f7d-fb285a797631'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 NetworkManager[48891]: <info>  [1764088221.3522] manager: (tap09e835b8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.354 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.354 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquired lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.355 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.361 254096 INFO os_vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70')#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.361 254096 DEBUG nova.virt.libvirt.vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.362 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.362 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.365 254096 DEBUG nova.virt.libvirt.guest [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:79:4a:54"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <target dev="tap09e835b8-70"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:30:21 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 11:30:21 np0005535469 kernel: tap09e835b8-70: entered promiscuous mode
Nov 25 11:30:21 np0005535469 NetworkManager[48891]: <info>  [1764088221.3769] manager: (tap09e835b8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 25 11:30:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:21Z|00101|binding|INFO|Claiming lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e for this chassis.
Nov 25 11:30:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:21Z|00102|binding|INFO|09e835b8-70c9-4cb4-bbc2-63fab5f2592e: Claiming fa:16:3e:79:4a:54 10.100.0.13
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:21Z|00103|binding|INFO|Setting lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e ovn-installed in OVS
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 systemd-udevd[286693]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:30:21 np0005535469 NetworkManager[48891]: <info>  [1764088221.4177] device (tap09e835b8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:30:21 np0005535469 NetworkManager[48891]: <info>  [1764088221.4189] device (tap09e835b8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:30:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:21Z|00104|binding|INFO|Setting lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e up in Southbound
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.460 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:4a:54 10.100.0.13'], port_security=['fa:16:3e:79:4a:54 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=09e835b8-70c9-4cb4-bbc2-63fab5f2592e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.461 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.464 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.479 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d45b0c-6e34-493d-a692-99c84c38950d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791889168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.506 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[65f23f6d-1ce2-495f-9d7b-4cad8d2ecfe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.508 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.509 254096 DEBUG nova.virt.libvirt.vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1375199124',display_name='tempest-ServersTestManualDisk-server-1375199124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1375199124',id=25,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOQq+JT40N5kSAAkZKtTY8+kwc4Tq2+j0vXcLZMu4KKRGWjKEsrOB7QpF/UTscMrUzfK+p97q+eBa8XrywfkAV6Mo0KdjURR0zReL+ABXznVVDaiCZTtZ5HErawYUq7Fw==',key_name='tempest-keypair-637822798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2f26334db2f4e2cadc5664efd73eb67',ramdisk_id='',reservation_id='r-d0txwl4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-420094767',owner_user_name='tempest-ServersTestManualDisk-420094767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fb8bd106d2264d719b9ebd9f83f19c5a',uuid=33b19faf-57e1-463b-8b4a-b50479a0ef0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.509 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converting VIF {"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.509 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b73c8a6f-bd86-4c04-b00b-4e83d3a56871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.509 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.510 254096 DEBUG nova.objects.instance [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33b19faf-57e1-463b-8b4a-b50479a0ef0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.526 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <uuid>33b19faf-57e1-463b-8b4a-b50479a0ef0f</uuid>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <name>instance-00000019</name>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestManualDisk-server-1375199124</nova:name>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:30:20</nova:creationTime>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:user uuid="fb8bd106d2264d719b9ebd9f83f19c5a">tempest-ServersTestManualDisk-420094767-project-member</nova:user>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:project uuid="f2f26334db2f4e2cadc5664efd73eb67">tempest-ServersTestManualDisk-420094767</nova:project>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <nova:port uuid="fb46dd7a-52d4-44cb-b99e-81d7d653885c">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <entry name="serial">33b19faf-57e1-463b-8b4a-b50479a0ef0f</entry>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <entry name="uuid">33b19faf-57e1-463b-8b4a-b50479a0ef0f</entry>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d4:54:45"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <target dev="tapfb46dd7a-52"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/console.log" append="off"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:21 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:21 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Preparing to wait for external event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.528 254096 DEBUG nova.virt.libvirt.vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1375199124',display_name='tempest-ServersTestManualDisk-server-1375199124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1375199124',id=25,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOQq+JT40N5kSAAkZKtTY8+kwc4Tq2+j0vXcLZMu4KKRGWjKEsrOB7QpF/UTscMrUzfK+p97q+eBa8XrywfkAV6Mo0KdjURR0zReL+ABXznVVDaiCZTtZ5HErawYUq7Fw==',key_name='tempest-keypair-637822798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2f26334db2f4e2cadc5664efd73eb67',ramdisk_id='',reservation_id='r-d0txwl4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-420094767',owner_user_name='tempest-ServersTestManualDisk-420094767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fb8bd106d2264d719b9ebd9f83f19c5a',uuid=33b19faf-57e1-463b-8b4a-b50479a0ef0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.528 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converting VIF {"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG os_vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.530 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.532 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb46dd7a-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.532 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb46dd7a-52, col_values=(('external_ids', {'iface-id': 'fb46dd7a-52d4-44cb-b99e-81d7d653885c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:54:45', 'vm-uuid': '33b19faf-57e1-463b-8b4a-b50479a0ef0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 NetworkManager[48891]: <info>  [1764088221.5341] manager: (tapfb46dd7a-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[85fb8c43-5917-415b-bebe-ce02d4dbca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.538 254096 INFO os_vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52')#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e65be2b1-82eb-457e-a64e-f08997b42b8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286703, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[070eaee5-344b-43f4-8f98-6094531cf4a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464522, 'tstamp': 464522}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286705, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464526, 'tstamp': 464526}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286705, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.574 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.650 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.650 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.651 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:ff:1b:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.651 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:79:4a:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.695 254096 DEBUG nova.virt.libvirt.guest [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:21</nova:creationTime>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    <nova:port uuid="09e835b8-70c9-4cb4-bbc2-63fab5f2592e">
Nov 25 11:30:21 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:21 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:21 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:21 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.727 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.731 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.731 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.731 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] No VIF found with MAC fa:16:3e:d4:54:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.732 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Using config drive#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.749 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.800 254096 DEBUG nova.objects.instance [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lazy-loading 'migration_context' on Instance uuid dc3c86a9-91cf-42fb-b11c-7de3305d8388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.811 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Ensure instance console log exists: /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:21 np0005535469 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.029 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.213 254096 DEBUG nova.compute.manager [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-changed-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.213 254096 DEBUG nova.compute.manager [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Refreshing instance network info cache due to event network-changed-8daa55ae-6950-4c2f-8121-ce02930ab1d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.213 254096 DEBUG oslo_concurrency.lockutils [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.614 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Creating config drive at /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.621 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmpoyfu3z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.759 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmpoyfu3z" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.833 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:22 np0005535469 nova_compute[254092]: 2025-11-25 16:30:22.836 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 488 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Nov 25 11:30:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:23Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:4a:54 10.100.0.13
Nov 25 11:30:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:23Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:4a:54 10.100.0.13
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.624 254096 DEBUG nova.network.neutron [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updated VIF entry in instance network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.625 254096 DEBUG nova.network.neutron [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.639 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.639 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.639 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 WARNING nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:23 np0005535469 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 WARNING nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 11:30:23 np0005535469 podman[286785]: 2025-11-25 16:30:23.650420421 +0000 UTC m=+0.060334032 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 11:30:23 np0005535469 podman[286784]: 2025-11-25 16:30:23.660429433 +0000 UTC m=+0.070486567 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:30:23 np0005535469 podman[286786]: 2025-11-25 16:30:23.689786861 +0000 UTC m=+0.098239732 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.032 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updating instance_info_cache with network_info: [{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.038 254096 DEBUG nova.network.neutron [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updated VIF entry in instance network info cache for port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.039 254096 DEBUG nova.network.neutron [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.050 254096 DEBUG oslo_concurrency.lockutils [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.099 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Releasing lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.099 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance network_info: |[{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.100 254096 DEBUG oslo_concurrency.lockutils [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.100 254096 DEBUG nova.network.neutron [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Refreshing network info cache for port 8daa55ae-6950-4c2f-8121-ce02930ab1d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.105 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start _get_guest_xml network_info=[{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.111 254096 WARNING nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.120 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.121 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.125 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.126 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.127 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.127 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.132 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.300 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.300 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.300 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 WARNING nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.302 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.302 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.302 254096 WARNING nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583423350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.601 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.633 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.639 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.746 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-09e835b8-70c9-4cb4-bbc2-63fab5f2592e" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.748 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-09e835b8-70c9-4cb4-bbc2-63fab5f2592e" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.769 254096 DEBUG nova.objects.instance [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.791 254096 DEBUG nova.virt.libvirt.vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.792 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.793 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.798 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.800 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.802 254096 DEBUG nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap09e835b8-70 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 11:30:24 np0005535469 nova_compute[254092]: 2025-11-25 16:30:24.803 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 11:30:24 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:79:4a:54"/>
Nov 25 11:30:24 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:30:24 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:24 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:30:24 np0005535469 nova_compute[254092]:  <target dev="tap09e835b8-70"/>
Nov 25 11:30:24 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:30:24 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 11:30:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 457 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.007 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.010 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <name>instance-00000018</name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:21</nova:creationTime>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:port uuid="09e835b8-70c9-4cb4-bbc2-63fab5f2592e">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='tap02104fc6-37'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:79:4a:54'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='tap09e835b8-70'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='net1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.025 254096 INFO nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap09e835b8-70 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the persistent domain config.
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.025 254096 DEBUG nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap09e835b8-70 with device alias net1 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.026 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:79:4a:54"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <target dev="tap09e835b8-70"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 11:30:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494457192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.132 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.133 254096 DEBUG nova.virt.libvirt.vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1874518354',display_name='tempest-ImagesNegativeTestJSON-server-1874518354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1874518354',id=26,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e1bcf74bb1148a3a0f388525c96c919',ramdisk_id='',reservation_id='r-cagts7p4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1461337409',owner_user_name='tempest-ImagesNegativeT
estJSON-1461337409-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:18Z,user_data=None,user_id='01171e7ab3a4447497eacf11bf89be63',uuid=dc3c86a9-91cf-42fb-b11c-7de3305d8388,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.133 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converting VIF {"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.134 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.135 254096 DEBUG nova.objects.instance [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc3c86a9-91cf-42fb-b11c-7de3305d8388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:25 np0005535469 kernel: tap09e835b8-70 (unregistering): left promiscuous mode
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.1472] device (tap09e835b8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.148 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <uuid>dc3c86a9-91cf-42fb-b11c-7de3305d8388</uuid>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <name>instance-0000001a</name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesNegativeTestJSON-server-1874518354</nova:name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:30:24</nova:creationTime>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:user uuid="01171e7ab3a4447497eacf11bf89be63">tempest-ImagesNegativeTestJSON-1461337409-project-member</nova:user>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:project uuid="4e1bcf74bb1148a3a0f388525c96c919">tempest-ImagesNegativeTestJSON-1461337409</nova:project>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <nova:port uuid="8daa55ae-6950-4c2f-8121-ce02930ab1d9">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name="serial">dc3c86a9-91cf-42fb-b11c-7de3305d8388</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name="uuid">dc3c86a9-91cf-42fb-b11c-7de3305d8388</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e0:3e:50"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev="tap8daa55ae-69"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/console.log" append="off"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.148 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Preparing to wait for external event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.149 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.149 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.149 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.150 254096 DEBUG nova.virt.libvirt.vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1874518354',display_name='tempest-ImagesNegativeTestJSON-server-1874518354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1874518354',id=26,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e1bcf74bb1148a3a0f388525c96c919',ramdisk_id='',reservation_id='r-cagts7p4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1461337409',owner_user_name='tempest-ImagesNegativeTestJSON-1461337409-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:18Z,user_data=None,user_id='01171e7ab3a4447497eacf11bf89be63',uuid=dc3c86a9-91cf-42fb-b11c-7de3305d8388,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.150 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converting VIF {"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.151 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.151 254096 DEBUG os_vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.153 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.153 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.155 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daa55ae-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.156 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8daa55ae-69, col_values=(('external_ids', {'iface-id': '8daa55ae-6950-4c2f-8121-ce02930ab1d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:3e:50', 'vm-uuid': 'dc3c86a9-91cf-42fb-b11c-7de3305d8388'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00105|binding|INFO|Releasing lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e from this chassis (sb_readonly=0)
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00106|binding|INFO|Setting lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e down in Southbound
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00107|binding|INFO|Removing iface tap09e835b8-70 ovn-installed in OVS
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.1590] manager: (tap8daa55ae-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.163 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088225.1634488, df0db130-3ffa-4a60-8f7d-fb285a797631 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.168 254096 DEBUG nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap09e835b8-70 with device alias net1 for instance df0db130-3ffa-4a60-8f7d-fb285a797631 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.169 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.173 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <name>instance-00000018</name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:21</nova:creationTime>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:port uuid="09e835b8-70c9-4cb4-bbc2-63fab5f2592e">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target dev='tap02104fc6-37'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.173 254096 INFO nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap09e835b8-70 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the live domain config.#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.174 254096 DEBUG nova.virt.libvirt.vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.176 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.177 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.177 254096 DEBUG os_vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.183 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.185 254096 INFO os_vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69')#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.186 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09e835b8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.193 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:4a:54 10.100.0.13'], port_security=['fa:16:3e:79:4a:54 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=09e835b8-70c9-4cb4-bbc2-63fab5f2592e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.194 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.196 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.198 254096 INFO os_vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70')#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.198 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:25</nova:creationTime>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:25 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:25 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:25 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4361dcb6-9402-4e19-9f20-e0abca65ad01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.243 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[677f96c0-40ed-4ae6-87eb-1b287e1e261e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.245 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6602c8fe-21d8-4d4c-9f4f-f7997ed4123b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.270 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2df52d-b49c-42c5-91af-727237d9aab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.286 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[265aab35-5959-44a9-a009-e01b9dd0fb43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286931, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c096d181-ad92-487b-8afc-499c5f245bcb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464522, 'tstamp': 464522}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286932, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464526, 'tstamp': 464526}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286932, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.301 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.310 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.329 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.330 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deleting local config drive /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] No VIF found with MAC fa:16:3e:e0:3e:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.361 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Using config drive#033[00m
Nov 25 11:30:25 np0005535469 kernel: tapfb46dd7a-52: entered promiscuous mode
Nov 25 11:30:25 np0005535469 systemd-udevd[286917]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.3765] manager: (tapfb46dd7a-52): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00108|binding|INFO|Claiming lport fb46dd7a-52d4-44cb-b99e-81d7d653885c for this chassis.
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00109|binding|INFO|fb46dd7a-52d4-44cb-b99e-81d7d653885c: Claiming fa:16:3e:d4:54:45 10.100.0.10
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.3878] device (tapfb46dd7a-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.3893] device (tapfb46dd7a-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00110|binding|INFO|Setting lport fb46dd7a-52d4-44cb-b99e-81d7d653885c ovn-installed in OVS
Nov 25 11:30:25 np0005535469 systemd-machined[216343]: New machine qemu-28-instance-00000019.
Nov 25 11:30:25 np0005535469 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00111|binding|INFO|Setting lport fb46dd7a-52d4-44cb-b99e-81d7d653885c up in Southbound
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.462 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:54:45 10.100.0.10'], port_security=['fa:16:3e:d4:54:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '33b19faf-57e1-463b-8b4a-b50479a0ef0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2f26334db2f4e2cadc5664efd73eb67', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0797621-f8b9-4c57-8ae8-f4d291e244fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6129ca0-5388-4dd2-a829-e686213800fe, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fb46dd7a-52d4-44cb-b99e-81d7d653885c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.463 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fb46dd7a-52d4-44cb-b99e-81d7d653885c in datapath 6ab64ae8-b8fa-4795-a243-9ebe45233e37 bound to our chassis#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.465 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ab64ae8-b8fa-4795-a243-9ebe45233e37#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.476 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d689522b-2f7d-47f6-9dd9-a6fda1349675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.477 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ab64ae8-b1 in ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.479 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ab64ae8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.479 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50aff5a9-0538-4f75-aa4e-7edf7b2d4cdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b25f70-2f42-4838-9572-7a77a7f664c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.492 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.498 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8fa3a9-2be1-4209-a47f-dc2b3026892e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4820a9-4b7f-4abd-a158-ad78fc487bcd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.548 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bcbec8-ca44-4516-be8f-d824ea4de090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f38ad65-e093-4503-9b37-f8c9752376bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.5544] manager: (tap6ab64ae8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.583 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[56414b02-5ec0-40c6-b47d-dfae6a578cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.585 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebb0c54-239a-4bfa-826d-95a64e09059c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.6089] device (tap6ab64ae8-b0): carrier: link connected
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.614 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c695c0c7-178c-4c97-9751-6075f856cddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dad3d284-faa0-4ef5-871b-85af700c6902]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ab64ae8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:67:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468319, 'reachable_time': 15631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287002, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.645 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[780c28c9-4563-4567-820a-037d63b4a63e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:6763'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468319, 'tstamp': 468319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287003, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.661 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38a0135a-ca13-46f4-9de7-bc06f923c5f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ab64ae8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:67:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468319, 'reachable_time': 15631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287004, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.691 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9c7c2c-5c54-4b97-bc7a-dd3162d9852e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.744 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d550e787-ac2e-43c5-bb1e-a93f947b83ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ab64ae8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ab64ae8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 kernel: tap6ab64ae8-b0: entered promiscuous mode
Nov 25 11:30:25 np0005535469 NetworkManager[48891]: <info>  [1764088225.7478] manager: (tap6ab64ae8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.753 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ab64ae8-b0, col_values=(('external_ids', {'iface-id': 'd23fff41-4296-47a5-8279-de62f83ea17d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:25Z|00112|binding|INFO|Releasing lport d23fff41-4296-47a5-8279-de62f83ea17d from this chassis (sb_readonly=0)
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.775 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ab64ae8-b8fa-4795-a243-9ebe45233e37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ab64ae8-b8fa-4795-a243-9ebe45233e37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0aa86b2-4595-4d89-b87f-7126a6f06ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.777 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-6ab64ae8-b8fa-4795-a243-9ebe45233e37
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/6ab64ae8-b8fa-4795-a243-9ebe45233e37.pid.haproxy
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 6ab64ae8-b8fa-4795-a243-9ebe45233e37
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.777 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'env', 'PROCESS_TAG=haproxy-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ab64ae8-b8fa-4795-a243-9ebe45233e37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.856 254096 DEBUG nova.network.neutron [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updated VIF entry in instance network info cache for port 8daa55ae-6950-4c2f-8121-ce02930ab1d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.857 254096 DEBUG nova.network.neutron [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updating instance_info_cache with network_info: [{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:25 np0005535469 nova_compute[254092]: 2025-11-25 16:30:25.880 254096 DEBUG oslo_concurrency.lockutils [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.990 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.050 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Creating config drive at /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.054 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ypur5pd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.205 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ypur5pd" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:26 np0005535469 podman[287042]: 2025-11-25 16:30:26.131504405 +0000 UTC m=+0.023307764 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.232 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.235 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.460 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-unplugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.461 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.461 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.461 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-unplugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 WARNING nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-unplugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.463 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.463 254096 WARNING nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.634 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.634 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.634 254096 DEBUG nova.network.neutron [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.655 254096 DEBUG nova.compute.manager [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-deleted-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.656 254096 INFO nova.compute.manager [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Neutron deleted interface 09e835b8-70c9-4cb4-bbc2-63fab5f2592e; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:30:26 np0005535469 nova_compute[254092]: 2025-11-25 16:30:26.656 254096 DEBUG nova.network.neutron [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:26 np0005535469 podman[287042]: 2025-11-25 16:30:26.731541503 +0000 UTC m=+0.623344852 container create 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:30:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 2.9 MiB/s wr, 96 op/s
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.105 254096 DEBUG nova.objects.instance [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.165 254096 DEBUG nova.objects.instance [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.191 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088227.1914814, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.192 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.248 254096 DEBUG nova.virt.libvirt.vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.248 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.249 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.252 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.253 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.256 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088227.1915882, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.256 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.257 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <name>instance-00000018</name>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:25</nova:creationTime>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target dev='tap02104fc6-37'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.258 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.261 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <name>instance-00000018</name>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:25</nova:creationTime>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target dev='tap02104fc6-37'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.262 254096 WARNING nova.virt.libvirt.driver [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Detaching interface fa:16:3e:79:4a:54 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap09e835b8-70' not found.
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.263 254096 DEBUG nova.virt.libvirt.vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.263 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.264 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.264 254096 DEBUG os_vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.269 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09e835b8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.270 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.272 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.273 254096 INFO os_vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70')#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.275 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:30:27</nova:creationTime>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 11:30:27 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:30:27 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:30:27 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.278 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.301 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:30:27 np0005535469 systemd[1]: Started libpod-conmon-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771.scope.
Nov 25 11:30:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:30:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972254736acb7813ada732b95c02a171ed8ca59846d32b38baa12f81aa1e9009/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:27 np0005535469 nova_compute[254092]: 2025-11-25 16:30:27.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:27 np0005535469 podman[287042]: 2025-11-25 16:30:27.608481412 +0000 UTC m=+1.500284781 container init 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 11:30:27 np0005535469 podman[287042]: 2025-11-25 16:30:27.6142942 +0000 UTC m=+1.506097549 container start 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:30:27 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : New worker (287146) forked
Nov 25 11:30:27 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : Loading success.
Nov 25 11:30:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.220 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.224 254096 INFO nova.compute.manager [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Terminating instance#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.224 254096 DEBUG nova.compute.manager [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.747 254096 INFO nova.network.neutron [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.747 254096 DEBUG nova.network.neutron [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.759 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.783 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-09e835b8-70c9-4cb4-bbc2-63fab5f2592e" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.843 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.844 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deleting local config drive /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config because it was imported into RBD.#033[00m
Nov 25 11:30:28 np0005535469 kernel: tap02104fc6-37 (unregistering): left promiscuous mode
Nov 25 11:30:28 np0005535469 NetworkManager[48891]: <info>  [1764088228.8495] device (tap02104fc6-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.886 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.887 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.888 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.888 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.889 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Processing event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.889 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.890 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.890 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.891 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.892 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] No waiting events found dispatching network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.892 254096 WARNING nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received unexpected event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.893 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.898 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088228.8981485, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.899 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.901 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.910 254096 INFO nova.virt.libvirt.driver [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance spawned successfully.#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.911 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:30:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:28Z|00113|binding|INFO|Releasing lport 02104fc6-3780-400d-a6c2-577082384680 from this chassis (sb_readonly=0)
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:28Z|00114|binding|INFO|Setting lport 02104fc6-3780-400d-a6c2-577082384680 down in Southbound
Nov 25 11:30:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Nov 25 11:30:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:28Z|00115|binding|INFO|Removing iface tap02104fc6-37 ovn-installed in OVS
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.928 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.941 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.942 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.942 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.942 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.943 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.943 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.947 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:30:28 np0005535469 NetworkManager[48891]: <info>  [1764088228.9509] manager: (tap8daa55ae-69): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Nov 25 11:30:28 np0005535469 systemd-udevd[287164]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:30:28 np0005535469 kernel: tap8daa55ae-69: entered promiscuous mode
Nov 25 11:30:28 np0005535469 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 25 11:30:28 np0005535469 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Consumed 15.640s CPU time.
Nov 25 11:30:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:28Z|00116|if_status|INFO|Not updating pb chassis for 8daa55ae-6950-4c2f-8121-ce02930ab1d9 now as sb is readonly
Nov 25 11:30:28 np0005535469 systemd-machined[216343]: Machine qemu-26-instance-00000018 terminated.
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:28 np0005535469 NetworkManager[48891]: <info>  [1764088228.9629] device (tap8daa55ae-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:30:28 np0005535469 NetworkManager[48891]: <info>  [1764088228.9643] device (tap8daa55ae-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.982 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:28 np0005535469 systemd-machined[216343]: New machine qemu-29-instance-0000001a.
Nov 25 11:30:28 np0005535469 nova_compute[254092]: 2025-11-25 16:30:28.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:28 np0005535469 systemd[1]: Started Virtual Machine qemu-29-instance-0000001a.
Nov 25 11:30:29 np0005535469 NetworkManager[48891]: <info>  [1764088229.0571] manager: (tap02104fc6-37): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.071 254096 INFO nova.virt.libvirt.driver [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance destroyed successfully.
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.071 254096 DEBUG nova.objects.instance [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'resources' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.081 254096 DEBUG nova.virt.libvirt.vif [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.082 254096 DEBUG nova.network.os_vif_util [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.082 254096 DEBUG nova.network.os_vif_util [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.083 254096 DEBUG os_vif [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02104fc6-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.087 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.089 254096 INFO os_vif [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37')
Nov 25 11:30:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:29Z|00117|binding|INFO|Claiming lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 for this chassis.
Nov 25 11:30:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:29Z|00118|binding|INFO|8daa55ae-6950-4c2f-8121-ce02930ab1d9: Claiming fa:16:3e:e0:3e:50 10.100.0.13
Nov 25 11:30:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.205 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:1b:ad 10.100.0.14'], port_security=['fa:16:3e:ff:1b:ad 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '55a7690b-4aae-4eb8-9614-a3e59161db74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=02104fc6-3780-400d-a6c2-577082384680) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:30:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.207 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 02104fc6-3780-400d-a6c2-577082384680 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 11:30:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:29Z|00119|binding|INFO|Setting lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 ovn-installed in OVS
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:30:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.209 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:30:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.210 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef59f743-86ca-4f01-95fd-0aac94773f3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:30:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.211 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace which is not needed anymore
Nov 25 11:30:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.379 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:50 10.100.0.13'], port_security=['fa:16:3e:e0:3e:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dc3c86a9-91cf-42fb-b11c-7de3305d8388', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d276419-01f1-4c77-9031-28a38923f36b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e1bcf74bb1148a3a0f388525c96c919', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93dbcee8-770d-45ab-a643-3e941ad2b6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71f4e8d2-793d-4f17-8ffd-c8df15f9380b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8daa55ae-6950-4c2f-8121-ce02930ab1d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:30:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:29Z|00120|binding|INFO|Setting lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 up in Southbound
Nov 25 11:30:29 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : haproxy version is 2.8.14-c23fe91
Nov 25 11:30:29 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : path to executable is /usr/sbin/haproxy
Nov 25 11:30:29 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [WARNING]  (285417) : Exiting Master process...
Nov 25 11:30:29 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [WARNING]  (285417) : Exiting Master process...
Nov 25 11:30:29 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [ALERT]    (285417) : Current worker (285428) exited with code 143 (Terminated)
Nov 25 11:30:29 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [WARNING]  (285417) : All workers exited. Exiting... (0)
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.500 254096 INFO nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 16.85 seconds to spawn the instance on the hypervisor.
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.500 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:30:29 np0005535469 systemd[1]: libpod-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d.scope: Deactivated successfully.
Nov 25 11:30:29 np0005535469 podman[287231]: 2025-11-25 16:30:29.511586319 +0000 UTC m=+0.206710172 container died 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.686 254096 INFO nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 18.15 seconds to build instance.
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.722 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.824 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting instance files /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del
Nov 25 11:30:29 np0005535469 nova_compute[254092]: 2025-11-25 16:30:29.825 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deletion of /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del complete
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.079 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.080 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating image(s)
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.099 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d-userdata-shm.mount: Deactivated successfully.
Nov 25 11:30:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3fe048199884e7d28c39687c78a39d5bf8b12cab03e5bd84735091209e7c9cff-merged.mount: Deactivated successfully.
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.128 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.148 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.152 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.183 254096 DEBUG nova.compute.manager [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.184 254096 DEBUG oslo_concurrency.lockutils [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.184 254096 DEBUG oslo_concurrency.lockutils [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.184 254096 DEBUG oslo_concurrency.lockutils [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.185 254096 DEBUG nova.compute.manager [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Processing event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.223 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.224 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.224 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.225 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.259 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.270 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.331 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088230.3313322, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.332 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Started (Lifecycle Event)
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.335 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.340 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.358 254096 INFO nova.virt.libvirt.driver [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance spawned successfully.
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.361 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.364 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.392 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088230.335023, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.392 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.398 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.399 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.399 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.399 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.400 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.400 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.417 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.420 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088230.3396242, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.421 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.440 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:30:30 np0005535469 podman[287231]: 2025-11-25 16:30:30.451171982 +0000 UTC m=+1.146295835 container cleanup 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:30:30 np0005535469 systemd[1]: libpod-conmon-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d.scope: Deactivated successfully.
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.484 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.594 254096 INFO nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 12.31 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.595 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.791 254096 INFO nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 13.79 seconds to build instance.#033[00m
Nov 25 11:30:30 np0005535469 nova_compute[254092]: 2025-11-25 16:30:30.840 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1017 KiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 11:30:31 np0005535469 podman[287396]: 2025-11-25 16:30:31.320889264 +0000 UTC m=+0.847187171 container remove 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.326 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6dde58c4-0bc1-4148-8b59-f89c2ea57ee2]: (4, ('Tue Nov 25 04:30:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d)\n9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d\nTue Nov 25 04:30:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d)\n9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f54a3ef7-43b4-49fe-93ef-e734ce73019b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:31 np0005535469 nova_compute[254092]: 2025-11-25 16:30:31.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:31 np0005535469 kernel: tap52e7d5b9-00: left promiscuous mode
Nov 25 11:30:31 np0005535469 nova_compute[254092]: 2025-11-25 16:30:31.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.334 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99fb2cc9-0054-4870-90fd-5981750ddb9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.348 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa3c591-7215-46ec-bd89-88ace77c7640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.351 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[622d48c9-41d4-4959-bfaa-a84ce1ccbaac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 nova_compute[254092]: 2025-11-25 16:30:31.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[437a4e85-4867-4319-800f-29a8799972de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464499, 'reachable_time': 32001, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287412, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 systemd[1]: run-netns-ovnmeta\x2d52e7d5b9\x2d0570\x2d4e5c\x2db3da\x2d9dfcb924b83d.mount: Deactivated successfully.
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.375 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.375 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[02d46be6-6139-4204-b972-e1265a526cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.376 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8daa55ae-6950-4c2f-8121-ce02930ab1d9 in datapath 1d276419-01f1-4c77-9031-28a38923f36b unbound from our chassis#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.378 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d276419-01f1-4c77-9031-28a38923f36b#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.389 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ab923d-08ef-425e-bc5d-9a3713bf4726]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.390 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d276419-01 in ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.392 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d276419-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.393 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7adca014-382b-4b8e-838e-7d1c92e58d16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.394 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7379971-b511-4573-b718-90d0a773ad04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.404 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4446d455-0d7f-4fb5-ad2f-d2c4db9cbd6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55630daa-5bdd-4a2a-bfb5-a766d3009859]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.443 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[23c6bdde-85a1-4ccd-bcef-280a1807d715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[430e6186-6d8a-46d1-aae3-7068f845ac54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 NetworkManager[48891]: <info>  [1764088231.4514] manager: (tap1d276419-00): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.487 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f5356c79-c783-4d2c-b66e-7ec7844e7b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.490 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf10e76-9c0f-468d-899d-585b15cc7f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 NetworkManager[48891]: <info>  [1764088231.5158] device (tap1d276419-00): carrier: link connected
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.521 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4b36b3-1728-418a-bae3-e48b0dc78906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.538 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9c1231-7c0e-4df8-b2fb-fc1a4aacea0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d276419-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:0e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468910, 'reachable_time': 43373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287440, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c530ef1d-3c92-4e69-9527-82140712af9d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:e94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468910, 'tstamp': 468910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287441, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.577 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8ae60b-82b3-430c-8a0c-941d30da7ff2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d276419-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:0e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468910, 'reachable_time': 43373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287442, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.610 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1791b1-e594-4e93-9884-d68b5082397c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.674 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a528fa4d-d03c-4037-9f90-5bf34a8c7e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d276419-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d276419-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:31 np0005535469 nova_compute[254092]: 2025-11-25 16:30:31.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:31 np0005535469 NetworkManager[48891]: <info>  [1764088231.6790] manager: (tap1d276419-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 25 11:30:31 np0005535469 kernel: tap1d276419-00: entered promiscuous mode
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.680 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d276419-00, col_values=(('external_ids', {'iface-id': 'a9472e97-6aa6-485c-8695-66d5c06679d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:31 np0005535469 nova_compute[254092]: 2025-11-25 16:30:31.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:31Z|00121|binding|INFO|Releasing lport a9472e97-6aa6-485c-8695-66d5c06679d5 from this chassis (sb_readonly=0)
Nov 25 11:30:31 np0005535469 nova_compute[254092]: 2025-11-25 16:30:31.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.698 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d276419-01f1-4c77-9031-28a38923f36b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d276419-01f1-4c77-9031-28a38923f36b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8de3374f-5d8c-4a74-b5ea-27030381238a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.702 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-1d276419-01f1-4c77-9031-28a38923f36b
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/1d276419-01f1-4c77-9031-28a38923f36b.pid.haproxy
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 1d276419-01f1-4c77-9031-28a38923f36b
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:30:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.702 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'env', 'PROCESS_TAG=haproxy-1d276419-01f1-4c77-9031-28a38923f36b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d276419-01f1-4c77-9031-28a38923f36b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.119 254096 DEBUG nova.compute.manager [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-unplugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.119 254096 DEBUG oslo_concurrency.lockutils [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG oslo_concurrency.lockutils [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG oslo_concurrency.lockutils [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG nova.compute.manager [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-unplugged-02104fc6-3780-400d-a6c2-577082384680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG nova.compute.manager [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-unplugged-02104fc6-3780-400d-a6c2-577082384680 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:30:32 np0005535469 podman[287474]: 2025-11-25 16:30:32.032161288 +0000 UTC m=+0.020949051 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:30:32 np0005535469 podman[287474]: 2025-11-25 16:30:32.356152849 +0000 UTC m=+0.344940592 container create f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:30:32 np0005535469 systemd[1]: Started libpod-conmon-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0.scope.
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:30:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4de00267f053678af55b710532281e3e8b0c27859e6b07f037a9b50c7bcde3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.676 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088217.6750712, 3375e096-321c-459b-8b6a-e085bb62872f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.677 254096 INFO nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.678 254096 DEBUG nova.compute.manager [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG oslo_concurrency.lockutils [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG oslo_concurrency.lockutils [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG oslo_concurrency.lockutils [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG nova.compute.manager [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] No waiting events found dispatching network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.680 254096 WARNING nova.compute.manager [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received unexpected event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:32 np0005535469 nova_compute[254092]: 2025-11-25 16:30:32.696 254096 DEBUG nova.compute.manager [None req-24f29e88-a0eb-49f8-b44b-27436800971d - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:32 np0005535469 podman[287474]: 2025-11-25 16:30:32.740146272 +0000 UTC m=+0.728934015 container init f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 11:30:32 np0005535469 podman[287474]: 2025-11-25 16:30:32.745718094 +0000 UTC m=+0.734505837 container start f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:30:32 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : New worker (287495) forked
Nov 25 11:30:32 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : Loading success.
Nov 25 11:30:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 410 KiB/s wr, 114 op/s
Nov 25 11:30:33 np0005535469 nova_compute[254092]: 2025-11-25 16:30:33.141 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.871s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:33 np0005535469 nova_compute[254092]: 2025-11-25 16:30:33.270 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.117 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.118 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ensure instance console log exists: /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.118 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.118 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.119 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.121 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start _get_guest_xml network_info=[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.125 254096 WARNING nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.129 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.129 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.131 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.132 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.132 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.132 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.135 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.135 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.156 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.331 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.332 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.333 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.333 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.333 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.335 254096 INFO nova.compute.manager [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Terminating instance#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.336 254096 DEBUG nova.compute.manager [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:34 np0005535469 kernel: tap8daa55ae-69 (unregistering): left promiscuous mode
Nov 25 11:30:34 np0005535469 NetworkManager[48891]: <info>  [1764088234.4023] device (tap8daa55ae-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:34Z|00122|binding|INFO|Releasing lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 from this chassis (sb_readonly=0)
Nov 25 11:30:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:34Z|00123|binding|INFO|Setting lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 down in Southbound
Nov 25 11:30:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:34Z|00124|binding|INFO|Removing iface tap8daa55ae-69 ovn-installed in OVS
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:34 np0005535469 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 25 11:30:34 np0005535469 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Consumed 4.649s CPU time.
Nov 25 11:30:34 np0005535469 systemd-machined[216343]: Machine qemu-29-instance-0000001a terminated.
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.573 254096 INFO nova.virt.libvirt.driver [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance destroyed successfully.#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.574 254096 DEBUG nova.objects.instance [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lazy-loading 'resources' on Instance uuid dc3c86a9-91cf-42fb-b11c-7de3305d8388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.591 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:50 10.100.0.13'], port_security=['fa:16:3e:e0:3e:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dc3c86a9-91cf-42fb-b11c-7de3305d8388', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d276419-01f1-4c77-9031-28a38923f36b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e1bcf74bb1148a3a0f388525c96c919', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93dbcee8-770d-45ab-a643-3e941ad2b6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71f4e8d2-793d-4f17-8ffd-c8df15f9380b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8daa55ae-6950-4c2f-8121-ce02930ab1d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.594 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8daa55ae-6950-4c2f-8121-ce02930ab1d9 in datapath 1d276419-01f1-4c77-9031-28a38923f36b unbound from our chassis#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d276419-01f1-4c77-9031-28a38923f36b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0227ac-a2df-4c60-ad49-285d01542b5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.598 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b namespace which is not needed anymore#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.598 254096 DEBUG nova.virt.libvirt.vif [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1874518354',display_name='tempest-ImagesNegativeTestJSON-server-1874518354',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1874518354',id=26,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e1bcf74bb1148a3a0f388525c96c919',ramdisk_id='',reservation_id='r-cagts7p4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1461337409',owner_user_name='tempest-ImagesNegativeTestJSON-1461337409-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:30Z,user_data=None,user_id='01171e7ab3a4447497eacf11bf89be63',uuid=dc3c86a9-91cf-42fb-b11c-7de3305d8388,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.598 254096 DEBUG nova.network.os_vif_util [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converting VIF {"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.599 254096 DEBUG nova.network.os_vif_util [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.599 254096 DEBUG os_vif [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.601 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daa55ae-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.607 254096 INFO os_vif [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69')#033[00m
Nov 25 11:30:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939421658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.669 254096 DEBUG nova.compute.manager [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG oslo_concurrency.lockutils [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG oslo_concurrency.lockutils [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG oslo_concurrency.lockutils [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG nova.compute.manager [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 WARNING nova.compute.manager [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.695 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.716 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.720 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:34 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : haproxy version is 2.8.14-c23fe91
Nov 25 11:30:34 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : path to executable is /usr/sbin/haproxy
Nov 25 11:30:34 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [WARNING]  (287493) : Exiting Master process...
Nov 25 11:30:34 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [ALERT]    (287493) : Current worker (287495) exited with code 143 (Terminated)
Nov 25 11:30:34 np0005535469 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [WARNING]  (287493) : All workers exited. Exiting... (0)
Nov 25 11:30:34 np0005535469 systemd[1]: libpod-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0.scope: Deactivated successfully.
Nov 25 11:30:34 np0005535469 podman[287653]: 2025-11-25 16:30:34.772915415 +0000 UTC m=+0.065812042 container died f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:30:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0-userdata-shm.mount: Deactivated successfully.
Nov 25 11:30:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a4de00267f053678af55b710532281e3e8b0c27859e6b07f037a9b50c7bcde3c-merged.mount: Deactivated successfully.
Nov 25 11:30:34 np0005535469 podman[287653]: 2025-11-25 16:30:34.85696666 +0000 UTC m=+0.149863267 container cleanup f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:30:34 np0005535469 systemd[1]: libpod-conmon-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0.scope: Deactivated successfully.
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.874 254096 INFO nova.virt.libvirt.driver [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deleting instance files /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631_del#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.875 254096 INFO nova.virt.libvirt.driver [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deletion of /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631_del complete#033[00m
Nov 25 11:30:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 424 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 205 op/s
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.953 254096 INFO nova.compute.manager [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 6.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.954 254096 DEBUG oslo.service.loopingcall [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.954 254096 DEBUG nova.compute.manager [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.955 254096 DEBUG nova.network.neutron [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:30:34 np0005535469 podman[287709]: 2025-11-25 16:30:34.962749327 +0000 UTC m=+0.079772500 container remove f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.970 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c057820b-9808-4580-86c0-aba7443a356b]: (4, ('Tue Nov 25 04:30:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b (f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0)\nf4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0\nTue Nov 25 04:30:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b (f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0)\nf4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.973 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a66e12dc-ad19-4c7c-bb94-dda519bd3a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d276419-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:34 np0005535469 kernel: tap1d276419-00: left promiscuous mode
Nov 25 11:30:34 np0005535469 nova_compute[254092]: 2025-11-25 16:30:34.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.002 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4f3ae4-c3cd-49df-a314-11a7bed39317]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.020 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17877fd8-d943-402f-9929-bdf1c1c0fc0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d685c491-7e6b-4559-9c34-57052948f9fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28e193df-2982-41f2-9f63-9ef6ed0e3b12]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468902, 'reachable_time': 42220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287733, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:35 np0005535469 systemd[1]: run-netns-ovnmeta\x2d1d276419\x2d01f1\x2d4c77\x2d9031\x2d28a38923f36b.mount: Deactivated successfully.
Nov 25 11:30:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.051 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:30:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.052 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a6f9a2-ee34-4939-b339-d9654de7206c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833353936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.290 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.293 254096 DEBUG nova.virt.libvirt.vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdmin
TestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:29Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.294 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.296 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.299 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <uuid>3375e096-321c-459b-8b6a-e085bb62872f</uuid>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <name>instance-00000012</name>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAdminTestJSON-server-1705426121</nova:name>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:30:34</nova:creationTime>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <nova:port uuid="d6146886-91a1-4d5f-9234-e1d0154b4230">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <entry name="serial">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <entry name="uuid">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk.config">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:dd:a2:8e"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <target dev="tapd6146886-91"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log" append="off"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:30:35 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:35 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:35 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:35 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.307 254096 DEBUG nova.virt.libvirt.vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdmin
TestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:29Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.308 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.309 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.310 254096 DEBUG os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.311 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6146886-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.317 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6146886-91, col_values=(('external_ids', {'iface-id': 'd6146886-91a1-4d5f-9234-e1d0154b4230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:a2:8e', 'vm-uuid': '3375e096-321c-459b-8b6a-e085bb62872f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:35 np0005535469 NetworkManager[48891]: <info>  [1764088235.3197] manager: (tapd6146886-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.326 254096 INFO os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.573 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.573 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.574 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:dd:a2:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.574 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Using config drive#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.595 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.617 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.655 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'keypairs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.786 254096 DEBUG nova.compute.manager [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-unplugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.786 254096 DEBUG oslo_concurrency.lockutils [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.787 254096 DEBUG oslo_concurrency.lockutils [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.787 254096 DEBUG oslo_concurrency.lockutils [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.787 254096 DEBUG nova.compute.manager [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] No waiting events found dispatching network-vif-unplugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:35 np0005535469 nova_compute[254092]: 2025-11-25 16:30:35.788 254096 DEBUG nova.compute.manager [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-unplugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.071 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating config drive at /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.078 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp67rqo0p6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.207 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp67rqo0p6" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.231 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.236 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.294 254096 INFO nova.virt.libvirt.driver [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deleting instance files /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388_del#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.296 254096 INFO nova.virt.libvirt.driver [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deletion of /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388_del complete#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.392 254096 INFO nova.compute.manager [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 2.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.393 254096 DEBUG oslo.service.loopingcall [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.393 254096 DEBUG nova.compute.manager [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.394 254096 DEBUG nova.network.neutron [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.418 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.419 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting local config drive /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:30:36 np0005535469 kernel: tapd6146886-91: entered promiscuous mode
Nov 25 11:30:36 np0005535469 systemd-udevd[287601]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:30:36 np0005535469 NetworkManager[48891]: <info>  [1764088236.4646] manager: (tapd6146886-91): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 25 11:30:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:36Z|00125|binding|INFO|Claiming lport d6146886-91a1-4d5f-9234-e1d0154b4230 for this chassis.
Nov 25 11:30:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:36Z|00126|binding|INFO|d6146886-91a1-4d5f-9234-e1d0154b4230: Claiming fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:36 np0005535469 NetworkManager[48891]: <info>  [1764088236.4775] device (tapd6146886-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:30:36 np0005535469 NetworkManager[48891]: <info>  [1764088236.4799] device (tapd6146886-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.481 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.486 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis#033[00m
Nov 25 11:30:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:36Z|00127|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 ovn-installed in OVS
Nov 25 11:30:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:36Z|00128|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 up in Southbound
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.489 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.491 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.505 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad82042-461f-4827-911e-780e30f0b598]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:36 np0005535469 systemd-machined[216343]: New machine qemu-30-instance-00000012.
Nov 25 11:30:36 np0005535469 systemd[1]: Started Virtual Machine qemu-30-instance-00000012.
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.539 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ad909832-154b-4b43-9800-052afb622509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.543 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[561e4298-a891-4e04-8942-03f75577ebfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.574 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[07273b1f-cc8d-4112-bdd0-815ea730445c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a685e7c-08ae-4b2c-bdc2-36347c433885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287821, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c747e66-ae1d-4042-a4fc-5e682870abfc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.631 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.632 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.632 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.633 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.765 254096 DEBUG nova.network.neutron [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.783 254096 INFO nova.compute.manager [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 1.83 seconds to deallocate network for instance.#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.842 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:36 np0005535469 nova_compute[254092]: 2025-11-25 16:30:36.843 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 407 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.060 254096 DEBUG oslo_concurrency.processutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.136 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088237.1355405, 3375e096-321c-459b-8b6a-e085bb62872f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.137 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.141 254096 DEBUG nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.141 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.146 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance spawned successfully.#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.146 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.181 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.190 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.191 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.193 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.193 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.201 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.229 254096 DEBUG nova.network.neutron [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.255 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.255 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088237.144586, 3375e096-321c-459b-8b6a-e085bb62872f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.255 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.268 254096 INFO nova.compute.manager [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.275 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.280 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.286 254096 DEBUG nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.318 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.357 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.358 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171751371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.561 254096 DEBUG oslo_concurrency.processutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.567 254096 DEBUG nova.compute.provider_tree [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.582 254096 DEBUG nova.scheduler.client.report [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.610 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.612 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.648 254096 INFO nova.scheduler.client.report [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Deleted allocations for instance df0db130-3ffa-4a60-8f7d-fb285a797631#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.725 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.782 254096 DEBUG nova.compute.manager [req-f92792e7-9c1c-4878-bb24-0e6f9cdf07f2 req-91033713-4ffa-42ee-a09a-a7aee7e290df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-deleted-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.804 254096 DEBUG oslo_concurrency.processutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.926 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.927 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing instance network info cache due to event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.927 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.928 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:37 np0005535469 nova_compute[254092]: 2025-11-25 16:30:37.928 254096 DEBUG nova.network.neutron [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:30:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041010035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.335 254096 DEBUG oslo_concurrency.processutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.340 254096 DEBUG nova.compute.provider_tree [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.356 254096 DEBUG nova.scheduler.client.report [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.382 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.385 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.385 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.428 254096 INFO nova.scheduler.client.report [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Deleted allocations for instance dc3c86a9-91cf-42fb-b11c-7de3305d8388#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.453 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:38 np0005535469 nova_compute[254092]: 2025-11-25 16:30:38.503 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 407 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.279 254096 DEBUG nova.network.neutron [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updated VIF entry in instance network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.280 254096 DEBUG nova.network.neutron [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.299 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.299 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.299 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.300 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.300 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.300 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] No waiting events found dispatching network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 WARNING nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received unexpected event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-deleted-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:39 np0005535469 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 WARNING nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.001 254096 DEBUG nova.compute.manager [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.001 254096 DEBUG oslo_concurrency.lockutils [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.001 254096 DEBUG oslo_concurrency.lockutils [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.002 254096 DEBUG oslo_concurrency.lockutils [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.002 254096 DEBUG nova.compute.manager [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.002 254096 WARNING nova.compute.manager [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:30:40
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.084 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.084 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.085 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.085 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.087 254096 INFO nova.compute.manager [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Terminating instance#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.088 254096 DEBUG nova.compute.manager [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:40 np0005535469 kernel: tap0f27a287-0c (unregistering): left promiscuous mode
Nov 25 11:30:40 np0005535469 NetworkManager[48891]: <info>  [1764088240.1366] device (tap0f27a287-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:40Z|00129|binding|INFO|Releasing lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 from this chassis (sb_readonly=0)
Nov 25 11:30:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:40Z|00130|binding|INFO|Setting lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 down in Southbound
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:40Z|00131|binding|INFO|Removing iface tap0f27a287-0c ovn-installed in OVS
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.152 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:bd:39 10.100.0.9'], port_security=['fa:16:3e:a8:bd:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7777dd86-925e-4f98-bd68-e38ac540d97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0f27a287-0c09-4767-a6cf-a7f4f8870ea1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.155 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.157 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cc8691-7080-4704-b6b1-05e8074870c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.207 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6316209f-aff6-4eca-8591-c23936611d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.210 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e74b7d-f80e-40ff-80eb-7799d218ca9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:40 np0005535469 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 25 11:30:40 np0005535469 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 17.802s CPU time.
Nov 25 11:30:40 np0005535469 systemd-machined[216343]: Machine qemu-25-instance-00000017 terminated.
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.242 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7891a214-50ba-40c9-932e-ba59fbec2514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.259 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d67b4f0-709d-4794-b65d-f168610237d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287921, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.279 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ceaf5dab-6117-4b2c-8d8d-0b1df0208682]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287922, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287922, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.281 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.323 254096 INFO nova.virt.libvirt.driver [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance destroyed successfully.#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.323 254096 DEBUG nova.objects.instance [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 7777dd86-925e-4f98-bd68-e38ac540d97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.335 254096 DEBUG nova.virt.libvirt.vif [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-890496249',display_name='tempest-ServersAdminTestJSON-server-890496249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-890496249',id=23,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-pzomq4ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:36Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=7777dd86-925e-4f98-bd68-e38ac540d97b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.337 254096 DEBUG nova.network.os_vif_util [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.338 254096 DEBUG nova.network.os_vif_util [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.339 254096 DEBUG os_vif [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.341 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f27a287-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.347 254096 INFO os_vif [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c')#033[00m
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.903 254096 INFO nova.virt.libvirt.driver [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deleting instance files /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b_del#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.903 254096 INFO nova.virt.libvirt.driver [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deletion of /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b_del complete#033[00m
Nov 25 11:30:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 281 op/s
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.948 254096 INFO nova.compute.manager [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.948 254096 DEBUG oslo.service.loopingcall [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.948 254096 DEBUG nova.compute.manager [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:30:40 np0005535469 nova_compute[254092]: 2025-11-25 16:30:40.949 254096 DEBUG nova.network.neutron [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:30:41 np0005535469 nova_compute[254092]: 2025-11-25 16:30:41.454 254096 DEBUG nova.network.neutron [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:41 np0005535469 nova_compute[254092]: 2025-11-25 16:30:41.467 254096 INFO nova.compute.manager [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 0.52 seconds to deallocate network for instance.#033[00m
Nov 25 11:30:41 np0005535469 nova_compute[254092]: 2025-11-25 16:30:41.508 254096 DEBUG nova.compute.manager [req-43da5c4c-24cf-4713-9b0a-a2504e5e3adf req-dc9d189e-3c1e-4566-8725-cfee57909a22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-deleted-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:41 np0005535469 nova_compute[254092]: 2025-11-25 16:30:41.510 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:41 np0005535469 nova_compute[254092]: 2025-11-25 16:30:41.511 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:41 np0005535469 nova_compute[254092]: 2025-11-25 16:30:41.633 254096 DEBUG oslo_concurrency.processutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:41Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:54:45 10.100.0.10
Nov 25 11:30:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:41Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:54:45 10.100.0.10
Nov 25 11:30:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788548521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.063 254096 DEBUG oslo_concurrency.processutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.069 254096 DEBUG nova.compute.provider_tree [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-unplugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] No waiting events found dispatching network-vif-unplugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 WARNING nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received unexpected event network-vif-unplugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.084 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] No waiting events found dispatching network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.084 254096 WARNING nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received unexpected event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.089 254096 DEBUG nova.scheduler.client.report [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.109 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.135 254096 INFO nova.scheduler.client.report [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance 7777dd86-925e-4f98-bd68-e38ac540d97b#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.208 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:42 np0005535469 nova_compute[254092]: 2025-11-25 16:30:42.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 346 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.256 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.256 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.257 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.258 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.258 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.259 254096 INFO nova.compute.manager [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Terminating instance#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.260 254096 DEBUG nova.compute.manager [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:43 np0005535469 kernel: tap0d1cf86d-66 (unregistering): left promiscuous mode
Nov 25 11:30:43 np0005535469 NetworkManager[48891]: <info>  [1764088243.3239] device (tap0d1cf86d-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00132|binding|INFO|Releasing lport 0d1cf86d-6639-47eb-8de1-718476d1c006 from this chassis (sb_readonly=0)
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00133|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 down in Southbound
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00134|binding|INFO|Removing iface tap0d1cf86d-66 ovn-installed in OVS
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.352 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.353 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.354 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.382 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[95d99d47-8093-4fea-9007-4a7f75fd01a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 25 11:30:43 np0005535469 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 16.532s CPU time.
Nov 25 11:30:43 np0005535469 systemd-machined[216343]: Machine qemu-24-instance-00000016 terminated.
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.407 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[264a7b16-f966-457b-a029-31b10a649f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.410 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f51670f9-6abb-4656-9a28-f077d423bbf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.435 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2978b643-e178-470f-8a58-799ac2479833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b0e3b2-4e89-418b-b20f-70f9ef1f210f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287984, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.461 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c8cf46-f140-41cb-adb2-881cfd7da3ba]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287985, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287985, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.463 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.468 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.468 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:43 np0005535469 systemd-udevd[287975]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:30:43 np0005535469 kernel: tap0d1cf86d-66: entered promiscuous mode
Nov 25 11:30:43 np0005535469 NetworkManager[48891]: <info>  [1764088243.4928] manager: (tap0d1cf86d-66): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00135|binding|INFO|Claiming lport 0d1cf86d-6639-47eb-8de1-718476d1c006 for this chassis.
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00136|binding|INFO|0d1cf86d-6639-47eb-8de1-718476d1c006: Claiming fa:16:3e:78:52:60 10.100.0.10
Nov 25 11:30:43 np0005535469 kernel: tap0d1cf86d-66 (unregistering): left promiscuous mode
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.500 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.500 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.502 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.504 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.518 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d57a9d49-cf8b-4d09-8b66-57b5fcca1158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00137|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 ovn-installed in OVS
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00138|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 up in Southbound
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00139|binding|INFO|Releasing lport 0d1cf86d-6639-47eb-8de1-718476d1c006 from this chassis (sb_readonly=1)
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00140|if_status|INFO|Not setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 down as sb is readonly
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00141|binding|INFO|Removing iface tap0d1cf86d-66 ovn-installed in OVS
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00142|binding|INFO|Releasing lport 0d1cf86d-6639-47eb-8de1-718476d1c006 from this chassis (sb_readonly=0)
Nov 25 11:30:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:43Z|00143|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 down in Southbound
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.531 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.531 254096 INFO nova.virt.libvirt.driver [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance destroyed successfully.#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.532 254096 DEBUG nova.objects.instance [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.543 254096 DEBUG nova.virt.libvirt.vif [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2013800485',display_name='tempest-ServersAdminTestJSON-server-2013800485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2013800485',id=22,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-p696wseq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:17Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=f0cb83d8-c2a3-49d1-8c01-b9be9922abd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.544 254096 DEBUG nova.network.os_vif_util [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.544 254096 DEBUG nova.network.os_vif_util [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.544 254096 DEBUG os_vif [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.546 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d1cf86d-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.549 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef4213-d1bf-4f6b-82db-20482ea5c03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.550 254096 INFO os_vif [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66')#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.557 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[00dcf611-265f-4f6c-bdcd-1a5e748baaa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.586 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[388fc9a7-4080-4efd-8474-133ef1021563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8894b2cc-59ed-4724-bdd7-858d0459531c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288014, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.621 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[773e6d23-b571-434e-ad25-cf07275f3f5f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288015, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288015, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.623 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.625 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.625 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.627 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.629 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51057887-a864-4962-ba87-fc855aa69e30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.676 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[12c3f08a-3543-448d-80e4-f665b7a0d70e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.679 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[87055f16-de57-407e-b7b0-ed3e3f3bc3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.710 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2f7742-e3bd-41ca-be7a-ff60d43ca328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.728 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b0512fca-ec5c-410f-bc99-24c9e8710546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288021, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf8e57d-18cc-449f-945a-4a62bb220eec]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288023, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288023, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.746 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.894 254096 INFO nova.virt.libvirt.driver [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deleting instance files /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_del#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.895 254096 INFO nova.virt.libvirt.driver [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deletion of /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_del complete#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.951 254096 INFO nova.compute.manager [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.951 254096 DEBUG oslo.service.loopingcall [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.952 254096 DEBUG nova.compute.manager [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:30:43 np0005535469 nova_compute[254092]: 2025-11-25 16:30:43.952 254096 DEBUG nova.network.neutron [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.067 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088229.0663986, df0db130-3ffa-4a60-8f7d-fb285a797631 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.068 254096 INFO nova.compute.manager [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.086 254096 DEBUG nova.compute.manager [None req-5f6deaa3-bd11-45ff-a118-6d1ffc875df9 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:44Z|00144|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 11:30:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:44Z|00145|binding|INFO|Releasing lport d23fff41-4296-47a5-8279-de62f83ea17d from this chassis (sb_readonly=0)
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.289 254096 DEBUG nova.compute.manager [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-unplugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG oslo_concurrency.lockutils [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG oslo_concurrency.lockutils [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG oslo_concurrency.lockutils [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG nova.compute.manager [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] No waiting events found dispatching network-vif-unplugged-0d1cf86d-6639-47eb-8de1-718476d1c006 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG nova.compute.manager [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-unplugged-0d1cf86d-6639-47eb-8de1-718476d1c006 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.853 254096 DEBUG nova.network.neutron [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.872 254096 INFO nova.compute.manager [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 25 11:30:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 273 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 344 op/s
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.930 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:44 np0005535469 nova_compute[254092]: 2025-11-25 16:30:44.932 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.043 254096 DEBUG oslo_concurrency.processutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953352328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.477 254096 DEBUG oslo_concurrency.processutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.483 254096 DEBUG nova.compute.provider_tree [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.502 254096 DEBUG nova.scheduler.client.report [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.587 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.613 254096 INFO nova.scheduler.client.report [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance f0cb83d8-c2a3-49d1-8c01-b9be9922abd1#033[00m
Nov 25 11:30:45 np0005535469 nova_compute[254092]: 2025-11-25 16:30:45.683 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.304 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.305 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.325 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.394 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.394 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.401 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.402 254096 INFO nova.compute.claims [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.485 254096 DEBUG nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.486 254096 DEBUG oslo_concurrency.lockutils [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.486 254096 DEBUG oslo_concurrency.lockutils [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 DEBUG oslo_concurrency.lockutils [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 DEBUG nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] No waiting events found dispatching network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 WARNING nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received unexpected event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 DEBUG nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-deleted-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.564 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 246 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 236 op/s
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.951 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.951 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.952 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.952 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.952 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.954 254096 INFO nova.compute.manager [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Terminating instance#033[00m
Nov 25 11:30:46 np0005535469 nova_compute[254092]: 2025-11-25 16:30:46.955 254096 DEBUG nova.compute.manager [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:47 np0005535469 kernel: tap7c4d5f4d-36 (unregistering): left promiscuous mode
Nov 25 11:30:47 np0005535469 NetworkManager[48891]: <info>  [1764088247.0061] device (tap7c4d5f4d-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:47Z|00146|binding|INFO|Releasing lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c from this chassis (sb_readonly=0)
Nov 25 11:30:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:47Z|00147|binding|INFO|Setting lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c down in Southbound
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.010 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:47Z|00148|binding|INFO|Removing iface tap7c4d5f4d-36 ovn-installed in OVS
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.020 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:5a:58 10.100.0.11'], port_security=['fa:16:3e:d8:5a:58 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '090ac2d7-979e-4706-8a01-5e94ab72282d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.023 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:30:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268997110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.025 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ba09ff-4085-4e49-b03d-efa1b14bd31f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.059 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:47 np0005535469 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 25 11:30:47 np0005535469 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Consumed 16.546s CPU time.
Nov 25 11:30:47 np0005535469 systemd-machined[216343]: Machine qemu-22-instance-00000013 terminated.
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.068 254096 DEBUG nova.compute.provider_tree [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.079 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc56c69-5b24-4798-bb2b-a02cb95ce52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.081 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed80fdb6-5ffe-46b0-89e0-2027a5283c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.086 254096 DEBUG nova.scheduler.client.report [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.105 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.106 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.110 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[183b61b8-d2ac-4fa3-b2b8-aef467a36de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.127 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7ad35c-353c-415f-8a86-4882a1e2aa2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288079, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.142 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8613e502-d8d9-4928-9617-98b972ef1000]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288080, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288080, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.144 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.150 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.151 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.152 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.152 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.153 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.153 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.170 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.183 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.195 254096 INFO nova.virt.libvirt.driver [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance destroyed successfully.#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.195 254096 DEBUG nova.objects.instance [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 090ac2d7-979e-4706-8a01-5e94ab72282d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.220 254096 DEBUG nova.virt.libvirt.vif [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:28:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-391611256',display_name='tempest-ServersAdminTestJSON-server-391611256',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-391611256',id=19,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-6weq4dsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:28:54Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=090ac2d7-979e-4706-8a01-5e94ab72282d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.221 254096 DEBUG nova.network.os_vif_util [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.221 254096 DEBUG nova.network.os_vif_util [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.222 254096 DEBUG os_vif [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.223 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c4d5f4d-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.227 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.230 254096 INFO os_vif [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36')#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.300 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.302 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.302 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Creating image(s)#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.327 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.352 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.382 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.386 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.457 254096 DEBUG nova.policy [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.480 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.481 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.481 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.482 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.503 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.506 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 07003872-27e7-4fd9-80cf-a34257d5aa97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.538 254096 DEBUG nova.compute.manager [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-unplugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.539 254096 DEBUG oslo_concurrency.lockutils [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.539 254096 DEBUG oslo_concurrency.lockutils [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.540 254096 DEBUG oslo_concurrency.lockutils [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.540 254096 DEBUG nova.compute.manager [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] No waiting events found dispatching network-vif-unplugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.540 254096 DEBUG nova.compute.manager [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-unplugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.579 254096 INFO nova.virt.libvirt.driver [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deleting instance files /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d_del#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.580 254096 INFO nova.virt.libvirt.driver [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deletion of /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d_del complete#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.646 254096 INFO nova.compute.manager [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.646 254096 DEBUG oslo.service.loopingcall [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.647 254096 DEBUG nova.compute.manager [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.647 254096 DEBUG nova.network.neutron [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.807 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 07003872-27e7-4fd9-80cf-a34257d5aa97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.879 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:30:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.973 254096 DEBUG nova.objects.instance [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.991 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.991 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Ensure instance console log exists: /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.992 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.992 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:47 np0005535469 nova_compute[254092]: 2025-11-25 16:30:47.993 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.470 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully created port: 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.548 254096 DEBUG nova.network.neutron [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.608 254096 DEBUG nova.compute.manager [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-deleted-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.608 254096 INFO nova.compute.manager [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Neutron deleted interface 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.608 254096 DEBUG nova.network.neutron [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.643 254096 INFO nova.compute.manager [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 1.00 seconds to deallocate network for instance.#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.650 254096 DEBUG nova.compute.manager [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Detach interface failed, port_id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c, reason: Instance 090ac2d7-979e-4706-8a01-5e94ab72282d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.755 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.755 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:48 np0005535469 nova_compute[254092]: 2025-11-25 16:30:48.860 254096 DEBUG oslo_concurrency.processutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 246 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Nov 25 11:30:49 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 11:30:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577763299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.341 254096 DEBUG oslo_concurrency.processutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.349 254096 DEBUG nova.compute.provider_tree [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.362 254096 DEBUG nova.scheduler.client.report [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.412 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.546 254096 INFO nova.scheduler.client.report [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance 090ac2d7-979e-4706-8a01-5e94ab72282d#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.569 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088234.5651274, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.569 254096 INFO nova.compute.manager [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.607 254096 DEBUG nova.compute.manager [None req-8758d0a6-8765-42e6-826e-86e6630d2a22 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.610 254096 DEBUG nova.compute.manager [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG oslo_concurrency.lockutils [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG oslo_concurrency.lockutils [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG oslo_concurrency.lockutils [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG nova.compute.manager [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] No waiting events found dispatching network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.612 254096 WARNING nova.compute.manager [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received unexpected event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.636 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.803 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.837 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.837 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:49 np0005535469 nova_compute[254092]: 2025-11-25 16:30:49.837 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.043 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.234 254096 INFO nova.compute.manager [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Terminating instance#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.235 254096 DEBUG nova.compute.manager [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:50 np0005535469 kernel: tapd6146886-91 (unregistering): left promiscuous mode
Nov 25 11:30:50 np0005535469 NetworkManager[48891]: <info>  [1764088250.4223] device (tapd6146886-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:50Z|00149|binding|INFO|Releasing lport d6146886-91a1-4d5f-9234-e1d0154b4230 from this chassis (sb_readonly=0)
Nov 25 11:30:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:50Z|00150|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 down in Southbound
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:50Z|00151|binding|INFO|Removing iface tapd6146886-91 ovn-installed in OVS
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.465 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.466 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis#033[00m
Nov 25 11:30:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.468 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3403825e-13ff-43e0-80c4-b59cf23ed30b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:30:50 np0005535469 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 11:30:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.470 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79d452be-ffc6-465f-8aed-a171a4c03270]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.471 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b namespace which is not needed anymore#033[00m
Nov 25 11:30:50 np0005535469 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000012.scope: Consumed 12.483s CPU time.
Nov 25 11:30:50 np0005535469 systemd-machined[216343]: Machine qemu-30-instance-00000012 terminated.
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.711 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.729 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.729 254096 DEBUG nova.objects.instance [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.783 254096 DEBUG nova.virt.libvirt.vif [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:39Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.785 254096 DEBUG nova.network.os_vif_util [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.786 254096 DEBUG nova.network.os_vif_util [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.786 254096 DEBUG os_vif [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.789 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6146886-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.797 254096 INFO os_vif [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')#033[00m
Nov 25 11:30:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : haproxy version is 2.8.14-c23fe91
Nov 25 11:30:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : path to executable is /usr/sbin/haproxy
Nov 25 11:30:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [WARNING]  (282466) : Exiting Master process...
Nov 25 11:30:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [ALERT]    (282466) : Current worker (282492) exited with code 143 (Terminated)
Nov 25 11:30:50 np0005535469 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [WARNING]  (282466) : All workers exited. Exiting... (0)
Nov 25 11:30:50 np0005535469 systemd[1]: libpod-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb.scope: Deactivated successfully.
Nov 25 11:30:50 np0005535469 podman[288322]: 2025-11-25 16:30:50.82133239 +0000 UTC m=+0.225656307 container died a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.846 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:30:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 227 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 267 op/s
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG nova.compute.manager [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG oslo_concurrency.lockutils [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG oslo_concurrency.lockutils [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG oslo_concurrency.lockutils [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.944 254096 DEBUG nova.compute.manager [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:50 np0005535469 nova_compute[254092]: 2025-11-25 16:30:50.944 254096 DEBUG nova.compute.manager [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.007 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.008 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance network_info: |[{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.010 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start _get_guest_xml network_info=[{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.015 254096 WARNING nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.020 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.021 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.024 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.024 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.024 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.027 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.029 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016913856466106662 of space, bias 1.0, pg target 0.5074156939831999 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:30:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb-userdata-shm.mount: Deactivated successfully.
Nov 25 11:30:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e493ef27e1af43f842f739b06d394ae37970231d4264741926f2efe0d296ab9a-merged.mount: Deactivated successfully.
Nov 25 11:30:51 np0005535469 podman[288322]: 2025-11-25 16:30:51.240525351 +0000 UTC m=+0.644849268 container cleanup a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:30:51 np0005535469 systemd[1]: libpod-conmon-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb.scope: Deactivated successfully.
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107126535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:51 np0005535469 podman[288507]: 2025-11-25 16:30:51.532768468 +0000 UTC m=+0.260908256 container remove a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.543 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3dbb396-ae6e-40ba-9fe2-4a18f7c1e697]: (4, ('Tue Nov 25 04:30:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b (a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb)\na2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb\nTue Nov 25 04:30:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b (a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb)\na2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.546 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e4c21e-4f53-4913-9162-46c689c2abe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:51 np0005535469 kernel: tap3403825e-10: left promiscuous mode
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfe7f5c-794b-4c92-9a79-302b02536c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.586 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.589 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ec5762-9138-4ab3-9097-da9aa277a275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5861b4c6-e5b7-4786-b7ef-04cc60867ccf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.591 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50190f01-cae4-4175-83d3-8c5004ccd385]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458721, 'reachable_time': 35828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288561, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 nova_compute[254092]: 2025-11-25 16:30:51.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.635 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:30:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.636 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcf13c2-3338-4447-86ed-463bbb0032e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:51 np0005535469 systemd[1]: run-netns-ovnmeta\x2d3403825e\x2d13ff\x2d43e0\x2d80c4\x2db59cf23ed30b.mount: Deactivated successfully.
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 54df5eb0-2a86-4b99-83d4-4437222dd3b6 does not exist
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7b826c35-6482-4d41-b625-fae84abfa332 does not exist
Nov 25 11:30:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5d824418-87d2-4b86-9304-6c43d6d56495 does not exist
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:30:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:30:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:30:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195468863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.062 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.065 254096 DEBUG nova.virt.libvirt.vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.066 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.067 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.070 254096 DEBUG nova.objects.instance [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.088 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <name>instance-0000001b</name>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:30:51</nova:creationTime>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <entry name="serial">07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <entry name="uuid">07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:83:61:d4"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <target dev="tap19d5425c-f0"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log" append="off"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:30:52 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:30:52 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:30:52 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:30:52 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.093 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Preparing to wait for external event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.093 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.094 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.094 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.097 254096 DEBUG nova.virt.libvirt.vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.097 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.098 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.099 254096 DEBUG os_vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.102 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.103 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.107 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.107 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d5425c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.108 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19d5425c-f0, col_values=(('external_ids', {'iface-id': '19d5425c-f0c6-4c68-b8a6-cb1c6357d249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:61:d4', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:52 np0005535469 NetworkManager[48891]: <info>  [1764088252.1107] manager: (tap19d5425c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.119 254096 INFO os_vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0')#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.128 254096 DEBUG nova.compute.manager [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.128 254096 DEBUG nova.compute.manager [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.129 254096 DEBUG oslo_concurrency.lockutils [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.129 254096 DEBUG oslo_concurrency.lockutils [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.129 254096 DEBUG nova.network.neutron [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.216 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.216 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.216 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.217 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Using config drive#033[00m
Nov 25 11:30:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:30:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:30:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.260 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.355 254096 INFO nova.virt.libvirt.driver [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting instance files /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.357 254096 INFO nova.virt.libvirt.driver [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deletion of /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del complete#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.414 254096 INFO nova.compute.manager [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 2.18 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.415 254096 DEBUG oslo.service.loopingcall [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.415 254096 DEBUG nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.415 254096 DEBUG nova.network.neutron [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:30:52 np0005535469 nova_compute[254092]: 2025-11-25 16:30:52.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:52 np0005535469 podman[288747]: 2025-11-25 16:30:52.487532284 +0000 UTC m=+0.043171995 container create 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:30:52 np0005535469 systemd[1]: Started libpod-conmon-0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be.scope.
Nov 25 11:30:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:30:52 np0005535469 podman[288747]: 2025-11-25 16:30:52.468565548 +0000 UTC m=+0.024205269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:30:52 np0005535469 podman[288747]: 2025-11-25 16:30:52.678211139 +0000 UTC m=+0.233850860 container init 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:30:52 np0005535469 podman[288747]: 2025-11-25 16:30:52.686629218 +0000 UTC m=+0.242268929 container start 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:30:52 np0005535469 dazzling_dijkstra[288764]: 167 167
Nov 25 11:30:52 np0005535469 systemd[1]: libpod-0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be.scope: Deactivated successfully.
Nov 25 11:30:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:52 np0005535469 podman[288747]: 2025-11-25 16:30:52.918855114 +0000 UTC m=+0.474494815 container attach 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:30:52 np0005535469 podman[288747]: 2025-11-25 16:30:52.920669414 +0000 UTC m=+0.476309125 container died 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:30:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 235 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.0 MiB/s wr, 226 op/s
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.014 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Creating config drive at /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.025 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpytiwmf_q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.111 254096 DEBUG nova.compute.manager [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.112 254096 DEBUG oslo_concurrency.lockutils [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.113 254096 DEBUG oslo_concurrency.lockutils [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.113 254096 DEBUG oslo_concurrency.lockutils [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.113 254096 DEBUG nova.compute.manager [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.114 254096 WARNING nova.compute.manager [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state deleting.
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.163 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpytiwmf_q" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.190 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:30:53 np0005535469 nova_compute[254092]: 2025-11-25 16:30:53.195 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-59ed4f6cbeead337044a1280cd64762640aed6ed14eafc6a1a92a0b35f0fb845-merged.mount: Deactivated successfully.
Nov 25 11:30:53 np0005535469 podman[288747]: 2025-11-25 16:30:53.968596773 +0000 UTC m=+1.524236474 container remove 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:30:54 np0005535469 systemd[1]: libpod-conmon-0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be.scope: Deactivated successfully.
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.116 254096 DEBUG nova.network.neutron [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:30:54 np0005535469 podman[288830]: 2025-11-25 16:30:54.122248191 +0000 UTC m=+0.023507170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.252 254096 INFO nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 1.84 seconds to deallocate network for instance.
Nov 25 11:30:54 np0005535469 podman[288830]: 2025-11-25 16:30:54.2950303 +0000 UTC m=+0.196289249 container create 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.343 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.343 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:54 np0005535469 podman[288833]: 2025-11-25 16:30:54.366468753 +0000 UTC m=+0.261152793 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:30:54 np0005535469 podman[288826]: 2025-11-25 16:30:54.370300547 +0000 UTC m=+0.267028023 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:30:54 np0005535469 systemd[1]: Started libpod-conmon-806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5.scope.
Nov 25 11:30:54 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.426 254096 DEBUG oslo_concurrency.processutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:30:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:30:54 np0005535469 podman[288830]: 2025-11-25 16:30:54.621326794 +0000 UTC m=+0.522585763 container init 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:30:54 np0005535469 podman[288830]: 2025-11-25 16:30:54.63221593 +0000 UTC m=+0.533474879 container start 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:30:54 np0005535469 podman[288830]: 2025-11-25 16:30:54.823254435 +0000 UTC m=+0.724513415 container attach 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:30:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822219638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.880 254096 DEBUG oslo_concurrency.processutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.887 254096 DEBUG nova.compute.provider_tree [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:30:54 np0005535469 podman[288834]: 2025-11-25 16:30:54.892335495 +0000 UTC m=+0.782968955 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.900 254096 DEBUG nova.scheduler.client.report [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:30:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 191 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 622 KiB/s rd, 5.9 MiB/s wr, 227 op/s
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.950 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.953 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.953 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.953 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:30:54 np0005535469 nova_compute[254092]: 2025-11-25 16:30:54.954 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.075 254096 DEBUG nova.network.neutron [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.076 254096 DEBUG nova.network.neutron [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.102 254096 DEBUG oslo_concurrency.lockutils [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.130 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.130 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deleting local config drive /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config because it was imported into RBD.#033[00m
Nov 25 11:30:55 np0005535469 kernel: tap19d5425c-f0: entered promiscuous mode
Nov 25 11:30:55 np0005535469 NetworkManager[48891]: <info>  [1764088255.1740] manager: (tap19d5425c-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Nov 25 11:30:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:55Z|00152|binding|INFO|Claiming lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 for this chassis.
Nov 25 11:30:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:55Z|00153|binding|INFO|19d5425c-f0c6-4c68-b8a6-cb1c6357d249: Claiming fa:16:3e:83:61:d4 10.100.0.8
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 systemd-udevd[288952]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:30:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:55Z|00154|binding|INFO|Setting lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 ovn-installed in OVS
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.211 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 NetworkManager[48891]: <info>  [1764088255.2196] device (tap19d5425c-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:30:55 np0005535469 NetworkManager[48891]: <info>  [1764088255.2207] device (tap19d5425c-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:30:55 np0005535469 systemd-machined[216343]: New machine qemu-31-instance-0000001b.
Nov 25 11:30:55 np0005535469 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.323 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088240.3218522, 7777dd86-925e-4f98-bd68-e38ac540d97b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.324 254096 INFO nova.compute.manager [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:30:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:30:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1659250051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:30:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:30:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1659250051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.335 254096 DEBUG nova.compute.manager [req-062dc3a0-17eb-4e2f-8182-db0c41f2ed29 req-9d5ad000-f81a-488a-a01f-930bc83481bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-deleted-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:55Z|00155|binding|INFO|Setting lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 up in Southbound
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.340 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:61:d4 10.100.0.8'], port_security=['fa:16:3e:83:61:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd11e91d-04bc-4ecb-8ad4-320a6572500c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=19d5425c-f0c6-4c68-b8a6-cb1c6357d249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.341 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.343 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.343 254096 INFO nova.scheduler.client.report [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance 3375e096-321c-459b-8b6a-e085bb62872f#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.355 254096 DEBUG nova.compute.manager [None req-02984fc1-bd08-4954-8e5b-04bcaabb8c9a - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3d288e-d6ae-4990-9635-bbdb3ae355c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.361 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52e7d5b9-01 in ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.363 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52e7d5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.363 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3381f99e-2595-4122-b209-f20a7f52b2c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.364 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[647601fb-abde-4944-9d41-0b5cae055edc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.376 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fa11f420-5375-496a-9396-8b6af422ce20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.390 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[982e8a73-b2d3-4a38-817f-f6e453173591]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.426 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[caf16519-e446-4b4e-864a-2452ef32f7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.436 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8421fd58-0429-4d87-8e7e-0e3a0e2b5e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 NetworkManager[48891]: <info>  [1764088255.4372] manager: (tap52e7d5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.461 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.474 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd1293d-5797-49e5-963d-7012c3ab5bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.478 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f8180e70-c3e5-450a-bdca-72f4b01e0930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 NetworkManager[48891]: <info>  [1764088255.5025] device (tap52e7d5b9-00): carrier: link connected
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.507 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[936761a7-cdf7-4d34-8122-540abe20058c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.525 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0e2cda-8a8d-4c59-8a96-9babf150fded]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289011, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.542 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4491d3a6-3321-484a-a09c-146eb98ef3f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:97ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471309, 'tstamp': 471309}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289012, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.559 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5c37d9-b160-407f-8a06-56c3e4c0dcf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289014, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.588 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b711e6-0e05-4621-a5aa-391c6566ff9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727524151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.638 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.649 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01529207-1bd8-4dc5-a8c2-ef92fc6948bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.650 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.651 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.651 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 NetworkManager[48891]: <info>  [1764088255.6550] manager: (tap52e7d5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 25 11:30:55 np0005535469 kernel: tap52e7d5b9-00: entered promiscuous mode
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.659 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:55Z|00156|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.661 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.670 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd1b310-59e7-4bbc-a625-3c2ed257f070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.671 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:30:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.672 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'env', 'PROCESS_TAG=haproxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:55 np0005535469 agitated_feynman[288896]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:30:55 np0005535469 agitated_feynman[288896]: --> relative data size: 1.0
Nov 25 11:30:55 np0005535469 agitated_feynman[288896]: --> All data devices are unavailable
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.712 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.713 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.716 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.717 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:30:55 np0005535469 systemd[1]: libpod-806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5.scope: Deactivated successfully.
Nov 25 11:30:55 np0005535469 podman[288830]: 2025-11-25 16:30:55.727144808 +0000 UTC m=+1.628403757 container died 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.845 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088255.844613, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.845 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Started (Lifecycle Event)#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.862 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.866 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088255.8447454, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.866 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.884 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.887 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.910 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.910 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4210MB free_disk=59.903011322021484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.911 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.911 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:55 np0005535469 nova_compute[254092]: 2025-11-25 16:30:55.917 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.240 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 33b19faf-57e1-463b-8b4a-b50479a0ef0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.241 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.301 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:30:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae-merged.mount: Deactivated successfully.
Nov 25 11:30:56 np0005535469 podman[288830]: 2025-11-25 16:30:56.651057743 +0000 UTC m=+2.552316692 container remove 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:30:56 np0005535469 systemd[1]: libpod-conmon-806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5.scope: Deactivated successfully.
Nov 25 11:30:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:30:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174536763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.742 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.749 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.763 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.849 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.849 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.850 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.850 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.850 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.851 254096 INFO nova.compute.manager [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Terminating instance#033[00m
Nov 25 11:30:56 np0005535469 nova_compute[254092]: 2025-11-25 16:30:56.852 254096 DEBUG nova.compute.manager [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:30:56 np0005535469 podman[289152]: 2025-11-25 16:30:56.765851515 +0000 UTC m=+0.027451897 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:30:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.025 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.026 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:57 np0005535469 podman[289152]: 2025-11-25 16:30:57.072818234 +0000 UTC m=+0.334418596 container create 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 systemd[1]: Started libpod-conmon-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb.scope.
Nov 25 11:30:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:30:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bbff1afa7ddae500921472f918939c0d088677612449620b59fac58799310/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:30:57 np0005535469 kernel: tapfb46dd7a-52 (unregistering): left promiscuous mode
Nov 25 11:30:57 np0005535469 NetworkManager[48891]: <info>  [1764088257.4821] device (tapfb46dd7a-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:57Z|00157|binding|INFO|Releasing lport fb46dd7a-52d4-44cb-b99e-81d7d653885c from this chassis (sb_readonly=0)
Nov 25 11:30:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:57Z|00158|binding|INFO|Setting lport fb46dd7a-52d4-44cb-b99e-81d7d653885c down in Southbound
Nov 25 11:30:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:30:57Z|00159|binding|INFO|Removing iface tapfb46dd7a-52 ovn-installed in OVS
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.558 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:54:45 10.100.0.10'], port_security=['fa:16:3e:d4:54:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '33b19faf-57e1-463b-8b4a-b50479a0ef0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2f26334db2f4e2cadc5664efd73eb67', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0797621-f8b9-4c57-8ae8-f4d291e244fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6129ca0-5388-4dd2-a829-e686213800fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fb46dd7a-52d4-44cb-b99e-81d7d653885c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:30:57 np0005535469 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 25 11:30:57 np0005535469 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 14.335s CPU time.
Nov 25 11:30:57 np0005535469 systemd-machined[216343]: Machine qemu-28-instance-00000019 terminated.
Nov 25 11:30:57 np0005535469 podman[289152]: 2025-11-25 16:30:57.638044945 +0000 UTC m=+0.899645307 container init 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:30:57 np0005535469 podman[289152]: 2025-11-25 16:30:57.644327586 +0000 UTC m=+0.905927948 container start 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:30:57 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : New worker (289285) forked
Nov 25 11:30:57 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : Loading success.
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.690 254096 INFO nova.virt.libvirt.driver [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance destroyed successfully.#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.691 254096 DEBUG nova.objects.instance [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lazy-loading 'resources' on Instance uuid 33b19faf-57e1-463b-8b4a-b50479a0ef0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.701 254096 DEBUG nova.virt.libvirt.vif [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1375199124',display_name='tempest-ServersTestManualDisk-server-1375199124',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1375199124',id=25,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOQq+JT40N5kSAAkZKtTY8+kwc4Tq2+j0vXcLZMu4KKRGWjKEsrOB7QpF/UTscMrUzfK+p97q+eBa8XrywfkAV6Mo0KdjURR0zReL+ABXznVVDaiCZTtZ5HErawYUq7Fw==',key_name='tempest-keypair-637822798',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f2f26334db2f4e2cadc5664efd73eb67',ramdisk_id='',reservation_id='r-d0txwl4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-420094767',owner_user_name='tempest-ServersTestManualDisk-420094767-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fb8bd106d2264d719b9ebd9f83f19c5a',uuid=33b19faf-57e1-463b-8b4a-b50479a0ef0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.701 254096 DEBUG nova.network.os_vif_util [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converting VIF {"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.702 254096 DEBUG nova.network.os_vif_util [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.702 254096 DEBUG os_vif [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.704 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb46dd7a-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:57 np0005535469 nova_compute[254092]: 2025-11-25 16:30:57.709 254096 INFO os_vif [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52')#033[00m
Nov 25 11:30:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:30:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.947 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fb46dd7a-52d4-44cb-b99e-81d7d653885c in datapath 6ab64ae8-b8fa-4795-a243-9ebe45233e37 unbound from our chassis#033[00m
Nov 25 11:30:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.950 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ab64ae8-b8fa-4795-a243-9ebe45233e37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:30:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af1f6063-bf1a-4fe7-bafe-2b479a9374dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:30:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.951 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 namespace which is not needed anymore#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.026 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.027 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:30:58 np0005535469 podman[289339]: 2025-11-25 16:30:58.011197473 +0000 UTC m=+0.021015803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:30:58 np0005535469 podman[289339]: 2025-11-25 16:30:58.302492525 +0000 UTC m=+0.312310825 container create 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:30:58 np0005535469 systemd[1]: Started libpod-conmon-4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66.scope.
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.519 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088243.5145116, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.519 254096 INFO nova.compute.manager [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:30:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:30:58 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : haproxy version is 2.8.14-c23fe91
Nov 25 11:30:58 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : path to executable is /usr/sbin/haproxy
Nov 25 11:30:58 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [WARNING]  (287144) : Exiting Master process...
Nov 25 11:30:58 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [WARNING]  (287144) : Exiting Master process...
Nov 25 11:30:58 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [ALERT]    (287144) : Current worker (287146) exited with code 143 (Terminated)
Nov 25 11:30:58 np0005535469 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [WARNING]  (287144) : All workers exited. Exiting... (0)
Nov 25 11:30:58 np0005535469 systemd[1]: libpod-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771.scope: Deactivated successfully.
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.544 254096 DEBUG nova.compute.manager [None req-ecf1541e-2746-4b4a-b9f3-40ebf45e122f - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.729 254096 DEBUG nova.compute.manager [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG oslo_concurrency.lockutils [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG oslo_concurrency.lockutils [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG oslo_concurrency.lockutils [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG nova.compute.manager [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Processing event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.731 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.734 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088258.7343185, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.734 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.736 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.739 254096 INFO nova.virt.libvirt.driver [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance spawned successfully.#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.739 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.756 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.759 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.759 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.760 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.760 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.760 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.761 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.765 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG nova.compute.manager [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-unplugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG oslo_concurrency.lockutils [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG oslo_concurrency.lockutils [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG oslo_concurrency.lockutils [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG nova.compute.manager [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] No waiting events found dispatching network-vif-unplugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.788 254096 DEBUG nova.compute.manager [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-unplugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.790 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:30:58 np0005535469 podman[289339]: 2025-11-25 16:30:58.87266621 +0000 UTC m=+0.882484520 container init 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:30:58 np0005535469 podman[289366]: 2025-11-25 16:30:58.876214928 +0000 UTC m=+0.842703319 container died 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:30:58 np0005535469 podman[289339]: 2025-11-25 16:30:58.882376725 +0000 UTC m=+0.892195035 container start 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:30:58 np0005535469 adoring_saha[289381]: 167 167
Nov 25 11:30:58 np0005535469 systemd[1]: libpod-4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66.scope: Deactivated successfully.
Nov 25 11:30:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 3.8 MiB/s wr, 127 op/s
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.939 254096 INFO nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 11.64 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:30:58 np0005535469 nova_compute[254092]: 2025-11-25 16:30:58.939 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:30:59 np0005535469 nova_compute[254092]: 2025-11-25 16:30:59.152 254096 INFO nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 12.78 seconds to build instance.#033[00m
Nov 25 11:30:59 np0005535469 nova_compute[254092]: 2025-11-25 16:30:59.411 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:30:59 np0005535469 podman[289339]: 2025-11-25 16:30:59.701436689 +0000 UTC m=+1.711255009 container attach 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 11:30:59 np0005535469 podman[289339]: 2025-11-25 16:30:59.702453058 +0000 UTC m=+1.712271368 container died 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:31:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9f8a2175b9392d89f04a2ce17873977b82d9c3b5170918db62c07eca67b932fb-merged.mount: Deactivated successfully.
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:31:00 np0005535469 podman[289339]: 2025-11-25 16:31:00.55156913 +0000 UTC m=+2.561387440 container remove 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.559 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:31:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771-userdata-shm.mount: Deactivated successfully.
Nov 25 11:31:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-972254736acb7813ada732b95c02a171ed8ca59846d32b38baa12f81aa1e9009-merged.mount: Deactivated successfully.
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.798 254096 DEBUG nova.compute.manager [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG oslo_concurrency.lockutils [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG oslo_concurrency.lockutils [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG oslo_concurrency.lockutils [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG nova.compute.manager [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 WARNING nova.compute.manager [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.857 254096 DEBUG nova.compute.manager [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.857 254096 DEBUG oslo_concurrency.lockutils [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.857 254096 DEBUG oslo_concurrency.lockutils [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.858 254096 DEBUG oslo_concurrency.lockutils [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.858 254096 DEBUG nova.compute.manager [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] No waiting events found dispatching network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:00 np0005535469 nova_compute[254092]: 2025-11-25 16:31:00.858 254096 WARNING nova.compute.manager [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received unexpected event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:31:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 166 op/s
Nov 25 11:31:01 np0005535469 podman[289421]: 2025-11-25 16:31:01.159844292 +0000 UTC m=+0.482218225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:31:01 np0005535469 podman[289366]: 2025-11-25 16:31:01.216928994 +0000 UTC m=+3.183417385 container cleanup 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:31:01 np0005535469 systemd[1]: libpod-conmon-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771.scope: Deactivated successfully.
Nov 25 11:31:01 np0005535469 podman[289421]: 2025-11-25 16:31:01.262914335 +0000 UTC m=+0.585288248 container create d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 11:31:01 np0005535469 systemd[1]: libpod-conmon-4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66.scope: Deactivated successfully.
Nov 25 11:31:01 np0005535469 systemd[1]: Started libpod-conmon-d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953.scope.
Nov 25 11:31:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:31:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:01 np0005535469 nova_compute[254092]: 2025-11-25 16:31:01.566 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:01 np0005535469 podman[289421]: 2025-11-25 16:31:01.893549896 +0000 UTC m=+1.215923819 container init d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:31:01 np0005535469 podman[289421]: 2025-11-25 16:31:01.904342169 +0000 UTC m=+1.226716082 container start d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:31:02 np0005535469 podman[289421]: 2025-11-25 16:31:02.182975656 +0000 UTC m=+1.505349599 container attach d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.193 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088247.1926734, 090ac2d7-979e-4706-8a01-5e94ab72282d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.193 254096 INFO nova.compute.manager [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.210 254096 DEBUG nova.compute.manager [None req-aea58e5c-377d-4dd4-83f1-01517dd0938a - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:02 np0005535469 podman[289436]: 2025-11-25 16:31:02.374884026 +0000 UTC m=+1.133576210 container remove 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.382 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaebdb6-88a0-4b22-ac36-7fdc665c21c7]: (4, ('Tue Nov 25 04:30:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 (533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771)\n533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771\nTue Nov 25 04:31:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 (533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771)\n533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.385 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99acf10c-ed50-4340-a3e8-2d684b161608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.386 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ab64ae8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:02 np0005535469 kernel: tap6ab64ae8-b0: left promiscuous mode
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.407 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2bbe16-4284-4197-bd3f-65cefe4d5771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.422 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88499f58-edf6-43ec-b135-0c6e43c7fa5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.423 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cbd8d7-083c-4bd7-a34c-e16e3b44c86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.438 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86231929-f7ca-4a6f-8f52-9a76ab5b5b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468313, 'reachable_time': 42982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289459, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.442 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:31:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.442 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3e93f0-179f-4296-bb35-0ef0528aaa95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:02 np0005535469 systemd[1]: run-netns-ovnmeta\x2d6ab64ae8\x2db8fa\x2d4795\x2da243\x2d9ebe45233e37.mount: Deactivated successfully.
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]: {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:    "0": [
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:        {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "devices": [
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "/dev/loop3"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            ],
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_name": "ceph_lv0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_size": "21470642176",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "name": "ceph_lv0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "tags": {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cluster_name": "ceph",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.crush_device_class": "",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.encrypted": "0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osd_id": "0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.type": "block",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.vdo": "0"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            },
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "type": "block",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "vg_name": "ceph_vg0"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:        }
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:    ],
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:    "1": [
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:        {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "devices": [
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "/dev/loop4"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            ],
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_name": "ceph_lv1",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_size": "21470642176",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "name": "ceph_lv1",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "tags": {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cluster_name": "ceph",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.crush_device_class": "",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.encrypted": "0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osd_id": "1",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.type": "block",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.vdo": "0"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            },
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "type": "block",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "vg_name": "ceph_vg1"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:        }
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:    ],
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:    "2": [
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:        {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "devices": [
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "/dev/loop5"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            ],
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_name": "ceph_lv2",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_size": "21470642176",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "name": "ceph_lv2",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "tags": {
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.cluster_name": "ceph",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.crush_device_class": "",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.encrypted": "0",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osd_id": "2",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.type": "block",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:                "ceph.vdo": "0"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            },
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "type": "block",
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:            "vg_name": "ceph_vg2"
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:        }
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]:    ]
Nov 25 11:31:02 np0005535469 stoic_dewdney[289453]: }
Nov 25 11:31:02 np0005535469 nova_compute[254092]: 2025-11-25 16:31:02.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:02 np0005535469 systemd[1]: libpod-d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953.scope: Deactivated successfully.
Nov 25 11:31:02 np0005535469 podman[289421]: 2025-11-25 16:31:02.748058195 +0000 UTC m=+2.070432098 container died d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:31:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Nov 25 11:31:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65-merged.mount: Deactivated successfully.
Nov 25 11:31:03 np0005535469 podman[289421]: 2025-11-25 16:31:03.751111773 +0000 UTC m=+3.073485686 container remove d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:31:03 np0005535469 systemd[1]: libpod-conmon-d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953.scope: Deactivated successfully.
Nov 25 11:31:04 np0005535469 podman[289618]: 2025-11-25 16:31:04.421441143 +0000 UTC m=+0.111215466 container create 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:31:04 np0005535469 podman[289618]: 2025-11-25 16:31:04.335691501 +0000 UTC m=+0.025465824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:31:04 np0005535469 systemd[1]: Started libpod-conmon-0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc.scope.
Nov 25 11:31:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:31:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 99 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 133 op/s
Nov 25 11:31:05 np0005535469 podman[289618]: 2025-11-25 16:31:05.070834683 +0000 UTC m=+0.760609016 container init 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:31:05 np0005535469 podman[289618]: 2025-11-25 16:31:05.080379393 +0000 UTC m=+0.770153686 container start 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:31:05 np0005535469 gifted_haibt[289634]: 167 167
Nov 25 11:31:05 np0005535469 systemd[1]: libpod-0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc.scope: Deactivated successfully.
Nov 25 11:31:05 np0005535469 podman[289618]: 2025-11-25 16:31:05.288404731 +0000 UTC m=+0.978179054 container attach 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:31:05 np0005535469 podman[289618]: 2025-11-25 16:31:05.288963896 +0000 UTC m=+0.978738199 container died 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 11:31:05 np0005535469 nova_compute[254092]: 2025-11-25 16:31:05.727 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088250.725776, 3375e096-321c-459b-8b6a-e085bb62872f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:31:05 np0005535469 nova_compute[254092]: 2025-11-25 16:31:05.729 254096 INFO nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:31:05 np0005535469 nova_compute[254092]: 2025-11-25 16:31:05.760 254096 DEBUG nova.compute.manager [None req-9b6703f6-d26c-4656-8a5a-56f1d920a103 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-33f214967b0b7e82d7cdbd753440e937abb572c4677afed5a81a0e60710121ec-merged.mount: Deactivated successfully.
Nov 25 11:31:06 np0005535469 podman[289618]: 2025-11-25 16:31:06.535678571 +0000 UTC m=+2.225452884 container remove 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:31:06 np0005535469 systemd[1]: libpod-conmon-0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc.scope: Deactivated successfully.
Nov 25 11:31:06 np0005535469 podman[289660]: 2025-11-25 16:31:06.70737152 +0000 UTC m=+0.024324593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:31:06 np0005535469 podman[289660]: 2025-11-25 16:31:06.887003935 +0000 UTC m=+0.203956958 container create 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:31:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 103 op/s
Nov 25 11:31:07 np0005535469 systemd[1]: Started libpod-conmon-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope.
Nov 25 11:31:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:31:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:07 np0005535469 podman[289660]: 2025-11-25 16:31:07.249545605 +0000 UTC m=+0.566498628 container init 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:31:07 np0005535469 podman[289660]: 2025-11-25 16:31:07.259139386 +0000 UTC m=+0.576092409 container start 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:31:07 np0005535469 nova_compute[254092]: 2025-11-25 16:31:07.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:07 np0005535469 nova_compute[254092]: 2025-11-25 16:31:07.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:08 np0005535469 podman[289660]: 2025-11-25 16:31:08.626174603 +0000 UTC m=+1.943127626 container attach 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:31:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 90 op/s
Nov 25 11:31:09 np0005535469 cool_cerf[289676]: {
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "osd_id": 1,
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "type": "bluestore"
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:    },
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "osd_id": 2,
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "type": "bluestore"
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:    },
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "osd_id": 0,
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:        "type": "bluestore"
Nov 25 11:31:09 np0005535469 cool_cerf[289676]:    }
Nov 25 11:31:09 np0005535469 cool_cerf[289676]: }
Nov 25 11:31:09 np0005535469 systemd[1]: libpod-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope: Deactivated successfully.
Nov 25 11:31:09 np0005535469 podman[289660]: 2025-11-25 16:31:09.398905309 +0000 UTC m=+2.715858342 container died 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:31:09 np0005535469 systemd[1]: libpod-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope: Consumed 1.005s CPU time.
Nov 25 11:31:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c-merged.mount: Deactivated successfully.
Nov 25 11:31:10 np0005535469 nova_compute[254092]: 2025-11-25 16:31:10.028 254096 INFO nova.virt.libvirt.driver [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deleting instance files /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f_del#033[00m
Nov 25 11:31:10 np0005535469 nova_compute[254092]: 2025-11-25 16:31:10.029 254096 INFO nova.virt.libvirt.driver [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deletion of /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f_del complete#033[00m
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:31:10 np0005535469 podman[289660]: 2025-11-25 16:31:10.449407947 +0000 UTC m=+3.766360970 container remove 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:31:10 np0005535469 systemd[1]: libpod-conmon-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope: Deactivated successfully.
Nov 25 11:31:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:31:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:31:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:31:10 np0005535469 nova_compute[254092]: 2025-11-25 16:31:10.625 254096 INFO nova.compute.manager [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 13.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:31:10 np0005535469 nova_compute[254092]: 2025-11-25 16:31:10.626 254096 DEBUG oslo.service.loopingcall [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:31:10 np0005535469 nova_compute[254092]: 2025-11-25 16:31:10.627 254096 DEBUG nova.compute.manager [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:31:10 np0005535469 nova_compute[254092]: 2025-11-25 16:31:10.627 254096 DEBUG nova.network.neutron [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:31:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9c5c3c91-d17d-4353-b547-5a28208b06fe does not exist
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 92ef3fdd-e8d5-45d1-af6a-dee581d0088f does not exist
Nov 25 11:31:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 853 B/s wr, 94 op/s
Nov 25 11:31:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:31:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:31:12 np0005535469 nova_compute[254092]: 2025-11-25 16:31:12.589 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:12 np0005535469 nova_compute[254092]: 2025-11-25 16:31:12.690 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088257.6891708, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:31:12 np0005535469 nova_compute[254092]: 2025-11-25 16:31:12.690 254096 INFO nova.compute.manager [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:31:12 np0005535469 nova_compute[254092]: 2025-11-25 16:31:12.710 254096 DEBUG nova.compute.manager [None req-264876d1-fff8-494b-bb36-1119b214a046 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:12 np0005535469 nova_compute[254092]: 2025-11-25 16:31:12.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 91 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 297 KiB/s wr, 62 op/s
Nov 25 11:31:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:13.603 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:13.604 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:13.604 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:14Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:61:d4 10.100.0.8
Nov 25 11:31:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:14Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:61:d4 10.100.0.8
Nov 25 11:31:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 101 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Nov 25 11:31:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 113 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Nov 25 11:31:17 np0005535469 nova_compute[254092]: 2025-11-25 16:31:17.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:17 np0005535469 nova_compute[254092]: 2025-11-25 16:31:17.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 113 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.233 254096 DEBUG nova.compute.manager [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.233 254096 DEBUG nova.compute.manager [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.234 254096 DEBUG oslo_concurrency.lockutils [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.235 254096 DEBUG oslo_concurrency.lockutils [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.235 254096 DEBUG nova.network.neutron [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.301 254096 DEBUG nova.network.neutron [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.484 254096 INFO nova.compute.manager [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 8.86 seconds to deallocate network for instance.#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.630 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.631 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.812 254096 DEBUG oslo_concurrency.processutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:19.912 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:19 np0005535469 nova_compute[254092]: 2025-11-25 16:31:19.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:19.914 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:31:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:31:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43086363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:31:20 np0005535469 nova_compute[254092]: 2025-11-25 16:31:20.243 254096 DEBUG oslo_concurrency.processutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:20 np0005535469 nova_compute[254092]: 2025-11-25 16:31:20.252 254096 DEBUG nova.compute.provider_tree [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:31:20 np0005535469 nova_compute[254092]: 2025-11-25 16:31:20.267 254096 DEBUG nova.scheduler.client.report [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:31:20 np0005535469 nova_compute[254092]: 2025-11-25 16:31:20.412 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:31:20 np0005535469 nova_compute[254092]: 2025-11-25 16:31:20.667 254096 INFO nova.scheduler.client.report [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Deleted allocations for instance 33b19faf-57e1-463b-8b4a-b50479a0ef0f
Nov 25 11:31:20 np0005535469 nova_compute[254092]: 2025-11-25 16:31:20.825 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 23.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:31:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 120 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 11:31:21 np0005535469 nova_compute[254092]: 2025-11-25 16:31:21.447 254096 DEBUG nova.compute.manager [req-79fe46d2-5686-43e5-8d46-50c085c96d76 req-720b6f8d-9aca-42ec-91e6-0432a0b61664 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-deleted-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:31:21 np0005535469 nova_compute[254092]: 2025-11-25 16:31:21.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:31:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:21.916 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.068 254096 DEBUG nova.network.neutron [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.069 254096 DEBUG nova.network.neutron [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.162 254096 DEBUG oslo_concurrency.lockutils [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:31:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:22Z|00160|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.850 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.851 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.851 254096 DEBUG nova.objects.instance [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.883 254096 DEBUG nova.objects.instance [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:31:22 np0005535469 nova_compute[254092]: 2025-11-25 16:31:22.896 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:31:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 11:31:23 np0005535469 nova_compute[254092]: 2025-11-25 16:31:23.446 254096 DEBUG nova.policy [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:31:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:24 np0005535469 podman[289797]: 2025-11-25 16:31:24.636510676 +0000 UTC m=+0.047774921 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 25 11:31:24 np0005535469 podman[289796]: 2025-11-25 16:31:24.642683923 +0000 UTC m=+0.054121642 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 25 11:31:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Nov 25 11:31:25 np0005535469 nova_compute[254092]: 2025-11-25 16:31:25.631 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully created port: 63499eed-d192-4aec-8ab6-1c3384834ed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:31:25 np0005535469 podman[289835]: 2025-11-25 16:31:25.65996334 +0000 UTC m=+0.084728146 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:31:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 819 KiB/s wr, 29 op/s
Nov 25 11:31:27 np0005535469 nova_compute[254092]: 2025-11-25 16:31:27.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:31:27 np0005535469 nova_compute[254092]: 2025-11-25 16:31:27.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:31:27 np0005535469 nova_compute[254092]: 2025-11-25 16:31:27.882 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:31:27 np0005535469 nova_compute[254092]: 2025-11-25 16:31:27.883 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:31:27 np0005535469 nova_compute[254092]: 2025-11-25 16:31:27.932 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.246 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.246 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.253 254096 INFO nova.compute.claims [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.292 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: 63499eed-d192-4aec-8ab6-1c3384834ed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.397 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.397 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.398 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.435 254096 DEBUG nova.compute.manager [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.435 254096 DEBUG nova.compute.manager [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-63499eed-d192-4aec-8ab6-1c3384834ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.436 254096 DEBUG oslo_concurrency.lockutils [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.572 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:31:28 np0005535469 nova_compute[254092]: 2025-11-25 16:31:28.670 254096 WARNING nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 11:31:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:31:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364636560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.051 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.057 254096 DEBUG nova.compute.provider_tree [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.075 254096 DEBUG nova.scheduler.client.report [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.100 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.101 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.156 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.157 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.182 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.203 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:31:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 101 KiB/s wr, 13 op/s
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.297 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.298 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.299 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Creating image(s)
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.318 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.341 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.362 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.366 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.401 254096 DEBUG nova.policy [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87058665de814ae0a51a12ff02b0d9aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed571eebde434695bae813d7bb21f4c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.428 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.428 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.429 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.429 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.497 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:29 np0005535469 nova_compute[254092]: 2025-11-25 16:31:29.500 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:30Z|00161|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:31:30 np0005535469 nova_compute[254092]: 2025-11-25 16:31:30.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:30 np0005535469 nova_compute[254092]: 2025-11-25 16:31:30.415 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Successfully created port: bb9f2265-a40c-44da-bb0e-dc52c5d0873b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.101 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.165 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] resizing rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:31:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 126 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 238 KiB/s wr, 17 op/s
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.734 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Successfully updated port: bb9f2265-a40c-44da-bb0e-dc52c5d0873b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.784 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.784 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquired lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.784 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.791 254096 DEBUG nova.objects.instance [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'migration_context' on Instance uuid e1c7a84b-16df-49a8-83a7-a97bd47e0d43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.806 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.807 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Ensure instance console log exists: /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.807 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.807 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.808 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.809 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.846 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.847 254096 DEBUG oslo_concurrency.lockutils [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.847 254096 DEBUG nova.network.neutron [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port 63499eed-d192-4aec-8ab6-1c3384834ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.850 254096 DEBUG nova.virt.libvirt.vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.850 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.851 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.852 254096 DEBUG os_vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.853 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.854 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.855 254096 DEBUG nova.compute.manager [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-changed-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.856 254096 DEBUG nova.compute.manager [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Refreshing instance network info cache due to event network-changed-bb9f2265-a40c-44da-bb0e-dc52c5d0873b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.856 254096 DEBUG oslo_concurrency.lockutils [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.859 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63499eed-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.860 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63499eed-d1, col_values=(('external_ids', {'iface-id': '63499eed-d192-4aec-8ab6-1c3384834ed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:56:d4', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:31 np0005535469 NetworkManager[48891]: <info>  [1764088291.8631] manager: (tap63499eed-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.871 254096 INFO os_vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1')#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.872 254096 DEBUG nova.virt.libvirt.vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.872 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.873 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.877 254096 DEBUG nova.virt.libvirt.guest [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 11:31:31 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:9c:56:d4"/>
Nov 25 11:31:31 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:31:31 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:31:31 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:31:31 np0005535469 nova_compute[254092]:  <target dev="tap63499eed-d1"/>
Nov 25 11:31:31 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:31:31 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 11:31:31 np0005535469 kernel: tap63499eed-d1: entered promiscuous mode
Nov 25 11:31:31 np0005535469 NetworkManager[48891]: <info>  [1764088291.8936] manager: (tap63499eed-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Nov 25 11:31:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:31Z|00162|binding|INFO|Claiming lport 63499eed-d192-4aec-8ab6-1c3384834ed4 for this chassis.
Nov 25 11:31:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:31Z|00163|binding|INFO|63499eed-d192-4aec-8ab6-1c3384834ed4: Claiming fa:16:3e:9c:56:d4 10.100.0.12
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:31Z|00164|binding|INFO|Setting lport 63499eed-d192-4aec-8ab6-1c3384834ed4 ovn-installed in OVS
Nov 25 11:31:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:31Z|00165|binding|INFO|Setting lport 63499eed-d192-4aec-8ab6-1c3384834ed4 up in Southbound
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.913 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:d4 10.100.0.12'], port_security=['fa:16:3e:9c:56:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63499eed-d192-4aec-8ab6-1c3384834ed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.915 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63499eed-d192-4aec-8ab6-1c3384834ed4 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.916 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:31:31 np0005535469 systemd-udevd[290055]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:31:31 np0005535469 NetworkManager[48891]: <info>  [1764088291.9448] device (tap63499eed-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:31:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.942 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7a93eb-79d6-4cb8-9a58-87ccf859fc26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:31 np0005535469 NetworkManager[48891]: <info>  [1764088291.9463] device (tap63499eed-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.974 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.975 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.975 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:31 np0005535469 nova_compute[254092]: 2025-11-25 16:31:31.975 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:9c:56:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.976 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a060741c-9860-43c5-8eb2-c45bad864f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.979 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[955a786b-8715-466f-980a-287381203a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.005 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d56f0450-4c37-4de9-930e-7f64abceba0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.006 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.012 254096 DEBUG nova.virt.libvirt.guest [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:32</nova:creationTime>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:31:32 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 11:31:32 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:32 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:31:32 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:31:32 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.023 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7732f8-e61c-4164-9c78-bf1537adfc1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290063, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.043 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.043 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf79fb63-a25d-4b48-ab75-e516d7396213]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290064, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290064, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.045 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.049 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.049 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:32 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:32Z|00166|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.968 254096 DEBUG nova.compute.manager [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.969 254096 DEBUG oslo_concurrency.lockutils [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.969 254096 DEBUG oslo_concurrency.lockutils [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.969 254096 DEBUG oslo_concurrency.lockutils [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.970 254096 DEBUG nova.compute.manager [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:32 np0005535469 nova_compute[254092]: 2025-11-25 16:31:32.970 254096 WARNING nova.compute.manager [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 126 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 155 KiB/s wr, 7 op/s
Nov 25 11:31:33 np0005535469 nova_compute[254092]: 2025-11-25 16:31:33.316 254096 DEBUG nova.network.neutron [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port 63499eed-d192-4aec-8ab6-1c3384834ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:31:33 np0005535469 nova_compute[254092]: 2025-11-25 16:31:33.317 254096 DEBUG nova.network.neutron [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:33 np0005535469 nova_compute[254092]: 2025-11-25 16:31:33.335 254096 DEBUG oslo_concurrency.lockutils [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:33 np0005535469 nova_compute[254092]: 2025-11-25 16:31:33.685 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:33 np0005535469 nova_compute[254092]: 2025-11-25 16:31:33.686 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:33 np0005535469 nova_compute[254092]: 2025-11-25 16:31:33.686 254096 DEBUG nova.objects.instance [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:34Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:56:d4 10.100.0.12
Nov 25 11:31:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:34Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:56:d4 10.100.0.12
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.192 254096 DEBUG nova.objects.instance [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.203 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updating instance_info_cache with network_info: [{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.205 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.226 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Releasing lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.227 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance network_info: |[{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.227 254096 DEBUG oslo_concurrency.lockutils [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.227 254096 DEBUG nova.network.neutron [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Refreshing network info cache for port bb9f2265-a40c-44da-bb0e-dc52c5d0873b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.230 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start _get_guest_xml network_info=[{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.234 254096 WARNING nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.239 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.239 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.248 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.249 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.250 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.250 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.251 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.251 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.251 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.252 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.252 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.254 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.254 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.257 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.536 254096 DEBUG nova.policy [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:31:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:31:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767422786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.688 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.710 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:34 np0005535469 nova_compute[254092]: 2025-11-25 16:31:34.714 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:31:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966684258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.155 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.157 254096 DEBUG nova.virt.libvirt.vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1117381742',id=28,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-f10hzrkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner
_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:31:29Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=e1c7a84b-16df-49a8-83a7-a97bd47e0d43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.157 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.158 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.159 254096 DEBUG nova.objects.instance [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1c7a84b-16df-49a8-83a7-a97bd47e0d43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.190 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <uuid>e1c7a84b-16df-49a8-83a7-a97bd47e0d43</uuid>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <name>instance-0000001c</name>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1117381742</nova:name>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:31:34</nova:creationTime>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:user uuid="87058665de814ae0a51a12ff02b0d9aa">tempest-ImagesOneServerNegativeTestJSON-964953831-project-member</nova:user>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:project uuid="ed571eebde434695bae813d7bb21f4c3">tempest-ImagesOneServerNegativeTestJSON-964953831</nova:project>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <nova:port uuid="bb9f2265-a40c-44da-bb0e-dc52c5d0873b">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <entry name="serial">e1c7a84b-16df-49a8-83a7-a97bd47e0d43</entry>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <entry name="uuid">e1c7a84b-16df-49a8-83a7-a97bd47e0d43</entry>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:79:09:1a"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <target dev="tapbb9f2265-a4"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/console.log" append="off"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:31:35 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:31:35 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:31:35 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:31:35 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.192 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Preparing to wait for external event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.193 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.193 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.194 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.195 254096 DEBUG nova.virt.libvirt.vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1117381742',id=28,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-f10hzrkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953
831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:31:29Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=e1c7a84b-16df-49a8-83a7-a97bd47e0d43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.195 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.196 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.197 254096 DEBUG os_vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.198 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.198 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.199 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.203 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb9f2265-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.203 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb9f2265-a4, col_values=(('external_ids', {'iface-id': 'bb9f2265-a40c-44da-bb0e-dc52c5d0873b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:09:1a', 'vm-uuid': 'e1c7a84b-16df-49a8-83a7-a97bd47e0d43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:35 np0005535469 NetworkManager[48891]: <info>  [1764088295.2062] manager: (tapbb9f2265-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.211 254096 INFO os_vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4')#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.235 254096 DEBUG nova.compute.manager [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG oslo_concurrency.lockutils [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG oslo_concurrency.lockutils [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG oslo_concurrency.lockutils [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG nova.compute.manager [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.237 254096 WARNING nova.compute.manager [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.275 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.276 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.276 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No VIF found with MAC fa:16:3e:79:09:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.276 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Using config drive#033[00m
Nov 25 11:31:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.295 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.682 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully created port: aa0c168b-de51-438f-a68b-f9c78a24ca7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.827 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Creating config drive at /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.832 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcktj4yfp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:35 np0005535469 nova_compute[254092]: 2025-11-25 16:31:35.977 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcktj4yfp" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.006 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.011 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.064 254096 DEBUG nova.network.neutron [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updated VIF entry in instance network info cache for port bb9f2265-a40c-44da-bb0e-dc52c5d0873b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.066 254096 DEBUG nova.network.neutron [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updating instance_info_cache with network_info: [{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.100 254096 DEBUG oslo_concurrency.lockutils [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.514 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.515 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deleting local config drive /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config because it was imported into RBD.#033[00m
Nov 25 11:31:36 np0005535469 kernel: tapbb9f2265-a4: entered promiscuous mode
Nov 25 11:31:36 np0005535469 NetworkManager[48891]: <info>  [1764088296.5619] manager: (tapbb9f2265-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:36Z|00167|binding|INFO|Claiming lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b for this chassis.
Nov 25 11:31:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:36Z|00168|binding|INFO|bb9f2265-a40c-44da-bb0e-dc52c5d0873b: Claiming fa:16:3e:79:09:1a 10.100.0.3
Nov 25 11:31:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:36Z|00169|binding|INFO|Setting lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b ovn-installed in OVS
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:36 np0005535469 systemd-udevd[290200]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:31:36 np0005535469 systemd-machined[216343]: New machine qemu-32-instance-0000001c.
Nov 25 11:31:36 np0005535469 NetworkManager[48891]: <info>  [1764088296.6042] device (tapbb9f2265-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:31:36 np0005535469 NetworkManager[48891]: <info>  [1764088296.6053] device (tapbb9f2265-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:31:36 np0005535469 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Nov 25 11:31:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:36Z|00170|binding|INFO|Setting lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b up in Southbound
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.686 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:09:1a 10.100.0.3'], port_security=['fa:16:3e:79:09:1a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1c7a84b-16df-49a8-83a7-a97bd47e0d43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bb9f2265-a40c-44da-bb0e-dc52c5d0873b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.688 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bb9f2265-a40c-44da-bb0e-dc52c5d0873b in datapath 50e18e22-7850-458c-8d66-5932e0495377 bound to our chassis#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.690 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50e18e22-7850-458c-8d66-5932e0495377#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.700 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ebef1ef1-3a0e-4dad-aa2e-7902e1f999e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.701 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50e18e22-71 in ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.702 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50e18e22-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.702 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4106c52-6e2a-47f5-947c-fc7a366abf7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3f528f-5762-44df-b9c2-31a14912720e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.714 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c53ddf-3c6f-493c-a205-3880d5524436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02e96196-c99b-4007-ad9f-88a6e3c1dc8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.758 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7522ce99-390b-41da-81fb-41d8f3dcbc34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 NetworkManager[48891]: <info>  [1764088296.7645] manager: (tap50e18e22-70): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06836e8d-eec9-4a41-8c56-6eced22d5697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 systemd-udevd[290203]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.796 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2eb0be-b9d8-4d36-9a03-780a36d420d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.799 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8bf835-df91-49f1-8926-d210b8f5de14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 NetworkManager[48891]: <info>  [1764088296.8181] device (tap50e18e22-70): carrier: link connected
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.822 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5702b7-9aef-4839-a170-148bc59add2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.837 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1e09fb-4aa3-46e2-83a9-4df7302ad3a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475440, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290235, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.853 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edbfa5a7-f623-49d0-ba4c-ddc9ce4812da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:147d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475440, 'tstamp': 475440}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290236, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.868 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02806e5a-a90b-4f2d-b73a-17ac47a0faf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475440, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290237, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:36Z|00171|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de81b308-d495-4c46-9863-27f56be5cfd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.959 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e23fd83e-35cb-4a93-a8aa-2e6b5a0bdece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e18e22-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:36 np0005535469 NetworkManager[48891]: <info>  [1764088296.9644] manager: (tap50e18e22-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 25 11:31:36 np0005535469 kernel: tap50e18e22-70: entered promiscuous mode
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.968 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50e18e22-70, col_values=(('external_ids', {'iface-id': '5b591f76-4d04-4b30-9182-d359be87068c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:36Z|00172|binding|INFO|Releasing lport 5b591f76-4d04-4b30-9182-d359be87068c from this chassis (sb_readonly=0)
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.985 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.986 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40fd173d-9995-41c0-8111-d3319fbe20de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.987 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-50e18e22-7850-458c-8d66-5932e0495377
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:31:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.988 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'env', 'PROCESS_TAG=haproxy-50e18e22-7850-458c-8d66-5932e0495377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50e18e22-7850-458c-8d66-5932e0495377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:31:36 np0005535469 nova_compute[254092]: 2025-11-25 16:31:36.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:31:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6009 writes, 27K keys, 6009 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6009 writes, 6009 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1517 writes, 6836 keys, 1517 commit groups, 1.0 writes per commit group, ingest: 9.34 MB, 0.02 MB/s#012Interval WAL: 1517 writes, 1517 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     20.2      1.48              0.11        15    0.099       0      0       0.0       0.0#012  L6      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     95.8     78.2      1.29              0.29        14    0.092     65K   7795       0.0       0.0#012 Sum      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     44.7     47.3      2.77              0.40        29    0.096     65K   7795       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     57.4     57.0      0.67              0.12         8    0.084     21K   2586       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     95.8     78.2      1.29              0.29        14    0.092     65K   7795       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     20.8      1.43              0.11        14    0.102       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.1 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 2.8 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 13.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000145 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(865,12.66 MB,4.16329%) FilterBlock(30,188.05 KB,0.0604077%) IndexBlock(30,338.98 KB,0.108895%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 11:31:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.369 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088297.368979, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.370 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Started (Lifecycle Event)#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.387 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.391 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088297.3713624, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.406 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.409 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:31:37 np0005535469 podman[290310]: 2025-11-25 16:31:37.328920905 +0000 UTC m=+0.019634205 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.427 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:31:37 np0005535469 podman[290310]: 2025-11-25 16:31:37.474627077 +0000 UTC m=+0.165340357 container create f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.600 254096 DEBUG nova.compute.manager [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.600 254096 DEBUG oslo_concurrency.lockutils [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.600 254096 DEBUG oslo_concurrency.lockutils [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.601 254096 DEBUG oslo_concurrency.lockutils [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.601 254096 DEBUG nova.compute.manager [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Processing event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.602 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.605 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088297.6050208, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.606 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.612 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.616 254096 INFO nova.virt.libvirt.driver [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance spawned successfully.#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.616 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.632 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.638 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.643 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.644 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.644 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.644 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.645 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.645 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.683 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:31:37 np0005535469 systemd[1]: Started libpod-conmon-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66.scope.
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.697 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: aa0c168b-de51-438f-a68b-f9c78a24ca7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:31:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:31:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a528aaf225b549659531755e9e5982927109e3e368e2369118320e622c687d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.740 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.741 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.742 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:31:37 np0005535469 podman[290310]: 2025-11-25 16:31:37.748626689 +0000 UTC m=+0.439339999 container init f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:31:37 np0005535469 podman[290310]: 2025-11-25 16:31:37.754445197 +0000 UTC m=+0.445158477 container start f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.765 254096 INFO nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 8.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.766 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:37 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : New worker (290332) forked
Nov 25 11:31:37 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : Loading success.
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.882 254096 DEBUG nova.compute.manager [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.882 254096 DEBUG nova.compute.manager [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-aa0c168b-de51-438f-a68b-f9c78a24ca7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.883 254096 DEBUG oslo_concurrency.lockutils [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.897 254096 INFO nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 9.69 seconds to build instance.#033[00m
Nov 25 11:31:37 np0005535469 nova_compute[254092]: 2025-11-25 16:31:37.939 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:38 np0005535469 nova_compute[254092]: 2025-11-25 16:31:38.156 254096 WARNING nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:31:38 np0005535469 nova_compute[254092]: 2025-11-25 16:31:38.157 254096 WARNING nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:31:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:31:40
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.data']
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.297 254096 DEBUG nova.compute.manager [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.298 254096 DEBUG oslo_concurrency.lockutils [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.298 254096 DEBUG oslo_concurrency.lockutils [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.299 254096 DEBUG oslo_concurrency.lockutils [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.299 254096 DEBUG nova.compute.manager [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] No waiting events found dispatching network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:40 np0005535469 nova_compute[254092]: 2025-11-25 16:31:40.299 254096 WARNING nova.compute.manager [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received unexpected event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:31:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.761 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.798 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.800 254096 DEBUG oslo_concurrency.lockutils [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.801 254096 DEBUG nova.network.neutron [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port aa0c168b-de51-438f-a68b-f9c78a24ca7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.804 254096 DEBUG nova.virt.libvirt.vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.805 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.806 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.806 254096 DEBUG os_vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.807 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.808 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.811 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa0c168b-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.811 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa0c168b-de, col_values=(('external_ids', {'iface-id': 'aa0c168b-de51-438f-a68b-f9c78a24ca7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:0b:3f', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 NetworkManager[48891]: <info>  [1764088301.8142] manager: (tapaa0c168b-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.821 254096 INFO os_vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de')#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.821 254096 DEBUG nova.virt.libvirt.vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.822 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.822 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.825 254096 DEBUG nova.virt.libvirt.guest [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:33:0b:3f"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <target dev="tapaa0c168b-de"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:31:41 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 11:31:41 np0005535469 kernel: tapaa0c168b-de: entered promiscuous mode
Nov 25 11:31:41 np0005535469 NetworkManager[48891]: <info>  [1764088301.8372] manager: (tapaa0c168b-de): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Nov 25 11:31:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:41Z|00173|binding|INFO|Claiming lport aa0c168b-de51-438f-a68b-f9c78a24ca7c for this chassis.
Nov 25 11:31:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:41Z|00174|binding|INFO|aa0c168b-de51-438f-a68b-f9c78a24ca7c: Claiming fa:16:3e:33:0b:3f 10.100.0.4
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.851 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:3f 10.100.0.4'], port_security=['fa:16:3e:33:0b:3f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aa0c168b-de51-438f-a68b-f9c78a24ca7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.853 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aa0c168b-de51-438f-a68b-f9c78a24ca7c in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.854 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:31:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:41Z|00175|binding|INFO|Setting lport aa0c168b-de51-438f-a68b-f9c78a24ca7c ovn-installed in OVS
Nov 25 11:31:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:41Z|00176|binding|INFO|Setting lport aa0c168b-de51-438f-a68b-f9c78a24ca7c up in Southbound
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[63ff6bd3-bee6-4a11-9ab7-b74b1f816af0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:41 np0005535469 systemd-udevd[290348]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:31:41 np0005535469 NetworkManager[48891]: <info>  [1764088301.8907] device (tapaa0c168b-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:31:41 np0005535469 NetworkManager[48891]: <info>  [1764088301.8915] device (tapaa0c168b-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.906 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6161e431-f825-4baf-9641-15193abe4faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3e23d114-5bcb-41de-bd8d-e0c05d53f468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.918 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.919 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.919 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.919 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:9c:56:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.920 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:33:0b:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.944 254096 DEBUG nova.virt.libvirt.guest [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:41</nova:creationTime>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:31:41 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 11:31:41 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:31:41 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:41 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:31:41 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:31:41 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.944 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9db6c008-44ac-4fe4-8703-0cb1260e5a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.964 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5331882-bcac-4a77-aae6-4ddbfd4eda4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290355, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.977 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77ebec65-53bc-48fd-abb8-ffdc56bac787]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290356, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290356, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:41 np0005535469 nova_compute[254092]: 2025-11-25 16:31:41.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.551 254096 DEBUG nova.compute.manager [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.551 254096 DEBUG oslo_concurrency.lockutils [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 DEBUG oslo_concurrency.lockutils [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 DEBUG oslo_concurrency.lockutils [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 DEBUG nova.compute.manager [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 WARNING nova.compute.manager [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:42 np0005535469 nova_compute[254092]: 2025-11-25 16:31:42.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:42Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:0b:3f 10.100.0.4
Nov 25 11:31:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:42Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:0b:3f 10.100.0.4
Nov 25 11:31:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 97 op/s
Nov 25 11:31:43 np0005535469 nova_compute[254092]: 2025-11-25 16:31:43.762 254096 DEBUG nova.network.neutron [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port aa0c168b-de51-438f-a68b-f9c78a24ca7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:31:43 np0005535469 nova_compute[254092]: 2025-11-25 16:31:43.763 254096 DEBUG nova.network.neutron [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:43 np0005535469 nova_compute[254092]: 2025-11-25 16:31:43.780 254096 DEBUG oslo_concurrency.lockutils [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.309 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "36f65013-2906-4794-9e23-e92dc7814b6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.310 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.328 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.653 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.653 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.662 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.663 254096 INFO nova.compute.claims [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.773 254096 DEBUG nova.compute.manager [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.773 254096 DEBUG oslo_concurrency.lockutils [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.774 254096 DEBUG oslo_concurrency.lockutils [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.774 254096 DEBUG oslo_concurrency.lockutils [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.774 254096 DEBUG nova.compute.manager [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.775 254096 WARNING nova.compute.manager [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:44 np0005535469 nova_compute[254092]: 2025-11-25 16:31:44.845 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.091 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-fd70e1c0-089e-49c8-b856-6ffd16627e8b" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.092 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-fd70e1c0-089e-49c8-b856-6ffd16627e8b" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.092 254096 DEBUG nova.objects.instance [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Nov 25 11:31:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:31:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688448314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.366 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.372 254096 DEBUG nova.compute.provider_tree [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.387 254096 DEBUG nova.scheduler.client.report [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.556 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.557 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.700 254096 DEBUG nova.compute.manager [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.728 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.759 254096 INFO nova.compute.manager [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] instance snapshotting#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.806 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:31:45 np0005535469 nova_compute[254092]: 2025-11-25 16:31:45.920 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.278 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.279 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.279 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating image(s)#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.299 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.324 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.344 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.348 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.410 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.411 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.412 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.412 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.435 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.443 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:46 np0005535469 nova_compute[254092]: 2025-11-25 16:31:46.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Nov 25 11:31:47 np0005535469 nova_compute[254092]: 2025-11-25 16:31:47.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.064 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.145 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] resizing rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.245 254096 DEBUG nova.objects.instance [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'migration_context' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.261 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.261 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Ensure instance console log exists: /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.262 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.262 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.262 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.264 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.274 254096 WARNING nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.275 254096 INFO nova.virt.libvirt.driver [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Beginning live snapshot process#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.280 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.281 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.287 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.288 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.289 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.289 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.290 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.290 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.291 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.291 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.291 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.292 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.292 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.293 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.293 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.293 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.297 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.443 254096 DEBUG nova.objects.instance [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.456 254096 DEBUG nova.virt.libvirt.imagebackend [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.463 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:31:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:31:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300794903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.779 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.805 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.811 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:31:48 np0005535469 nova_compute[254092]: 2025-11-25 16:31:48.842 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] creating snapshot(a9eea299738b43a3870b0b13d430950a) on rbd image(e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 11:31:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.067 254096 DEBUG nova.policy [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:31:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 25 11:31:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 25 11:31:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.209 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] cloning vms/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk@a9eea299738b43a3870b0b13d430950a to images/bc794908-f5ef-4cca-8c6d-36584cf3f9c9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 11:31:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:31:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302803226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.274 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.276 254096 DEBUG nova.objects.instance [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.295 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <uuid>36f65013-2906-4794-9e23-e92dc7814b6e</uuid>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <name>instance-0000001d</name>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerShowV254Test-server-179263035</nova:name>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:31:48</nova:creationTime>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:user uuid="23c828e6ebbd4d0488f6edbbe9616ca7">tempest-ServerShowV254Test-285881419-project-member</nova:user>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <nova:project uuid="1a87d91cb59d45c29155c8f5cb5ad745">tempest-ServerShowV254Test-285881419</nova:project>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <entry name="serial">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <entry name="uuid">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk.config">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log" append="off"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 88 op/s
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:31:49 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:31:49 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:31:49 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:31:49 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.365 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] flattening images/bc794908-f5ef-4cca-8c6d-36584cf3f9c9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.416 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.418 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.419 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Using config drive
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.449 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.650 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating config drive at /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.655 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2rahvtk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.766 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] removing snapshot(a9eea299738b43a3870b0b13d430950a) on rbd image(e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.789 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2rahvtk" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.810 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.819 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.983 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:31:49 np0005535469 nova_compute[254092]: 2025-11-25 16:31:49.984 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting local config drive /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config because it was imported into RBD.
Nov 25 11:31:50 np0005535469 systemd-machined[216343]: New machine qemu-33-instance-0000001d.
Nov 25 11:31:50 np0005535469 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Nov 25 11:31:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 25 11:31:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 25 11:31:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.188 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] creating snapshot(snap) on rbd image(bc794908-f5ef-4cca-8c6d-36584cf3f9c9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.387 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088310.3870687, 36f65013-2906-4794-9e23-e92dc7814b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.388 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Resumed (Lifecycle Event)
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.390 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.390 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.394 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance spawned successfully.
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.394 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.418 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.422 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.423 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.423 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.423 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.424 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.424 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.428 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.458 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.458 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088310.3881764, 36f65013-2906-4794-9e23-e92dc7814b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.459 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Started (Lifecycle Event)
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.493 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.495 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.505 254096 INFO nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 4.23 seconds to spawn the instance on the hypervisor.
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.506 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.518 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.564 254096 INFO nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 5.94 seconds to build instance.
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.584 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.650 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: fd70e1c0-089e-49c8-b856-6ffd16627e8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.677 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.677 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.678 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.845 254096 DEBUG nova.compute.manager [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.845 254096 DEBUG nova.compute.manager [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-fd70e1c0-089e-49c8-b856-6ffd16627e8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.845 254096 DEBUG oslo_concurrency.lockutils [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.904 254096 WARNING nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.905 254096 WARNING nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:31:50 np0005535469 nova_compute[254092]: 2025-11-25 16:31:50.905 254096 WARNING nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016669032240997591 of space, bias 1.0, pg target 0.5000709672299277 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009338295443445985 of space, bias 1.0, pg target 0.28014886330337957 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:31:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 25 11:31:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 25 11:31:51 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 25 11:31:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 293 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 284 op/s
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image bc794908-f5ef-4cca-8c6d-36584cf3f9c9 could not be found.
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID bc794908-f5ef-4cca-8c6d-36584cf3f9c9
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver 
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver 
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image bc794908-f5ef-4cca-8c6d-36584cf3f9c9 could not be found.
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver #033[00m
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.567 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] removing snapshot(snap) on rbd image(bc794908-f5ef-4cca-8c6d-36584cf3f9c9) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:31:51 np0005535469 nova_compute[254092]: 2025-11-25 16:31:51.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:51Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:09:1a 10.100.0.3
Nov 25 11:31:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:51Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:09:1a 10.100.0.3
Nov 25 11:31:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 25 11:31:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 25 11:31:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 25 11:31:52 np0005535469 nova_compute[254092]: 2025-11-25 16:31:52.417 254096 WARNING nova.compute.manager [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Image not found during snapshot: nova.exception.ImageNotFound: Image bc794908-f5ef-4cca-8c6d-36584cf3f9c9 could not be found.#033[00m
Nov 25 11:31:52 np0005535469 nova_compute[254092]: 2025-11-25 16:31:52.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.281 254096 INFO nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Rebuilding instance#033[00m
Nov 25 11:31:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 293 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 16 MiB/s wr, 411 op/s
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.512 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.526 254096 DEBUG nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.571 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'pci_requests' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.582 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.595 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'resources' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.604 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'migration_context' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.614 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:31:53 np0005535469 nova_compute[254092]: 2025-11-25 16:31:53.617 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:31:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.115 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.116 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.116 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.117 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.117 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.118 254096 INFO nova.compute.manager [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Terminating instance#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.119 254096 DEBUG nova.compute.manager [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:31:54 np0005535469 kernel: tapbb9f2265-a4 (unregistering): left promiscuous mode
Nov 25 11:31:54 np0005535469 NetworkManager[48891]: <info>  [1764088314.1616] device (tapbb9f2265-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:31:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:54Z|00177|binding|INFO|Releasing lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b from this chassis (sb_readonly=0)
Nov 25 11:31:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:54Z|00178|binding|INFO|Setting lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b down in Southbound
Nov 25 11:31:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:54Z|00179|binding|INFO|Removing iface tapbb9f2265-a4 ovn-installed in OVS
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.180 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:09:1a 10.100.0.3'], port_security=['fa:16:3e:79:09:1a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1c7a84b-16df-49a8-83a7-a97bd47e0d43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bb9f2265-a40c-44da-bb0e-dc52c5d0873b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.181 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bb9f2265-a40c-44da-bb0e-dc52c5d0873b in datapath 50e18e22-7850-458c-8d66-5932e0495377 unbound from our chassis#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.182 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50e18e22-7850-458c-8d66-5932e0495377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.184 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[177dc6d8-d297-42a6-9276-3bf0b1d20f1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.186 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace which is not needed anymore#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 25 11:31:54 np0005535469 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 13.160s CPU time.
Nov 25 11:31:54 np0005535469 systemd-machined[216343]: Machine qemu-32-instance-0000001c terminated.
Nov 25 11:31:54 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : haproxy version is 2.8.14-c23fe91
Nov 25 11:31:54 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : path to executable is /usr/sbin/haproxy
Nov 25 11:31:54 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [WARNING]  (290330) : Exiting Master process...
Nov 25 11:31:54 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [ALERT]    (290330) : Current worker (290332) exited with code 143 (Terminated)
Nov 25 11:31:54 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [WARNING]  (290330) : All workers exited. Exiting... (0)
Nov 25 11:31:54 np0005535469 systemd[1]: libpod-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66.scope: Deactivated successfully.
Nov 25 11:31:54 np0005535469 podman[290925]: 2025-11-25 16:31:54.338929343 +0000 UTC m=+0.053452385 container died f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.353 254096 INFO nova.virt.libvirt.driver [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance destroyed successfully.#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.353 254096 DEBUG nova.objects.instance [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'resources' on Instance uuid e1c7a84b-16df-49a8-83a7-a97bd47e0d43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66-userdata-shm.mount: Deactivated successfully.
Nov 25 11:31:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-65a528aaf225b549659531755e9e5982927109e3e368e2369118320e622c687d-merged.mount: Deactivated successfully.
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.374 254096 DEBUG nova.virt.libvirt.vif [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1117381742',id=28,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:31:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-f10hzrkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:31:52Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=e1c7a84b-16df-49a8-83a7-a97bd47e0d43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.375 254096 DEBUG nova.network.os_vif_util [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.376 254096 DEBUG nova.network.os_vif_util [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.376 254096 DEBUG os_vif [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.379 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb9f2265-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.386 254096 INFO os_vif [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4')#033[00m
Nov 25 11:31:54 np0005535469 podman[290925]: 2025-11-25 16:31:54.386097576 +0000 UTC m=+0.100620618 container cleanup f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:31:54 np0005535469 systemd[1]: libpod-conmon-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66.scope: Deactivated successfully.
Nov 25 11:31:54 np0005535469 podman[290974]: 2025-11-25 16:31:54.460295713 +0000 UTC m=+0.045991841 container remove f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.467 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99e228aa-baa3-413d-95e2-920d27213d21]: (4, ('Tue Nov 25 04:31:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66)\nf1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66\nTue Nov 25 04:31:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66)\nf1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.469 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dae8e346-3df2-459c-918f-ce73f645683e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:54 np0005535469 kernel: tap50e18e22-70: left promiscuous mode
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.476 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7ff43a-11b9-4b9b-9385-c651f2546497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.501 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed202205-845e-41d8-ad05-7434d949301e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91af90fd-fb36-4123-bac4-419262f79c2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.522 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2276a1af-94a0-48ab-b7a0-8fc9392ad16e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475434, 'reachable_time': 17749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291001, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 systemd[1]: run-netns-ovnmeta\x2d50e18e22\x2d7850\x2d458c\x2d8d66\x2d5932e0495377.mount: Deactivated successfully.
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.527 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:31:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.527 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6557e1d1-ee8a-4d68-a245-0159f4c0e607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.824 254096 INFO nova.virt.libvirt.driver [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deleting instance files /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_del#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.825 254096 INFO nova.virt.libvirt.driver [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deletion of /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_del complete#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.882 254096 INFO nova.compute.manager [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.883 254096 DEBUG oslo.service.loopingcall [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.884 254096 DEBUG nova.compute.manager [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:31:54 np0005535469 nova_compute[254092]: 2025-11-25 16:31:54.884 254096 DEBUG nova.network.neutron [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:31:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:31:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1729059012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:31:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:31:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1729059012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:31:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 194 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 11 MiB/s wr, 499 op/s
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.582 254096 DEBUG nova.network.neutron [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.603 254096 INFO nova.compute.manager [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 0.72 seconds to deallocate network for instance.#033[00m
Nov 25 11:31:55 np0005535469 podman[291004]: 2025-11-25 16:31:55.648783105 +0000 UTC m=+0.063994751 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.652 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.652 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:55 np0005535469 podman[291003]: 2025-11-25 16:31:55.676087697 +0000 UTC m=+0.093609566 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.726 254096 DEBUG nova.compute.manager [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-unplugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.726 254096 DEBUG oslo_concurrency.lockutils [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 DEBUG oslo_concurrency.lockutils [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 DEBUG oslo_concurrency.lockutils [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 DEBUG nova.compute.manager [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] No waiting events found dispatching network-vif-unplugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 WARNING nova.compute.manager [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received unexpected event network-vif-unplugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.745 254096 DEBUG oslo_concurrency.processutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:55 np0005535469 podman[291039]: 2025-11-25 16:31:55.775420699 +0000 UTC m=+0.073812179 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.916 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.940 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.941 254096 DEBUG oslo_concurrency.lockutils [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.941 254096 DEBUG nova.network.neutron [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port fd70e1c0-089e-49c8-b856-6ffd16627e8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.943 254096 DEBUG nova.virt.libvirt.vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.944 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.944 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.945 254096 DEBUG os_vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.946 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.946 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.948 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd70e1c0-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.949 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd70e1c0-08, col_values=(('external_ids', {'iface-id': 'fd70e1c0-089e-49c8-b856-6ffd16627e8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:d0:f4', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:55 np0005535469 NetworkManager[48891]: <info>  [1764088315.9515] manager: (tapfd70e1c0-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.962 254096 INFO os_vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08')#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.962 254096 DEBUG nova.virt.libvirt.vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.963 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.963 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.966 254096 DEBUG nova.virt.libvirt.guest [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 11:31:55 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:4d:d0:f4"/>
Nov 25 11:31:55 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:31:55 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:31:55 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:31:55 np0005535469 nova_compute[254092]:  <target dev="tapfd70e1c0-08"/>
Nov 25 11:31:55 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:31:55 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 11:31:55 np0005535469 kernel: tapfd70e1c0-08: entered promiscuous mode
Nov 25 11:31:55 np0005535469 systemd-udevd[290903]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:31:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:55Z|00180|binding|INFO|Claiming lport fd70e1c0-089e-49c8-b856-6ffd16627e8b for this chassis.
Nov 25 11:31:55 np0005535469 NetworkManager[48891]: <info>  [1764088315.9787] manager: (tapfd70e1c0-08): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 25 11:31:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:55Z|00181|binding|INFO|fd70e1c0-089e-49c8-b856-6ffd16627e8b: Claiming fa:16:3e:4d:d0:f4 10.100.0.14
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:55.985 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d0:f4 10.100.0.14'], port_security=['fa:16:3e:4d:d0:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fd70e1c0-089e-49c8-b856-6ffd16627e8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:55.986 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fd70e1c0-089e-49c8-b856-6ffd16627e8b in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:31:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:55.987 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:31:55 np0005535469 NetworkManager[48891]: <info>  [1764088315.9895] device (tapfd70e1c0-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:31:55 np0005535469 NetworkManager[48891]: <info>  [1764088315.9905] device (tapfd70e1c0-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:31:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:55Z|00182|binding|INFO|Setting lport fd70e1c0-089e-49c8-b856-6ffd16627e8b ovn-installed in OVS
Nov 25 11:31:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:55Z|00183|binding|INFO|Setting lport fd70e1c0-089e-49c8-b856-6ffd16627e8b up in Southbound
Nov 25 11:31:55 np0005535469 nova_compute[254092]: 2025-11-25 16:31:55.996 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b7b034-929a-4688-b626-db3205329806]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.040 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ec955e79-5f3e-4b40-b233-74531a470908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.043 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[23f1c6ff-2c14-4237-97cb-cb8f08f007b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:9c:56:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.068 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:33:0b:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.068 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:4d:d0:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[93d820c6-c830-4875-8104-b89ad13ce28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8240135e-c599-4b28-a77c-796a56411789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291095, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.097 254096 DEBUG nova.virt.libvirt.guest [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:56</nova:creationTime>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:31:56 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 11:31:56 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:31:56 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:31:56 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:56 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:31:56 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:31:56 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a27f5ab-e2bf-4eda-adff-f8fa6b107628]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291096, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291096, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.101 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.104 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.124 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-fd70e1c0-089e-49c8-b856-6ffd16627e8b" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:31:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432131039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.297 254096 DEBUG oslo_concurrency.processutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.303 254096 DEBUG nova.compute.provider_tree [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.317 254096 DEBUG nova.scheduler.client.report [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.338 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.366 254096 INFO nova.scheduler.client.report [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Deleted allocations for instance e1c7a84b-16df-49a8-83a7-a97bd47e0d43#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.436 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.516 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:31:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088680910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:31:56 np0005535469 nova_compute[254092]: 2025-11-25 16:31:56.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.071 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.115 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.115 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.115 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.289 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4078MB free_disk=59.915714263916016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.291 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.291 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 167 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 295 op/s
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.364 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 36f65013-2906-4794-9e23-e92dc7814b6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.427 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.818 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.818 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.818 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] No waiting events found dispatching network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 WARNING nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received unexpected event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-deleted-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 WARNING nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.821 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.821 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.821 254096 WARNING nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:31:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732409056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.842 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.847 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.859 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.902 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:31:57 np0005535469 nova_compute[254092]: 2025-11-25 16:31:57.902 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.295 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-63499eed-d192-4aec-8ab6-1c3384834ed4" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.295 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-63499eed-d192-4aec-8ab6-1c3384834ed4" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.318 254096 DEBUG nova.objects.instance [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.346 254096 DEBUG nova.virt.libvirt.vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.347 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.348 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.351 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.353 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.356 254096 DEBUG nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap63499eed-d1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.356 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:9c:56:d4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <target dev="tap63499eed-d1"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.370 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.374 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <name>instance-0000001b</name>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:56</nova:creationTime>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:83:61:d4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tap19d5425c-f0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:9c:56:d4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tap63499eed-d1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tapaa0c168b-de'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tapfd70e1c0-08'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.374 254096 INFO nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap63499eed-d1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the persistent domain config.#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap63499eed-d1 with device alias net1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.375 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:9c:56:d4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <target dev="tap63499eed-d1"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 11:31:58 np0005535469 kernel: tap63499eed-d1 (unregistering): left promiscuous mode
Nov 25 11:31:58 np0005535469 NetworkManager[48891]: <info>  [1764088318.4873] device (tap63499eed-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:58Z|00184|binding|INFO|Releasing lport 63499eed-d192-4aec-8ab6-1c3384834ed4 from this chassis (sb_readonly=0)
Nov 25 11:31:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:58Z|00185|binding|INFO|Setting lport 63499eed-d192-4aec-8ab6-1c3384834ed4 down in Southbound
Nov 25 11:31:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:58Z|00186|binding|INFO|Removing iface tap63499eed-d1 ovn-installed in OVS
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.501 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:d4 10.100.0.12'], port_security=['fa:16:3e:9c:56:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63499eed-d192-4aec-8ab6-1c3384834ed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.502 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63499eed-d192-4aec-8ab6-1c3384834ed4 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.504 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.513 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088318.5118237, 07003872-27e7-4fd9-80cf-a34257d5aa97 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.514 254096 DEBUG nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap63499eed-d1 with device alias net1 for instance 07003872-27e7-4fd9-80cf-a34257d5aa97 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.514 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.527 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <name>instance-0000001b</name>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:56</nova:creationTime>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0b05dc6f-4889-4380-b791-797384b5612b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:83:61:d4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tap19d5425c-f0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tapaa0c168b-de'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target dev='tapfd70e1c0-08'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='net3'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.528 254096 INFO nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap63499eed-d1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the live domain config.#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.528 254096 DEBUG nova.virt.libvirt.vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.529 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.529 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.530 254096 DEBUG os_vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.531 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63499eed-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.539 254096 INFO os_vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1')#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.540 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:58</nova:creationTime>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:31:58 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:31:58 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:31:58 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.555 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[059aa060-3f2d-45a3-a929-6f83891f03cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.558 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6c6d7d-0044-4de7-b0c8-244a217ab1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.584 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3ce9b1-1722-4745-90b0-2f922fad0e99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b244d82b-866a-435c-890e-aad15e76f074]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291155, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.613 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45eed61f-2356-483a-a8c5-18e460a1722c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291156, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291156, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.614 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:58 np0005535469 nova_compute[254092]: 2025-11-25 16:31:58.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.617 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.617 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:31:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:31:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:31:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 25 11:31:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 25 11:31:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 25 11:31:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:59Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:d0:f4 10.100.0.14
Nov 25 11:31:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:31:59Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:d0:f4 10.100.0.14
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.088 254096 DEBUG nova.network.neutron [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port fd70e1c0-089e-49c8-b856-6ffd16627e8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.088 254096 DEBUG nova.network.neutron [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.108 254096 DEBUG oslo_concurrency.lockutils [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.286 254096 DEBUG nova.compute.manager [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-unplugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.286 254096 DEBUG oslo_concurrency.lockutils [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 DEBUG oslo_concurrency.lockutils [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 DEBUG oslo_concurrency.lockutils [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 DEBUG nova.compute.manager [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-unplugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:31:59 np0005535469 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 WARNING nova.compute.manager [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-unplugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:31:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 167 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 144 KiB/s wr, 214 op/s
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.017 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.017 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.017 254096 DEBUG nova.network.neutron [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.904 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.905 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:32:00 np0005535469 nova_compute[254092]: 2025-11-25 16:32:00.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.192 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 126 KiB/s wr, 188 op/s
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.388 254096 DEBUG nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.388 254096 DEBUG oslo_concurrency.lockutils [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.389 254096 DEBUG oslo_concurrency.lockutils [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.389 254096 DEBUG oslo_concurrency.lockutils [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.390 254096 DEBUG nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.390 254096 WARNING nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.390 254096 DEBUG nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.391 254096 INFO nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Neutron deleted interface 63499eed-d192-4aec-8ab6-1c3384834ed4; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.391 254096 DEBUG nova.network.neutron [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.414 254096 DEBUG nova.objects.instance [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.439 254096 DEBUG nova.objects.instance [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.459 254096 DEBUG nova.virt.libvirt.vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.460 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.461 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.466 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.472 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <name>instance-0000001b</name>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:58</nova:creationTime>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:83:61:d4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='tap19d5425c-f0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='tapaa0c168b-de'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='net2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='tapfd70e1c0-08'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='net3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.473 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.478 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <name>instance-0000001b</name>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:31:58</nova:creationTime>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:83:61:d4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='tap19d5425c-f0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='tapaa0c168b-de'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='net2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target dev='tapfd70e1c0-08'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='net3'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.480 254096 WARNING nova.virt.libvirt.driver [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Detaching interface fa:16:3e:9c:56:d4 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap63499eed-d1' not found.
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.481 254096 DEBUG nova.virt.libvirt.vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.482 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.483 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.483 254096 DEBUG os_vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.486 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63499eed-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.487 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.489 254096 INFO os_vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1')#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.491 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:32:01</nova:creationTime>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 11:32:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:32:01 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:32:01 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.591 254096 INFO nova.network.neutron [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Port 63499eed-d192-4aec-8ab6-1c3384834ed4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.766 163338 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 1f139a1f-92d8-4b65-b724-a2554b80ff31 with type ""#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.767 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d0:f4 10.100.0.14'], port_security=['fa:16:3e:4d:d0:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fd70e1c0-089e-49c8-b856-6ffd16627e8b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.768 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fd70e1c0-089e-49c8-b856-6ffd16627e8b in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.769 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:32:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:01Z|00187|binding|INFO|Removing iface tapfd70e1c0-08 ovn-installed in OVS
Nov 25 11:32:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:01Z|00188|binding|INFO|Removing lport fd70e1c0-089e-49c8-b856-6ffd16627e8b ovn-installed in OVS
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.785 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15b4e6a5-11a2-4843-a195-0257a741eda5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.816 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc8a3cb-6a99-4020-b372-a91461154b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.820 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e5e958-d980-41c7-8b48-f0379040357e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.854 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[183aa633-0148-4a1f-b4d4-ee0d61a380d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88ba20bc-3045-474d-933a-1c7f865aba80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291162, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.894 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c214fdf-6c83-4f39-bfad-d2a5bb0d6995]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291163, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291163, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.896 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:01 np0005535469 nova_compute[254092]: 2025-11-25 16:32:01.902 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.902 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.903 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.903 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.904 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.070 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.071 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.071 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.072 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.072 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.074 254096 INFO nova.compute.manager [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Terminating instance#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.075 254096 DEBUG nova.compute.manager [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:32:02 np0005535469 kernel: tap19d5425c-f0 (unregistering): left promiscuous mode
Nov 25 11:32:02 np0005535469 NetworkManager[48891]: <info>  [1764088322.1526] device (tap19d5425c-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:32:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:02Z|00189|binding|INFO|Releasing lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 from this chassis (sb_readonly=0)
Nov 25 11:32:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:02Z|00190|binding|INFO|Setting lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 down in Southbound
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:02Z|00191|binding|INFO|Removing iface tap19d5425c-f0 ovn-installed in OVS
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.167 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:61:d4 10.100.0.8'], port_security=['fa:16:3e:83:61:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd11e91d-04bc-4ecb-8ad4-320a6572500c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=19d5425c-f0c6-4c68-b8a6-cb1c6357d249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.169 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.172 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 kernel: tapaa0c168b-de (unregistering): left promiscuous mode
Nov 25 11:32:02 np0005535469 NetworkManager[48891]: <info>  [1764088322.1851] device (tapaa0c168b-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.190 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ef2fa6-3bf3-4659-8818-4acfa9fa9569]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:02Z|00192|binding|INFO|Releasing lport aa0c168b-de51-438f-a68b-f9c78a24ca7c from this chassis (sb_readonly=0)
Nov 25 11:32:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:02Z|00193|binding|INFO|Setting lport aa0c168b-de51-438f-a68b-f9c78a24ca7c down in Southbound
Nov 25 11:32:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:02Z|00194|binding|INFO|Removing iface tapaa0c168b-de ovn-installed in OVS
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.204 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:3f 10.100.0.4'], port_security=['fa:16:3e:33:0b:3f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aa0c168b-de51-438f-a68b-f9c78a24ca7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:02 np0005535469 kernel: tapfd70e1c0-08 (unregistering): left promiscuous mode
Nov 25 11:32:02 np0005535469 NetworkManager[48891]: <info>  [1764088322.2139] device (tapfd70e1c0-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.231 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fea464ee-4c07-480e-a86a-f1098b3c04be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.235 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5cb4ae-eaec-438d-bdb2-bfd09826b2ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.264 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8db6bbee-c529-4edb-af81-6e79a2923a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.285 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b847663c-f613-4a7e-a0cf-52f122f92e0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291188, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 25 11:32:02 np0005535469 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 15.556s CPU time.
Nov 25 11:32:02 np0005535469 systemd-machined[216343]: Machine qemu-31-instance-0000001b terminated.
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba2ff9c-0683-4bc2-8ec4-6abe32236e01]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291189, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291189, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.302 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.314 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.314 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.314 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.315 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.316 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aa0c168b-de51-438f-a68b-f9c78a24ca7c in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.317 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.317 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4303c86a-b904-4e5c-ae75-f9a9d1c59843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.318 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace which is not needed anymore#033[00m
Nov 25 11:32:02 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : haproxy version is 2.8.14-c23fe91
Nov 25 11:32:02 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : path to executable is /usr/sbin/haproxy
Nov 25 11:32:02 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [WARNING]  (289282) : Exiting Master process...
Nov 25 11:32:02 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [WARNING]  (289282) : Exiting Master process...
Nov 25 11:32:02 np0005535469 systemd[1]: libpod-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb.scope: Deactivated successfully.
Nov 25 11:32:02 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [ALERT]    (289282) : Current worker (289285) exited with code 143 (Terminated)
Nov 25 11:32:02 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [WARNING]  (289282) : All workers exited. Exiting... (0)
Nov 25 11:32:02 np0005535469 podman[291210]: 2025-11-25 16:32:02.461035241 +0000 UTC m=+0.048560566 container died 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:32:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb-userdata-shm.mount: Deactivated successfully.
Nov 25 11:32:02 np0005535469 NetworkManager[48891]: <info>  [1764088322.4936] manager: (tap19d5425c-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Nov 25 11:32:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a43bbff1afa7ddae500921472f918939c0d088677612449620b59fac58799310-merged.mount: Deactivated successfully.
Nov 25 11:32:02 np0005535469 podman[291210]: 2025-11-25 16:32:02.503506041 +0000 UTC m=+0.091031366 container cleanup 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:32:02 np0005535469 NetworkManager[48891]: <info>  [1764088322.5077] manager: (tapaa0c168b-de): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Nov 25 11:32:02 np0005535469 systemd[1]: libpod-conmon-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb.scope: Deactivated successfully.
Nov 25 11:32:02 np0005535469 NetworkManager[48891]: <info>  [1764088322.5176] manager: (tapfd70e1c0-08): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.542 254096 INFO nova.virt.libvirt.driver [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance destroyed successfully.#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.543 254096 DEBUG nova.objects.instance [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'resources' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.561 254096 DEBUG nova.virt.libvirt.vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.562 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.563 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.564 254096 DEBUG os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.567 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d5425c-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.572 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 podman[291254]: 2025-11-25 16:32:02.575519412 +0000 UTC m=+0.048535136 container remove 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.578 254096 INFO os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0')#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.579 254096 DEBUG nova.virt.libvirt.vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.580 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.581 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.581 254096 DEBUG os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.582 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6708c2-d3a7-4bec-88bf-23b5ee6945b6]: (4, ('Tue Nov 25 04:32:02 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb)\n2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb\nTue Nov 25 04:32:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb)\n2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.583 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa0c168b-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.584 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea209d-ee7e-4d73-b70f-051c46c89f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.585 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.586 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 kernel: tap52e7d5b9-00: left promiscuous mode
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.610 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbb560-a96e-4a3d-beed-a33a1c930b1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.611 254096 INFO os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de')#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.612 254096 DEBUG nova.virt.libvirt.vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.613 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.613 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.614 254096 DEBUG os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.616 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd70e1c0-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:02 np0005535469 nova_compute[254092]: 2025-11-25 16:32:02.626 254096 INFO os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08')#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.627 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c839b81-a4a5-4fb5-b74c-56cf1ea1bf7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.628 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f44804dc-b867-424c-bdf2-af5069941083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.650 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b168fc10-784d-45fd-bc14-41fe23ad3c83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471301, 'reachable_time': 29179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291296, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:02 np0005535469 systemd[1]: run-netns-ovnmeta\x2d52e7d5b9\x2d0570\x2d4e5c\x2db3da\x2d9dfcb924b83d.mount: Deactivated successfully.
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.653 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:32:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.654 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c454ce53-3c34-46d9-8b7a-0543ca3e5c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 115 KiB/s wr, 171 op/s
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.516 254096 INFO nova.virt.libvirt.driver [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deleting instance files /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97_del#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.517 254096 INFO nova.virt.libvirt.driver [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deletion of /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97_del complete#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.576 254096 INFO nova.compute.manager [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 1.50 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.576 254096 DEBUG oslo.service.loopingcall [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.576 254096 DEBUG nova.compute.manager [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.577 254096 DEBUG nova.network.neutron [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.600 254096 DEBUG nova.compute.manager [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.600 254096 INFO nova.compute.manager [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Neutron deleted interface fd70e1c0-089e-49c8-b856-6ffd16627e8b; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.601 254096 DEBUG nova.network.neutron [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.633 254096 DEBUG nova.compute.manager [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Detach interface failed, port_id=fd70e1c0-089e-49c8-b856-6ffd16627e8b, reason: Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 11:32:03 np0005535469 nova_compute[254092]: 2025-11-25 16:32:03.667 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:32:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.263 254096 DEBUG nova.network.neutron [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.297 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.299 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.300 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.300 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.320 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-63499eed-d192-4aec-8ab6-1c3384834ed4" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.497 254096 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port fd70e1c0-089e-49c8-b856-6ffd16627e8b could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 25 11:32:04 np0005535469 nova_compute[254092]: 2025-11-25 16:32:04.498 254096 DEBUG nova.network.neutron [-] Unable to show port fd70e1c0-089e-49c8-b856-6ffd16627e8b as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.029 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.030 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.049 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.129 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.129 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.136 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.136 254096 INFO nova.compute.claims [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.277 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 143 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1001 KiB/s rd, 1.3 MiB/s wr, 132 op/s
Nov 25 11:32:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255337934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.750 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.756 254096 DEBUG nova.compute.provider_tree [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.771 254096 DEBUG nova.scheduler.client.report [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.788 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.789 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.844 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.845 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.875 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.898 254096 DEBUG nova.network.neutron [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.900 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.926 254096 INFO nova.compute.manager [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 2.35 seconds to deallocate network for instance.#033[00m
Nov 25 11:32:05 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.961 254096 DEBUG nova.compute.manager [req-6f43143f-2d74-4123-a054-7f105b4e3090 req-f0b59cc4-e73a-452b-b650-b6d36d3723ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:05.999 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.001 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.001 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Creating image(s)#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.019 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.043 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.068 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.072 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.102 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.103 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.139 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.140 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.141 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.141 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.159 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.162 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3573b86d-afab-4a6f-970e-7db532c23eb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.199 254096 DEBUG nova.policy [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87058665de814ae0a51a12ff02b0d9aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed571eebde434695bae813d7bb21f4c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.253 254096 DEBUG oslo_concurrency.processutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.489 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3573b86d-afab-4a6f-970e-7db532c23eb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.568 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] resizing rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:06 np0005535469 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 25 11:32:06 np0005535469 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 13.709s CPU time.
Nov 25 11:32:06 np0005535469 systemd-machined[216343]: Machine qemu-33-instance-0000001d terminated.
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.645 254096 DEBUG nova.objects.instance [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 3573b86d-afab-4a6f-970e-7db532c23eb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.658 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Ensure instance console log exists: /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.709 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366631061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.735 254096 DEBUG oslo_concurrency.processutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.739 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.740 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.741 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.749 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.750 254096 DEBUG nova.compute.provider_tree [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance destroyed successfully.#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.761 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance destroyed successfully.#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.773 254096 DEBUG nova.scheduler.client.report [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.797 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.836 254096 INFO nova.scheduler.client.report [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Deleted allocations for instance 07003872-27e7-4fd9-80cf-a34257d5aa97#033[00m
Nov 25 11:32:06 np0005535469 nova_compute[254092]: 2025-11-25 16:32:06.914 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.085 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Successfully created port: 479811bd-7043-4423-9815-a17763247b3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.233 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting instance files /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.234 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deletion of /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del complete#033[00m
Nov 25 11:32:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 121 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.389 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.389 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating image(s)#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.409 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.428 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.447 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.450 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.515 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.516 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.516 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.517 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.536 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.540 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.605 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.606 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.636 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.723 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.723 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.730 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.730 254096 INFO nova.compute.claims [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.827 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.881 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] resizing rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.958 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.959 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Ensure instance console log exists: /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.959 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.959 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.960 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.961 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.963 254096 WARNING nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.968 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.988 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.989 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.994 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.995 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.995 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.995 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.996 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.996 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:07 np0005535469 nova_compute[254092]: 2025-11-25 16:32:07.999 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.019 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.048 254096 DEBUG nova.compute.manager [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.049 254096 INFO nova.compute.manager [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Neutron deleted interface aa0c168b-de51-438f-a68b-f9c78a24ca7c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.049 254096 DEBUG nova.network.neutron [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.051 254096 DEBUG nova.compute.manager [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Detach interface failed, port_id=aa0c168b-de51-438f-a68b-f9c78a24ca7c, reason: Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285855637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.391 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.398 254096 DEBUG nova.compute.provider_tree [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.406 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Successfully updated port: 479811bd-7043-4423-9815-a17763247b3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.418 254096 DEBUG nova.scheduler.client.report [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.422 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.422 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquired lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.422 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113155774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.441 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.442 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.445 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.467 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.472 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.502 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.503 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.519 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.579 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.673 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.676 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.677 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Creating image(s)#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.697 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.717 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.736 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.740 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.764 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.801 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.801 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.802 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.802 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.819 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.822 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9bd4d655-c683-4433-a739-168946211a75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.856 254096 DEBUG nova.policy [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a46b9493b027436fbd21d09ff5ac15e4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ae7f32b97104afd930af5d5f5754532', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172723358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.886 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.888 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <uuid>36f65013-2906-4794-9e23-e92dc7814b6e</uuid>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <name>instance-0000001d</name>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerShowV254Test-server-179263035</nova:name>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:07</nova:creationTime>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:user uuid="23c828e6ebbd4d0488f6edbbe9616ca7">tempest-ServerShowV254Test-285881419-project-member</nova:user>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <nova:project uuid="1a87d91cb59d45c29155c8f5cb5ad745">tempest-ServerShowV254Test-285881419</nova:project>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <entry name="serial">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <entry name="uuid">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk.config">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log" append="off"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:08 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:08 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:08 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:08 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.953 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.954 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.956 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Using config drive#033[00m
Nov 25 11:32:08 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.979 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:08.999 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.120 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9bd4d655-c683-4433-a739-168946211a75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.166 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] resizing rbd image 9bd4d655-c683-4433-a739-168946211a75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.238 254096 DEBUG nova.objects.instance [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bd4d655-c683-4433-a739-168946211a75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.251 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Ensure instance console log exists: /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 121 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.349 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088314.3483362, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.349 254096 INFO nova.compute.manager [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.367 254096 DEBUG nova.compute.manager [None req-78d0d6d3-d257-4005-b426-dc63cee069fe - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.556 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating config drive at /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.562 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c9uok1g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.694 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c9uok1g" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.720 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.724 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.807 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updating instance_info_cache with network_info: [{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.825 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Releasing lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.826 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance network_info: |[{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.828 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start _get_guest_xml network_info=[{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.833 254096 WARNING nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.837 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.838 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.841 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.841 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.842 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.842 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.845 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.847 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.876 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:09 np0005535469 nova_compute[254092]: 2025-11-25 16:32:09.877 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting local config drive /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:09 np0005535469 systemd-machined[216343]: New machine qemu-34-instance-0000001d.
Nov 25 11:32:09 np0005535469 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.030 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Successfully created port: e1641afa-e435-45ca-a0fe-d2bb9b12981a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.156 254096 DEBUG nova.compute.manager [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-changed-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.156 254096 DEBUG nova.compute.manager [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Refreshing instance network info cache due to event network-changed-479811bd-7043-4423-9815-a17763247b3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.157 254096 DEBUG oslo_concurrency.lockutils [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.157 254096 DEBUG oslo_concurrency.lockutils [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.157 254096 DEBUG nova.network.neutron [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Refreshing network info cache for port 479811bd-7043-4423-9815-a17763247b3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596985713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.274 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.293 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.297 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549115621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.710 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.712 254096 DEBUG nova.virt.libvirt.vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1171232620',id=30,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-wlltia02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner
_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:05Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=3573b86d-afab-4a6f-970e-7db532c23eb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.713 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.714 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.715 254096 DEBUG nova.objects.instance [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3573b86d-afab-4a6f-970e-7db532c23eb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.741 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <uuid>3573b86d-afab-4a6f-970e-7db532c23eb3</uuid>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <name>instance-0000001e</name>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1171232620</nova:name>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:09</nova:creationTime>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:user uuid="87058665de814ae0a51a12ff02b0d9aa">tempest-ImagesOneServerNegativeTestJSON-964953831-project-member</nova:user>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:project uuid="ed571eebde434695bae813d7bb21f4c3">tempest-ImagesOneServerNegativeTestJSON-964953831</nova:project>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <nova:port uuid="479811bd-7043-4423-9815-a17763247b3b">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <entry name="serial">3573b86d-afab-4a6f-970e-7db532c23eb3</entry>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <entry name="uuid">3573b86d-afab-4a6f-970e-7db532c23eb3</entry>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3573b86d-afab-4a6f-970e-7db532c23eb3_disk">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:34:0a:5f"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <target dev="tap479811bd-70"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/console.log" append="off"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:10 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:10 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:10 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:10 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.746 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Preparing to wait for external event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.747 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.747 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.747 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.748 254096 DEBUG nova.virt.libvirt.vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1171232620',id=30,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-wlltia02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953
831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:05Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=3573b86d-afab-4a6f-970e-7db532c23eb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.748 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.749 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.750 254096 DEBUG os_vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.751 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.751 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.754 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap479811bd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.755 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap479811bd-70, col_values=(('external_ids', {'iface-id': '479811bd-7043-4423-9815-a17763247b3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:0a:5f', 'vm-uuid': '3573b86d-afab-4a6f-970e-7db532c23eb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:10 np0005535469 NetworkManager[48891]: <info>  [1764088330.7575] manager: (tap479811bd-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.763 254096 INFO os_vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70')#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.811 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.812 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.812 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No VIF found with MAC fa:16:3e:34:0a:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.813 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Using config drive#033[00m
Nov 25 11:32:10 np0005535469 nova_compute[254092]: 2025-11-25 16:32:10.831 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.015 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 36f65013-2906-4794-9e23-e92dc7814b6e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.017 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088331.015148, 36f65013-2906-4794-9e23-e92dc7814b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.017 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.019 254096 DEBUG nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.020 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.024 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance spawned successfully.#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.024 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.041 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.046 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.049 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.050 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.050 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.050 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.051 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.052 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.075 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.076 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088331.015657, 36f65013-2906-4794-9e23-e92dc7814b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.076 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.111 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.115 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.128 254096 DEBUG nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.148 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Successfully updated port: e1641afa-e435-45ca-a0fe-d2bb9b12981a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.167 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.171 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.171 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.171 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.208 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.209 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.209 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.266 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 7.5 MiB/s wr, 206 op/s
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.331 254096 DEBUG nova.network.neutron [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updated VIF entry in instance network info cache for port 479811bd-7043-4423-9815-a17763247b3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.332 254096 DEBUG nova.network.neutron [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updating instance_info_cache with network_info: [{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.344 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Creating config drive at /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.351 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhuvr8wt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.379 254096 DEBUG oslo_concurrency.lockutils [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.411 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.485 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhuvr8wt" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.513 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.523 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.677 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.679 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deleting local config drive /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:32:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3bb48ba9-bdf3-46d3-a080-1641f6eaa0df does not exist
Nov 25 11:32:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e981d795-a7e6-45ef-99d7-280ee226ad9f does not exist
Nov 25 11:32:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6daba3ae-5538-4ec1-a575-14b5b3c1b0c3 does not exist
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:32:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.726 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "36f65013-2906-4794-9e23-e92dc7814b6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.726 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.727 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "36f65013-2906-4794-9e23-e92dc7814b6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.727 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.727 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.729 254096 INFO nova.compute.manager [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Terminating instance#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.730 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "refresh_cache-36f65013-2906-4794-9e23-e92dc7814b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.730 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquired lock "refresh_cache-36f65013-2906-4794-9e23-e92dc7814b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.731 254096 DEBUG nova.network.neutron [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:11 np0005535469 kernel: tap479811bd-70: entered promiscuous mode
Nov 25 11:32:11 np0005535469 NetworkManager[48891]: <info>  [1764088331.7400] manager: (tap479811bd-70): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 25 11:32:11 np0005535469 systemd-udevd[292154]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:11Z|00195|binding|INFO|Claiming lport 479811bd-7043-4423-9815-a17763247b3b for this chassis.
Nov 25 11:32:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:11Z|00196|binding|INFO|479811bd-7043-4423-9815-a17763247b3b: Claiming fa:16:3e:34:0a:5f 10.100.0.14
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:11 np0005535469 NetworkManager[48891]: <info>  [1764088331.7525] device (tap479811bd-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:32:11 np0005535469 NetworkManager[48891]: <info>  [1764088331.7549] device (tap479811bd-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.753 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0a:5f 10.100.0.14'], port_security=['fa:16:3e:34:0a:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3573b86d-afab-4a6f-970e-7db532c23eb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479811bd-7043-4423-9815-a17763247b3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.755 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479811bd-7043-4423-9815-a17763247b3b in datapath 50e18e22-7850-458c-8d66-5932e0495377 bound to our chassis#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.756 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50e18e22-7850-458c-8d66-5932e0495377#033[00m
Nov 25 11:32:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:11Z|00197|binding|INFO|Setting lport 479811bd-7043-4423-9815-a17763247b3b ovn-installed in OVS
Nov 25 11:32:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:11Z|00198|binding|INFO|Setting lport 479811bd-7043-4423-9815-a17763247b3b up in Southbound
Nov 25 11:32:11 np0005535469 nova_compute[254092]: 2025-11-25 16:32:11.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.768 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e45ec790-82d5-4846-b640-71aa8307b9cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.771 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50e18e22-71 in ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.773 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50e18e22-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d615b7f-23ae-43c1-8efb-fe0e0c82958a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.774 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55910452-437d-40a1-a101-d121340bca8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 systemd-machined[216343]: New machine qemu-35-instance-0000001e.
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.789 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[af8f17e7-5d62-4585-aafe-adc2d356f49c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce155722-4ff6-4f52-afd4-6b2a68e28bf1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.848 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1f6464-4f93-48d0-b180-8714dbc387bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 NetworkManager[48891]: <info>  [1764088331.8559] manager: (tap50e18e22-70): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Nov 25 11:32:11 np0005535469 systemd-udevd[292358]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.856 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d1e12d-90a2-4c73-802a-b82a4ff2831b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.897 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4749f00a-6e46-47d5-af3b-89e9fbf198de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.900 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[42e4558e-4751-4f17-a59f-81cd05ab5b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 NetworkManager[48891]: <info>  [1764088331.9260] device (tap50e18e22-70): carrier: link connected
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.931 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24ffca32-9222-4042-9b7c-1f2794783c6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e021faa0-4fbb-46f9-b1a1-dca023d4aca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478951, 'reachable_time': 30093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292446, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d85fea67-f9e8-4a53-9a8f-47990142cc39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:147d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478951, 'tstamp': 478951}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292457, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b85fa956-4aa8-415f-99a9-577440a7a7ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478951, 'reachable_time': 30093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292471, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.020 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fd3d7c-88f7-4e70-962c-8b361abd343f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6864ff6-3a08-4e33-85f5-0548bd3996ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e18e22-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.094 254096 DEBUG nova.network.neutron [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:12 np0005535469 kernel: tap50e18e22-70: entered promiscuous mode
Nov 25 11:32:12 np0005535469 NetworkManager[48891]: <info>  [1764088332.1227] manager: (tap50e18e22-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.125 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50e18e22-70, col_values=(('external_ids', {'iface-id': '5b591f76-4d04-4b30-9182-d359be87068c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:12Z|00199|binding|INFO|Releasing lport 5b591f76-4d04-4b30-9182-d359be87068c from this chassis (sb_readonly=0)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.131 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2f9c9f-069e-426a-ab25-e0db4fdf6d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.132 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-50e18e22-7850-458c-8d66-5932e0495377
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:32:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.134 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'env', 'PROCESS_TAG=haproxy-50e18e22-7850-458c-8d66-5932e0495377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50e18e22-7850-458c-8d66-5932e0495377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.272 254096 DEBUG nova.compute.manager [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.273 254096 DEBUG nova.compute.manager [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.273 254096 DEBUG oslo_concurrency.lockutils [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.337836398 +0000 UTC m=+0.055253438 container create e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.343 254096 DEBUG nova.compute.manager [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.344 254096 DEBUG oslo_concurrency.lockutils [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.344 254096 DEBUG oslo_concurrency.lockutils [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.345 254096 DEBUG oslo_concurrency.lockutils [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.345 254096 DEBUG nova.compute.manager [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Processing event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.357 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088332.3567722, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.360 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.366 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.374 254096 INFO nova.virt.libvirt.driver [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance spawned successfully.#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.375 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:12 np0005535469 systemd[1]: Started libpod-conmon-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope.
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.384 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.306791517 +0000 UTC m=+0.024208577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.406 254096 DEBUG nova.network.neutron [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.408 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.409 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088332.3578255, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.409 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.412 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.412 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.413 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.413 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.414 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.414 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.421 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Releasing lock "refresh_cache-36f65013-2906-4794-9e23-e92dc7814b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.422 254096 DEBUG nova.compute.manager [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.432895503 +0000 UTC m=+0.150312563 container init e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.440114918 +0000 UTC m=+0.157531958 container start e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.443372397 +0000 UTC m=+0.160789457 container attach e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.444 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:12 np0005535469 sharp_satoshi[292576]: 167 167
Nov 25 11:32:12 np0005535469 systemd[1]: libpod-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope: Deactivated successfully.
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.447 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088332.3633657, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:12 np0005535469 conmon[292576]: conmon e08236e352e3c160d8c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope/container/memory.events
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.448 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.448953478 +0000 UTC m=+0.166370518 container died e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.472 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3bf6c28813c619b2b34335b3602d299632a8b7e2b6d6e94170426f143351e0b4-merged.mount: Deactivated successfully.
Nov 25 11:32:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:32:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:32:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:32:12 np0005535469 podman[292559]: 2025-11-25 16:32:12.498380446 +0000 UTC m=+0.215797486 container remove e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:32:12 np0005535469 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 25 11:32:12 np0005535469 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 2.503s CPU time.
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.502 254096 INFO nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 6.50 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.504 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.505 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:12 np0005535469 systemd-machined[216343]: Machine qemu-34-instance-0000001d terminated.
Nov 25 11:32:12 np0005535469 systemd[1]: libpod-conmon-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope: Deactivated successfully.
Nov 25 11:32:12 np0005535469 podman[292615]: 2025-11-25 16:32:12.577741956 +0000 UTC m=+0.071196940 container create 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.592 254096 INFO nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 7.49 seconds to build instance.#033[00m
Nov 25 11:32:12 np0005535469 podman[292615]: 2025-11-25 16:32:12.529808508 +0000 UTC m=+0.023263472 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.628 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.630 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:12 np0005535469 systemd[1]: Started libpod-conmon-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe.scope.
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.651 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance destroyed successfully.#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.652 254096 DEBUG nova.objects.instance [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'resources' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bd5c0024043409a719a708d58c044f8ad9aa267adbb154aaa37825ce7da9b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:12 np0005535469 podman[292615]: 2025-11-25 16:32:12.695208298 +0000 UTC m=+0.188663282 container init 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.698 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:12 np0005535469 podman[292615]: 2025-11-25 16:32:12.705327922 +0000 UTC m=+0.198782886 container start 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:32:12 np0005535469 podman[292641]: 2025-11-25 16:32:12.72035653 +0000 UTC m=+0.060423139 container create 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.724 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.725 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance network_info: |[{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.726 254096 DEBUG oslo_concurrency.lockutils [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.726 254096 DEBUG nova.network.neutron [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.729 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start _get_guest_xml network_info=[{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:12 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : New worker (292675) forked
Nov 25 11:32:12 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : Loading success.
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.735 254096 WARNING nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.748 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.749 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.752 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.753 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.753 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.754 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.754 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.754 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:12 np0005535469 nova_compute[254092]: 2025-11-25 16:32:12.758 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:12 np0005535469 podman[292641]: 2025-11-25 16:32:12.686890333 +0000 UTC m=+0.026956992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:32:12 np0005535469 systemd[1]: Started libpod-conmon-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope.
Nov 25 11:32:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:12 np0005535469 podman[292641]: 2025-11-25 16:32:12.882151681 +0000 UTC m=+0.222218310 container init 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:32:12 np0005535469 podman[292641]: 2025-11-25 16:32:12.89353813 +0000 UTC m=+0.233604769 container start 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:32:12 np0005535469 podman[292641]: 2025-11-25 16:32:12.918762393 +0000 UTC m=+0.258829022 container attach 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:32:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627263816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.289 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 7.5 MiB/s wr, 206 op/s
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.314 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.320 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:13.606 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:13.607 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:13.607 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234581576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.865 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.867 254096 DEBUG nova.virt.libvirt.vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-744086586',display_name='tempest-FloatingIPsAssociationTestJSON-server-744086586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-744086586',id=31,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-ev11hsqu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:08Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=9bd4d655-c683-4433-a739-168946211a75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.868 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.869 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.870 254096 DEBUG nova.objects.instance [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bd4d655-c683-4433-a739-168946211a75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.888 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <uuid>9bd4d655-c683-4433-a739-168946211a75</uuid>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <name>instance-0000001f</name>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-744086586</nova:name>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:12</nova:creationTime>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:user uuid="a46b9493b027436fbd21d09ff5ac15e4">tempest-FloatingIPsAssociationTestJSON-993856073-project-member</nova:user>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:project uuid="4ae7f32b97104afd930af5d5f5754532">tempest-FloatingIPsAssociationTestJSON-993856073</nova:project>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <nova:port uuid="e1641afa-e435-45ca-a0fe-d2bb9b12981a">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <entry name="serial">9bd4d655-c683-4433-a739-168946211a75</entry>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <entry name="uuid">9bd4d655-c683-4433-a739-168946211a75</entry>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9bd4d655-c683-4433-a739-168946211a75_disk">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9bd4d655-c683-4433-a739-168946211a75_disk.config">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:3d:bb:1b"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <target dev="tape1641afa-e4"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/console.log" append="off"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:13 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:13 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:13 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:13 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.894 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Preparing to wait for external event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.894 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.894 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.895 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.895 254096 DEBUG nova.virt.libvirt.vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-744086586',display_name='tempest-FloatingIPsAssociationTestJSON-server-744086586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-744086586',id=31,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-ev11hsqu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:08Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=9bd4d655-c683-4433-a739-168946211a75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.896 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.896 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.897 254096 DEBUG os_vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.898 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.898 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1641afa-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1641afa-e4, col_values=(('external_ids', {'iface-id': 'e1641afa-e435-45ca-a0fe-d2bb9b12981a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:bb:1b', 'vm-uuid': '9bd4d655-c683-4433-a739-168946211a75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:13 np0005535469 NetworkManager[48891]: <info>  [1764088333.9043] manager: (tape1641afa-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.911 254096 INFO os_vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4')#033[00m
Nov 25 11:32:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.965 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.966 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.966 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No VIF found with MAC fa:16:3e:3d:bb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.967 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Using config drive#033[00m
Nov 25 11:32:13 np0005535469 nova_compute[254092]: 2025-11-25 16:32:13.985 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:14 np0005535469 modest_goodall[292689]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:32:14 np0005535469 modest_goodall[292689]: --> relative data size: 1.0
Nov 25 11:32:14 np0005535469 modest_goodall[292689]: --> All data devices are unavailable
Nov 25 11:32:14 np0005535469 systemd[1]: libpod-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope: Deactivated successfully.
Nov 25 11:32:14 np0005535469 systemd[1]: libpod-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope: Consumed 1.069s CPU time.
Nov 25 11:32:14 np0005535469 podman[292641]: 2025-11-25 16:32:14.055406882 +0000 UTC m=+1.395473501 container died 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.131 254096 DEBUG nova.network.neutron [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.132 254096 DEBUG nova.network.neutron [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.147 254096 DEBUG oslo_concurrency.lockutils [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.364 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Creating config drive at /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.372 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh00k38w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.502 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh00k38w" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917-merged.mount: Deactivated successfully.
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.528 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.532 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config 9bd4d655-c683-4433-a739-168946211a75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.853 254096 DEBUG nova.compute.manager [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.854 254096 DEBUG oslo_concurrency.lockutils [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.855 254096 DEBUG oslo_concurrency.lockutils [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.855 254096 DEBUG oslo_concurrency.lockutils [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.855 254096 DEBUG nova.compute.manager [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] No waiting events found dispatching network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:14 np0005535469 nova_compute[254092]: 2025-11-25 16:32:14.856 254096 WARNING nova.compute.manager [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received unexpected event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b for instance with vm_state active and task_state None.#033[00m
Nov 25 11:32:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 140 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.5 MiB/s wr, 321 op/s
Nov 25 11:32:15 np0005535469 podman[292641]: 2025-11-25 16:32:15.332749212 +0000 UTC m=+2.672815831 container remove 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.382 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.383 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:15 np0005535469 systemd[1]: libpod-conmon-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope: Deactivated successfully.
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.521 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.770 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.771 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.779 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:15 np0005535469 nova_compute[254092]: 2025-11-25 16:32:15.779 254096 INFO nova.compute.claims [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:16 np0005535469 podman[292993]: 2025-11-25 16:32:16.019934615 +0000 UTC m=+0.022818879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.136 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.171 254096 DEBUG nova.compute.manager [None req-dc585528-2c9f-4dc3-81ea-7304281f3d40 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.223 254096 INFO nova.compute.manager [None req-dc585528-2c9f-4dc3-81ea-7304281f3d40 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] instance snapshotting#033[00m
Nov 25 11:32:16 np0005535469 podman[292993]: 2025-11-25 16:32:16.285215731 +0000 UTC m=+0.288099995 container create 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 25 11:32:16 np0005535469 systemd[1]: Started libpod-conmon-7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa.scope.
Nov 25 11:32:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123802513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.751 254096 WARNING nova.compute.manager [None req-dc585528-2c9f-4dc3-81ea-7304281f3d40 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Image not found during snapshot: nova.exception.ImageNotFound: Image 9b3e0672-bbee-46c0-933c-e8761fa34df1 could not be found.#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.768 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.776 254096 DEBUG nova.compute.provider_tree [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.795 254096 DEBUG nova.scheduler.client.report [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.864 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:16 np0005535469 nova_compute[254092]: 2025-11-25 16:32:16.865 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:32:16 np0005535469 podman[292993]: 2025-11-25 16:32:16.882289514 +0000 UTC m=+0.885173828 container init 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:32:16 np0005535469 podman[292993]: 2025-11-25 16:32:16.894147555 +0000 UTC m=+0.897031809 container start 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:32:16 np0005535469 compassionate_matsumoto[293030]: 167 167
Nov 25 11:32:16 np0005535469 systemd[1]: libpod-7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa.scope: Deactivated successfully.
Nov 25 11:32:17 np0005535469 podman[292993]: 2025-11-25 16:32:17.266362998 +0000 UTC m=+1.269247292 container attach 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:32:17 np0005535469 podman[292993]: 2025-11-25 16:32:17.267067958 +0000 UTC m=+1.269952222 container died 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:32:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 281 op/s
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.535 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088322.5347798, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.536 254096 INFO nova.compute.manager [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.552 254096 DEBUG nova.compute.manager [None req-d169351b-30a7-493f-a590-536db81ab23e - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b4c76d64a2587e64b1f97564bb7ab2b1a03066c0a58d24e83ccd8071522880eb-merged.mount: Deactivated successfully.
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.667 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config 9bd4d655-c683-4433-a739-168946211a75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.667 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Deleting local config drive /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:17 np0005535469 NetworkManager[48891]: <info>  [1764088337.7649] manager: (tape1641afa-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 25 11:32:17 np0005535469 kernel: tape1641afa-e4: entered promiscuous mode
Nov 25 11:32:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:17Z|00200|binding|INFO|Claiming lport e1641afa-e435-45ca-a0fe-d2bb9b12981a for this chassis.
Nov 25 11:32:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:17Z|00201|binding|INFO|e1641afa-e435-45ca-a0fe-d2bb9b12981a: Claiming fa:16:3e:3d:bb:1b 10.100.0.4
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:17Z|00202|binding|INFO|Setting lport e1641afa-e435-45ca-a0fe-d2bb9b12981a ovn-installed in OVS
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:17 np0005535469 systemd-udevd[293063]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:17 np0005535469 systemd-machined[216343]: New machine qemu-36-instance-0000001f.
Nov 25 11:32:17 np0005535469 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Nov 25 11:32:17 np0005535469 NetworkManager[48891]: <info>  [1764088337.8489] device (tape1641afa-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:32:17 np0005535469 NetworkManager[48891]: <info>  [1764088337.8501] device (tape1641afa-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:32:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:17Z|00203|binding|INFO|Setting lport e1641afa-e435-45ca-a0fe-d2bb9b12981a up in Southbound
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.971 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bb:1b 10.100.0.4'], port_security=['fa:16:3e:3d:bb:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9bd4d655-c683-4433-a739-168946211a75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e1641afa-e435-45ca-a0fe-d2bb9b12981a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.972 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e1641afa-e435-45ca-a0fe-d2bb9b12981a in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 bound to our chassis#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.974 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8b56bc-492b-4082-b8de-60d496652da7#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.980 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:32:17 np0005535469 nova_compute[254092]: 2025-11-25 16:32:17.981 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.993 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a5ca85-5026-411d-86a6-09cf1a15ac2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.994 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e8b56bc-41 in ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.996 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e8b56bc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[baa1b9b1-3b3a-450f-8fc1-abc77258922e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.997 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcc292e-5bb7-4ebf-957e-a0e5c3c3fe2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.008 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[622832bf-d176-4328-b610-1bc4081df95b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f954c72-0f8b-4f43-8548-f8c141b5c461]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.034 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:32:18 np0005535469 podman[292993]: 2025-11-25 16:32:18.036307734 +0000 UTC m=+2.039191998 container remove 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.082 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb845b9-db5a-4007-9198-355b9f4b71fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.086 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.093 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb991be-91a9-4109-b93f-51a19f4ce7f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 NetworkManager[48891]: <info>  [1764088338.0945] manager: (tap7e8b56bc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 25 11:32:18 np0005535469 systemd[1]: libpod-conmon-7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa.scope: Deactivated successfully.
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.136 254096 DEBUG nova.policy [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.134 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d10eb-bf6d-4bb1-b1c6-939f8450dab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.157 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5983060b-20b7-4707-93a6-3453788277f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 NetworkManager[48891]: <info>  [1764088338.1859] device (tap7e8b56bc-40): carrier: link connected
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.191 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dbaa2650-3d36-4ff9-b94f-b03fdc861248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ff902f-2a0b-440d-aaab-e3b0d145b642]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293124, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.226 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3fe7d5-54ce-4556-b7fe-24495f7b9d8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:16c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479577, 'tstamp': 479577}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293135, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.240 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e3c06e-e2ba-4256-b018-a8b689ab4dc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293136, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.285 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60ce4900-77d8-4257-b8a2-22411f41d330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 podman[293122]: 2025-11-25 16:32:18.30824891 +0000 UTC m=+0.108961383 container create d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:32:18 np0005535469 podman[293122]: 2025-11-25 16:32:18.225611632 +0000 UTC m=+0.026324145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.355 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.356 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.357 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Creating image(s)#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.373 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f5a85b-2f3d-4958-9a21-e874581bdf81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8b56bc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:18 np0005535469 kernel: tap7e8b56bc-40: entered promiscuous mode
Nov 25 11:32:18 np0005535469 NetworkManager[48891]: <info>  [1764088338.3781] manager: (tap7e8b56bc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.382 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8b56bc-40, col_values=(('external_ids', {'iface-id': 'f3398af3-7278-4ca0-adcc-f3bb48f595e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:32:18 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:18Z|00204|binding|INFO|Releasing lport f3398af3-7278-4ca0-adcc-f3bb48f595e9 from this chassis (sb_readonly=0)
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.386 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e8b56bc-492b-4082-b8de-60d496652da7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e8b56bc-492b-4082-b8de-60d496652da7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.397 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[967b2812-c47c-4c33-9442-dcadd8ec5814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.401 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/7e8b56bc-492b-4082-b8de-60d496652da7.pid.haproxy
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:32:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.402 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'env', 'PROCESS_TAG=haproxy-7e8b56bc-492b-4082-b8de-60d496652da7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e8b56bc-492b-4082-b8de-60d496652da7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.404 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:18 np0005535469 systemd[1]: Started libpod-conmon-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope.
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.449 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.484 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.502 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.533 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088338.4962533, 9bd4d655-c683-4433-a739-168946211a75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.533 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Started (Lifecycle Event)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.560 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.564 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088338.4963384, 9bd4d655-c683-4433-a739-168946211a75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.565 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Paused (Lifecycle Event)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.567 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.568 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.568 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.569 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.600 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.649 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.696 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:18 np0005535469 podman[293122]: 2025-11-25 16:32:18.700074893 +0000 UTC m=+0.500787406 container init d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:32:18 np0005535469 podman[293122]: 2025-11-25 16:32:18.713460726 +0000 UTC m=+0.514173209 container start d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.729 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.773 254096 DEBUG nova.compute.manager [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.774 254096 DEBUG oslo_concurrency.lockutils [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.774 254096 DEBUG oslo_concurrency.lockutils [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.774 254096 DEBUG oslo_concurrency.lockutils [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.775 254096 DEBUG nova.compute.manager [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Processing event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.775 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.790 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088338.7878494, 9bd4d655-c683-4433-a739-168946211a75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.791 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Resumed (Lifecycle Event)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.798 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.812 254096 INFO nova.virt.libvirt.driver [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance spawned successfully.
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.812 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.817 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.825 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:32:18 np0005535469 podman[293122]: 2025-11-25 16:32:18.827841084 +0000 UTC m=+0.628553567 container attach d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.839 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.839 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.839 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.840 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.840 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.840 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.860 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:32:18 np0005535469 nova_compute[254092]: 2025-11-25 16:32:18.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:18 np0005535469 podman[293274]: 2025-11-25 16:32:18.850920519 +0000 UTC m=+0.103523705 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.029 254096 INFO nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Took 10.35 seconds to spawn the instance on the hypervisor.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.030 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:19 np0005535469 podman[293274]: 2025-11-25 16:32:19.031735777 +0000 UTC m=+0.284338933 container create 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:32:19 np0005535469 systemd[1]: Started libpod-conmon-034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8.scope.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.084 254096 INFO nova.virt.libvirt.driver [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting instance files /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.086 254096 INFO nova.virt.libvirt.driver [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deletion of /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del complete
Nov 25 11:32:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d27d58d374f702763ab8e079fd623c67dbe3ed8e1cbfe8cc867e245d586f30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.140 254096 INFO nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Took 11.44 seconds to build instance.
Nov 25 11:32:19 np0005535469 podman[293274]: 2025-11-25 16:32:19.169148 +0000 UTC m=+0.421751176 container init 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:32:19 np0005535469 podman[293274]: 2025-11-25 16:32:19.17693502 +0000 UTC m=+0.429538176 container start 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.183 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.188 254096 INFO nova.compute.manager [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 6.77 seconds to destroy the instance on the hypervisor.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.189 254096 DEBUG oslo.service.loopingcall [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.189 254096 DEBUG nova.compute.manager [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.189 254096 DEBUG nova.network.neutron [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [NOTICE]   (293310) : New worker (293312) forked
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [NOTICE]   (293310) : Loading success.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.229 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Successfully created port: 82f63517-7636-46bf-b4e1-ba191ddad018 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.283 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.350 254096 DEBUG nova.network.neutron [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.358 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.391 254096 DEBUG nova.network.neutron [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.465 254096 INFO nova.compute.manager [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 0.28 seconds to deallocate network for instance.#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.486 254096 DEBUG nova.objects.instance [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.511 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.511 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Ensure instance console log exists: /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.512 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.512 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.512 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.514 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.514 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]: {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:    "0": [
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:        {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "devices": [
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "/dev/loop3"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            ],
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_name": "ceph_lv0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_size": "21470642176",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "name": "ceph_lv0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "tags": {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cluster_name": "ceph",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.crush_device_class": "",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.encrypted": "0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osd_id": "0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.type": "block",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.vdo": "0"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            },
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "type": "block",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "vg_name": "ceph_vg0"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:        }
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:    ],
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:    "1": [
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:        {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "devices": [
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "/dev/loop4"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            ],
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_name": "ceph_lv1",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_size": "21470642176",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "name": "ceph_lv1",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "tags": {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cluster_name": "ceph",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.crush_device_class": "",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.encrypted": "0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osd_id": "1",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.type": "block",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.vdo": "0"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            },
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "type": "block",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "vg_name": "ceph_vg1"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:        }
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:    ],
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:    "2": [
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:        {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "devices": [
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "/dev/loop5"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            ],
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_name": "ceph_lv2",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_size": "21470642176",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "name": "ceph_lv2",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "tags": {
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.cluster_name": "ceph",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.crush_device_class": "",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.encrypted": "0",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osd_id": "2",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.type": "block",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:                "ceph.vdo": "0"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            },
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "type": "block",
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:            "vg_name": "ceph_vg2"
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:        }
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]:    ]
Nov 25 11:32:19 np0005535469 practical_cartwright[293206]: }
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.565 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.565 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.566 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.566 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.566 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.567 254096 INFO nova.compute.manager [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Terminating instance#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.568 254096 DEBUG nova.compute.manager [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:32:19 np0005535469 systemd[1]: libpod-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope: Deactivated successfully.
Nov 25 11:32:19 np0005535469 conmon[293206]: conmon d00d98f71129d2f079f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope/container/memory.events
Nov 25 11:32:19 np0005535469 podman[293122]: 2025-11-25 16:32:19.597598404 +0000 UTC m=+1.398310907 container died d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:32:19 np0005535469 kernel: tap479811bd-70 (unregistering): left promiscuous mode
Nov 25 11:32:19 np0005535469 NetworkManager[48891]: <info>  [1764088339.6075] device (tap479811bd-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:19Z|00205|binding|INFO|Releasing lport 479811bd-7043-4423-9815-a17763247b3b from this chassis (sb_readonly=0)
Nov 25 11:32:19 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:19Z|00206|binding|INFO|Setting lport 479811bd-7043-4423-9815-a17763247b3b down in Southbound
Nov 25 11:32:19 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:19Z|00207|binding|INFO|Removing iface tap479811bd-70 ovn-installed in OVS
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986-merged.mount: Deactivated successfully.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.631 254096 DEBUG oslo_concurrency.processutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.633 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0a:5f 10.100.0.14'], port_security=['fa:16:3e:34:0a:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3573b86d-afab-4a6f-970e-7db532c23eb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479811bd-7043-4423-9815-a17763247b3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.635 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479811bd-7043-4423-9815-a17763247b3b in datapath 50e18e22-7850-458c-8d66-5932e0495377 unbound from our chassis#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.636 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50e18e22-7850-458c-8d66-5932e0495377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.637 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6c6d61-430d-4edc-a3a2-a1f26735e5a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.638 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace which is not needed anymore#033[00m
Nov 25 11:32:19 np0005535469 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 25 11:32:19 np0005535469 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 7.734s CPU time.
Nov 25 11:32:19 np0005535469 systemd-machined[216343]: Machine qemu-35-instance-0000001e terminated.
Nov 25 11:32:19 np0005535469 podman[293122]: 2025-11-25 16:32:19.659844681 +0000 UTC m=+1.460557164 container remove d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 systemd[1]: libpod-conmon-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope: Deactivated successfully.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : haproxy version is 2.8.14-c23fe91
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : path to executable is /usr/sbin/haproxy
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [WARNING]  (292673) : Exiting Master process...
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [ALERT]    (292673) : Current worker (292675) exited with code 143 (Terminated)
Nov 25 11:32:19 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [WARNING]  (292673) : All workers exited. Exiting... (0)
Nov 25 11:32:19 np0005535469 systemd[1]: libpod-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe.scope: Deactivated successfully.
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 podman[293448]: 2025-11-25 16:32:19.818232871 +0000 UTC m=+0.064191900 container died 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.818 254096 INFO nova.virt.libvirt.driver [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance destroyed successfully.#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.819 254096 DEBUG nova.objects.instance [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'resources' on Instance uuid 3573b86d-afab-4a6f-970e-7db532c23eb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.836 254096 DEBUG nova.virt.libvirt.vif [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1171232620',id=30,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-wlltia02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:16Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=3573b86d-afab-4a6f-970e-7db532c23eb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.838 254096 DEBUG nova.network.os_vif_util [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.839 254096 DEBUG nova.network.os_vif_util [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.840 254096 DEBUG os_vif [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.846 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap479811bd-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.859 254096 INFO os_vif [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70')#033[00m
Nov 25 11:32:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe-userdata-shm.mount: Deactivated successfully.
Nov 25 11:32:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e0bd5c0024043409a719a708d58c044f8ad9aa267adbb154aaa37825ce7da9b6-merged.mount: Deactivated successfully.
Nov 25 11:32:19 np0005535469 podman[293448]: 2025-11-25 16:32:19.881426443 +0000 UTC m=+0.127385442 container cleanup 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:32:19 np0005535469 systemd[1]: libpod-conmon-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe.scope: Deactivated successfully.
Nov 25 11:32:19 np0005535469 podman[293554]: 2025-11-25 16:32:19.968578274 +0000 UTC m=+0.052387221 container remove 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4f34c4-5664-403b-bdec-a7803a71b364]: (4, ('Tue Nov 25 04:32:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe)\n46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe\nTue Nov 25 04:32:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe)\n46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.980 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc022f19-b545-4870-9be8-b9d557761a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:19 np0005535469 kernel: tap50e18e22-70: left promiscuous mode
Nov 25 11:32:19 np0005535469 nova_compute[254092]: 2025-11-25 16:32:19.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7bcb90-dc69-4562-bd3d-c999dc794c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2dda8c1b-7e83-4740-985c-0e8e156233cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.026 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7714e0f4-5f54-4c24-a723-1ecc32304fe8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.055 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5901d7-7fa9-4514-9356-dd1e7435d2cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478943, 'reachable_time': 34185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293615, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:20 np0005535469 systemd[1]: run-netns-ovnmeta\x2d50e18e22\x2d7850\x2d458c\x2d8d66\x2d5932e0495377.mount: Deactivated successfully.
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.059 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.059 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[17ee3968-0ebc-4afc-98b8-a134e5e341ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.066 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:32:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514997394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.208 254096 DEBUG oslo_concurrency.processutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.216 254096 DEBUG nova.compute.provider_tree [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.238 254096 DEBUG nova.scheduler.client.report [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.261 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.298 254096 INFO nova.scheduler.client.report [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Deleted allocations for instance 36f65013-2906-4794-9e23-e92dc7814b6e#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.346 254096 INFO nova.virt.libvirt.driver [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deleting instance files /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3_del#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.347 254096 INFO nova.virt.libvirt.driver [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deletion of /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3_del complete#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.369 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.401 254096 DEBUG nova.compute.manager [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-unplugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.402 254096 DEBUG oslo_concurrency.lockutils [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.402 254096 DEBUG oslo_concurrency.lockutils [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.402 254096 DEBUG oslo_concurrency.lockutils [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.403 254096 DEBUG nova.compute.manager [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] No waiting events found dispatching network-vif-unplugged-479811bd-7043-4423-9815-a17763247b3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.403 254096 DEBUG nova.compute.manager [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-unplugged-479811bd-7043-4423-9815-a17763247b3b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.405756726 +0000 UTC m=+0.044421225 container create 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.418 254096 INFO nova.compute.manager [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.418 254096 DEBUG oslo.service.loopingcall [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.419 254096 DEBUG nova.compute.manager [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.419 254096 DEBUG nova.network.neutron [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:32:20 np0005535469 systemd[1]: Started libpod-conmon-27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e.scope.
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.385515468 +0000 UTC m=+0.024179987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:32:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.511471189 +0000 UTC m=+0.150135688 container init 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.520406531 +0000 UTC m=+0.159071030 container start 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.524181993 +0000 UTC m=+0.162846552 container attach 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:32:20 np0005535469 great_feynman[293677]: 167 167
Nov 25 11:32:20 np0005535469 systemd[1]: libpod-27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e.scope: Deactivated successfully.
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.529156618 +0000 UTC m=+0.167821157 container died 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:32:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-640fae6eb89f25ed8c78e49639d6639a442b8fa9907d27487fdfd638d4ffbb7f-merged.mount: Deactivated successfully.
Nov 25 11:32:20 np0005535469 podman[293660]: 2025-11-25 16:32:20.579528103 +0000 UTC m=+0.218192622 container remove 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:32:20 np0005535469 systemd[1]: libpod-conmon-27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e.scope: Deactivated successfully.
Nov 25 11:32:20 np0005535469 podman[293700]: 2025-11-25 16:32:20.778740859 +0000 UTC m=+0.049027349 container create 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:32:20 np0005535469 systemd[1]: Started libpod-conmon-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope.
Nov 25 11:32:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:20 np0005535469 podman[293700]: 2025-11-25 16:32:20.758784108 +0000 UTC m=+0.029070628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.853 254096 DEBUG nova.compute.manager [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG oslo_concurrency.lockutils [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG oslo_concurrency.lockutils [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG oslo_concurrency.lockutils [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG nova.compute.manager [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] No waiting events found dispatching network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:20 np0005535469 nova_compute[254092]: 2025-11-25 16:32:20.856 254096 WARNING nova.compute.manager [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received unexpected event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a for instance with vm_state active and task_state None.#033[00m
Nov 25 11:32:20 np0005535469 podman[293700]: 2025-11-25 16:32:20.875937082 +0000 UTC m=+0.146223592 container init 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:32:20 np0005535469 podman[293700]: 2025-11-25 16:32:20.885774618 +0000 UTC m=+0.156061108 container start 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:32:20 np0005535469 podman[293700]: 2025-11-25 16:32:20.889833318 +0000 UTC m=+0.160119828 container attach 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 11:32:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 171 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.1 MiB/s wr, 350 op/s
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.382 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Successfully updated port: 82f63517-7636-46bf-b4e1-ba191ddad018 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.399 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.400 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.400 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.665 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.748 254096 DEBUG nova.network.neutron [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.774 254096 INFO nova.compute.manager [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.831 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.832 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:21 np0005535469 nova_compute[254092]: 2025-11-25 16:32:21.916 254096 DEBUG oslo_concurrency.processutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]: {
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "osd_id": 1,
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "type": "bluestore"
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:    },
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "osd_id": 2,
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "type": "bluestore"
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:    },
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "osd_id": 0,
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:        "type": "bluestore"
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]:    }
Nov 25 11:32:21 np0005535469 modest_mahavira[293716]: }
Nov 25 11:32:21 np0005535469 podman[293700]: 2025-11-25 16:32:21.977673484 +0000 UTC m=+1.247959974 container died 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:32:21 np0005535469 systemd[1]: libpod-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope: Deactivated successfully.
Nov 25 11:32:21 np0005535469 systemd[1]: libpod-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope: Consumed 1.066s CPU time.
Nov 25 11:32:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b-merged.mount: Deactivated successfully.
Nov 25 11:32:22 np0005535469 podman[293700]: 2025-11-25 16:32:22.043878898 +0000 UTC m=+1.314165378 container remove 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:32:22 np0005535469 systemd[1]: libpod-conmon-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope: Deactivated successfully.
Nov 25 11:32:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:32:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:32:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:32:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:32:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 607c2767-74d2-4a41-8912-1a54487e6eeb does not exist
Nov 25 11:32:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 28a0e214-6bc0-4656-8e15-9805c998868b does not exist
Nov 25 11:32:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196492239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.426 254096 DEBUG oslo_concurrency.processutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.435 254096 DEBUG nova.compute.provider_tree [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.450 254096 DEBUG nova.scheduler.client.report [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.468 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.492 254096 INFO nova.scheduler.client.report [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Deleted allocations for instance 3573b86d-afab-4a6f-970e-7db532c23eb3#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.548 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.708 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.731 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.732 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance network_info: |[{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.737 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start _get_guest_xml network_info=[{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.744 254096 WARNING nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.750 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.751 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.755 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.755 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.756 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.756 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.759 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:22 np0005535469 nova_compute[254092]: 2025-11-25 16:32:22.762 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.013 254096 DEBUG nova.compute.manager [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.014 254096 DEBUG oslo_concurrency.lockutils [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.014 254096 DEBUG oslo_concurrency.lockutils [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.014 254096 DEBUG oslo_concurrency.lockutils [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.015 254096 DEBUG nova.compute.manager [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] No waiting events found dispatching network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.015 254096 WARNING nova.compute.manager [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received unexpected event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40773822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.227 254096 DEBUG nova.compute.manager [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.228 254096 DEBUG nova.compute.manager [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.228 254096 DEBUG oslo_concurrency.lockutils [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.229 254096 DEBUG oslo_concurrency.lockutils [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.229 254096 DEBUG nova.network.neutron [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.245 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.269 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.276 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 171 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 233 op/s
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733694160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.720 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.723 254096 DEBUG nova.virt.libvirt.vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.724 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.726 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.729 254096 DEBUG nova.objects.instance [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.746 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <uuid>800c66e3-ee9f-4766-92f2-ecda5671cde3</uuid>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <name>instance-00000020</name>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:22</nova:creationTime>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <entry name="serial">800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <entry name="uuid">800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f0:7b:ca"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <target dev="tap82f63517-76"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log" append="off"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:23 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:23 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:23 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:23 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.759 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Preparing to wait for external event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.759 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.760 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.760 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.762 254096 DEBUG nova.virt.libvirt.vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.762 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.763 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.764 254096 DEBUG os_vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.767 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.768 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.773 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82f63517-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.774 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82f63517-76, col_values=(('external_ids', {'iface-id': '82f63517-7636-46bf-b4e1-ba191ddad018', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:7b:ca', 'vm-uuid': '800c66e3-ee9f-4766-92f2-ecda5671cde3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:23 np0005535469 NetworkManager[48891]: <info>  [1764088343.7780] manager: (tap82f63517-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.786 254096 INFO os_vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76')#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.847 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.848 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.848 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:f0:7b:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.849 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Using config drive#033[00m
Nov 25 11:32:23 np0005535469 nova_compute[254092]: 2025-11-25 16:32:23.872 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:24 np0005535469 nova_compute[254092]: 2025-11-25 16:32:24.317 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Creating config drive at /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config#033[00m
Nov 25 11:32:24 np0005535469 nova_compute[254092]: 2025-11-25 16:32:24.330 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqezyp34s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:24 np0005535469 nova_compute[254092]: 2025-11-25 16:32:24.476 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqezyp34s" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:24 np0005535469 nova_compute[254092]: 2025-11-25 16:32:24.511 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:24 np0005535469 nova_compute[254092]: 2025-11-25 16:32:24.517 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.227 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.227 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Deleting local config drive /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:25 np0005535469 kernel: tap82f63517-76: entered promiscuous mode
Nov 25 11:32:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:25Z|00208|binding|INFO|Claiming lport 82f63517-7636-46bf-b4e1-ba191ddad018 for this chassis.
Nov 25 11:32:25 np0005535469 NetworkManager[48891]: <info>  [1764088345.3055] manager: (tap82f63517-76): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Nov 25 11:32:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:25Z|00209|binding|INFO|82f63517-7636-46bf-b4e1-ba191ddad018: Claiming fa:16:3e:f0:7b:ca 10.100.0.6
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 285 op/s
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.312 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:7b:ca 10.100.0.6'], port_security=['fa:16:3e:f0:7b:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=82f63517-7636-46bf-b4e1-ba191ddad018) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.313 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 82f63517-7636-46bf-b4e1-ba191ddad018 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.314 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:25Z|00210|binding|INFO|Setting lport 82f63517-7636-46bf-b4e1-ba191ddad018 ovn-installed in OVS
Nov 25 11:32:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:25Z|00211|binding|INFO|Setting lport 82f63517-7636-46bf-b4e1-ba191ddad018 up in Southbound
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11ce2ded-3ac1-48bb-a9d4-d6f92e1f8abd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.334 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52e7d5b9-01 in ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.336 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52e7d5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5266d0fd-a93b-4bb6-b099-3766c8edbd1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.339 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[68f6cd5f-2654-4697-811b-77b675c90663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.349 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d28eb5d8-0fe2-438d-a933-d586d4b0b7ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 systemd-udevd[293970]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:25 np0005535469 systemd-machined[216343]: New machine qemu-37-instance-00000020.
Nov 25 11:32:25 np0005535469 NetworkManager[48891]: <info>  [1764088345.3693] device (tap82f63517-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdc63b4-6dee-4a47-bde0-bec4dbb1b710]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 NetworkManager[48891]: <info>  [1764088345.3722] device (tap82f63517-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:32:25 np0005535469 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.400 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dffdef69-8fa0-4638-9f63-9ff022135a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.405 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9315e53-8aa0-4b51-9644-c77e06144948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 systemd-udevd[293973]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:25 np0005535469 NetworkManager[48891]: <info>  [1764088345.4077] manager: (tap52e7d5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.444 254096 DEBUG nova.network.neutron [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.445 254096 DEBUG nova.network.neutron [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.449 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3c7966-c654-4b6a-9d50-a841aab84352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.452 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7e14f8-927f-46a2-858e-591082806ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.457 254096 DEBUG oslo_concurrency.lockutils [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.457 254096 DEBUG nova.compute.manager [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-deleted-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:25 np0005535469 NetworkManager[48891]: <info>  [1764088345.4768] device (tap52e7d5b9-00): carrier: link connected
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.487 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a46688-f9d2-46bb-b7a9-e91d85f7151e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[885046db-a3da-4df2-a996-62ed292f7484]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294001, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.519 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcdf522-4437-48f0-ac03-65faadbcf231]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:97ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480306, 'tstamp': 480306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294002, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.535 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[98613650-1819-429b-b870-b87351242aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294003, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.563 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0330332a-1abe-4d50-9538-d590325277c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88b7a91c-a71f-4b3a-825b-4fe9ab3d8c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.614 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.614 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.615 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:25 np0005535469 kernel: tap52e7d5b9-00: entered promiscuous mode
Nov 25 11:32:25 np0005535469 NetworkManager[48891]: <info>  [1764088345.6183] manager: (tap52e7d5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.620 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:25Z|00212|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.623 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:25 np0005535469 nova_compute[254092]: 2025-11-25 16:32:25.640 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.640 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.641 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae974164-1085-4136-ada2-b7f27a34b633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.643 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:32:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.644 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'env', 'PROCESS_TAG=haproxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:32:26 np0005535469 podman[294035]: 2025-11-25 16:32:26.033349422 +0000 UTC m=+0.068205249 container create 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 11:32:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:26.072 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:26 np0005535469 systemd[1]: Started libpod-conmon-892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9.scope.
Nov 25 11:32:26 np0005535469 podman[294035]: 2025-11-25 16:32:26.004631104 +0000 UTC m=+0.039487041 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:32:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e328b169ee953840c8739043c0f2cb5f976a6a6651659cd6ce366c78ba5f4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:26 np0005535469 podman[294048]: 2025-11-25 16:32:26.144840862 +0000 UTC m=+0.077606143 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 11:32:26 np0005535469 podman[294035]: 2025-11-25 16:32:26.145309785 +0000 UTC m=+0.180165642 container init 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:32:26 np0005535469 podman[294049]: 2025-11-25 16:32:26.145342366 +0000 UTC m=+0.071748975 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 11:32:26 np0005535469 podman[294035]: 2025-11-25 16:32:26.151468901 +0000 UTC m=+0.186324738 container start 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:32:26 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [NOTICE]   (294132) : New worker (294150) forked
Nov 25 11:32:26 np0005535469 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [NOTICE]   (294132) : Loading success.
Nov 25 11:32:26 np0005535469 podman[294050]: 2025-11-25 16:32:26.204535029 +0000 UTC m=+0.128139122 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.283 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088346.2824101, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.283 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.305 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.308 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088346.282534, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.308 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.326 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.329 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.347 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.445 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.445 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.464 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.542 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.543 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.550 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.550 254096 INFO nova.compute.claims [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG nova.compute.manager [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG oslo_concurrency.lockutils [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG oslo_concurrency.lockutils [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG oslo_concurrency.lockutils [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.556 254096 DEBUG nova.compute.manager [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Processing event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.556 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.559 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088346.5594587, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.560 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.561 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.565 254096 INFO nova.virt.libvirt.driver [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance spawned successfully.#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.565 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.582 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.584 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.584 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.584 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.585 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.585 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.586 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.591 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.616 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.650 254096 INFO nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Took 8.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.650 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.683 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.762 254096 INFO nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Took 11.02 seconds to build instance.#033[00m
Nov 25 11:32:26 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.791 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.994 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:26.994 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.018 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.099 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209657782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.213 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.220 254096 DEBUG nova.compute.provider_tree [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.239 254096 DEBUG nova.scheduler.client.report [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.261 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.262 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.264 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.270 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.270 254096 INFO nova.compute.claims [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.310 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "616ec95d-6c7d-420e-991d-3cbc11339768" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.310 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.349 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.350 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.370 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.374 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.399 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.470 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.488 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.489 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.490 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Creating image(s)#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.507 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.527 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.547 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.550 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.579 254096 DEBUG nova.policy [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a46b9493b027436fbd21d09ff5ac15e4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ae7f32b97104afd930af5d5f5754532', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.596 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.629 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.630 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.631 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.632 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.657 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.664 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.692 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088332.6485481, 36f65013-2906-4794-9e23-e92dc7814b6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.693 254096 INFO nova.compute.manager [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.715 254096 DEBUG nova.compute.manager [None req-360242e8-125a-447b-90d9-e7049a4b7e7c - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:27 np0005535469 nova_compute[254092]: 2025-11-25 16:32:27.944 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.000 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] resizing rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083808979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.059 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.065 254096 DEBUG nova.compute.provider_tree [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.102 254096 DEBUG nova.scheduler.client.report [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.115 254096 DEBUG nova.objects.instance [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'migration_context' on Instance uuid a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.139 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.140 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.144 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.145 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.146 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Ensure instance console log exists: /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.146 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.147 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.147 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.155 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.155 254096 INFO nova.compute.claims [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.210 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.211 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.232 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.248 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.334 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.335 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.336 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Creating image(s)#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.359 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.383 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.410 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.414 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.460 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.488 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.489 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.490 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.490 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.511 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.515 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.663 254096 DEBUG nova.policy [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87058665de814ae0a51a12ff02b0d9aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed571eebde434695bae813d7bb21f4c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG nova.compute.manager [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG oslo_concurrency.lockutils [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG oslo_concurrency.lockutils [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG oslo_concurrency.lockutils [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.719 254096 DEBUG nova.compute.manager [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] No waiting events found dispatching network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.719 254096 WARNING nova.compute.manager [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received unexpected event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.836 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688985666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.894 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] resizing rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.924 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.932 254096 DEBUG nova.compute.provider_tree [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.946 254096 DEBUG nova.scheduler.client.report [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.965 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:28 np0005535469 nova_compute[254092]: 2025-11-25 16:32:28.966 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.002 254096 DEBUG nova.objects.instance [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.008 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.017 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.018 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Ensure instance console log exists: /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.018 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.018 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.019 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.022 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.036 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.121 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.122 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.122 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating image(s)#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.146 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.171 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.198 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.201 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.295 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.296 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.297 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.297 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.320 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.324 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.419 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Successfully created port: 5cf8fe87-4cac-403f-8611-0ddb37516abd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.598 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.673 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] resizing rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.785 254096 DEBUG nova.objects.instance [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.806 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.807 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Ensure instance console log exists: /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.808 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.808 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.808 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.810 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.814 254096 WARNING nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.821 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.821 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.824 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.825 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.825 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.826 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.826 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.826 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:29 np0005535469 nova_compute[254092]: 2025-11-25 16:32:29.831 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.039 254096 DEBUG nova.compute.manager [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.039 254096 DEBUG nova.compute.manager [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.040 254096 DEBUG oslo_concurrency.lockutils [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.040 254096 DEBUG oslo_concurrency.lockutils [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.040 254096 DEBUG nova.network.neutron [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.187 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Successfully created port: 2a798aec-112b-42d0-9128-639b456b201e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:32:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58006728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.287 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.311 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.317 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480326742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.735 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.738 254096 DEBUG nova.objects.instance [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.761 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <uuid>616ec95d-6c7d-420e-991d-3cbc11339768</uuid>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <name>instance-00000023</name>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerShowV257Test-server-2017219999</nova:name>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:29</nova:creationTime>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:user uuid="204d6790ef4644f6a11d8afd611b7f8d">tempest-ServerShowV257Test-1749590920-project-member</nova:user>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <nova:project uuid="f7ec8b6f4599458ebb55ba5d9a7463c3">tempest-ServerShowV257Test-1749590920</nova:project>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <entry name="serial">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <entry name="uuid">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk.config">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log" append="off"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:30 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:30 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:30 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:30 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.832 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.833 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.833 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Using config drive#033[00m
Nov 25 11:32:30 np0005535469 nova_compute[254092]: 2025-11-25 16:32:30.860 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.048 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating config drive at /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.053 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesy6ubfe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.186 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesy6ubfe" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.209 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.213 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.256 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Successfully updated port: 5cf8fe87-4cac-403f-8611-0ddb37516abd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.274 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.275 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.275 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.310 254096 DEBUG nova.network.neutron [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.311 254096 DEBUG nova.network.neutron [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 276 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 294 op/s
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.334 254096 DEBUG oslo_concurrency.lockutils [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.347 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.347 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting local config drive /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:31Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:bb:1b 10.100.0.4
Nov 25 11:32:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:31Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:bb:1b 10.100.0.4
Nov 25 11:32:31 np0005535469 systemd-machined[216343]: New machine qemu-38-instance-00000023.
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.412 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:31 np0005535469 systemd[1]: Started Virtual Machine qemu-38-instance-00000023.
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.926 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Successfully updated port: 2a798aec-112b-42d0-9128-639b456b201e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.947 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.947 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquired lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.948 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.971 254096 DEBUG nova.compute.manager [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.972 254096 DEBUG nova.compute.manager [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing instance network info cache due to event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:31 np0005535469 nova_compute[254092]: 2025-11-25 16:32:31.972 254096 DEBUG oslo_concurrency.lockutils [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.150 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.218 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088352.2181742, 616ec95d-6c7d-420e-991d-3cbc11339768 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.219 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.221 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.221 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.233 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance spawned successfully.#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.233 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.248 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.253 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.258 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.259 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.259 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.260 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.260 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.261 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.284 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.284 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088352.2182398, 616ec95d-6c7d-420e-991d-3cbc11339768 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.285 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.309 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.312 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.331 254096 INFO nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 3.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.332 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.405 254096 INFO nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 4.97 seconds to build instance.#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.424 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.560 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.561 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.578 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.632 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.632 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.638 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.638 254096 INFO nova.compute.claims [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.728 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.757 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.757 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance network_info: |[{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.758 254096 DEBUG oslo_concurrency.lockutils [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.758 254096 DEBUG nova.network.neutron [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.761 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start _get_guest_xml network_info=[{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.767 254096 WARNING nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.772 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.774 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.786 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.787 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.787 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.788 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.789 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.789 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.789 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.790 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.790 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.791 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.791 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.792 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.792 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.792 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.796 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:32 np0005535469 nova_compute[254092]: 2025-11-25 16:32:32.862 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285403488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.239 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.266 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.274 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 276 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 210 op/s
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814882808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.342 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.349 254096 DEBUG nova.compute.provider_tree [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.365 254096 DEBUG nova.scheduler.client.report [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.386 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.387 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.421 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updating instance_info_cache with network_info: [{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.430482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353430519, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 254, "total_data_size": 3156663, "memory_usage": 3222672, "flush_reason": "Manual Compaction"}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.445 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.446 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353452306, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3099041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25965, "largest_seqno": 28084, "table_properties": {"data_size": 3089769, "index_size": 5702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20240, "raw_average_key_size": 20, "raw_value_size": 3070673, "raw_average_value_size": 3107, "num_data_blocks": 251, "num_entries": 988, "num_filter_entries": 988, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088156, "oldest_key_time": 1764088156, "file_creation_time": 1764088353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21872 microseconds, and 8525 cpu microseconds.
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.452 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Releasing lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.453 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance network_info: |[{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.452351) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3099041 bytes OK
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.452374) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.454077) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.454112) EVENT_LOG_v1 {"time_micros": 1764088353454104, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.454136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3147689, prev total WAL file size 3147689, number of live WAL files 2.
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.455378) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3026KB)], [59(7299KB)]
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353455423, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10573254, "oldest_snapshot_seqno": -1}
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.458 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start _get_guest_xml network_info=[{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.466 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.482 254096 WARNING nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.491 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.493 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.497 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.498 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.498 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.499 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.500 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.500 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.500 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.501 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.501 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.502 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.502 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.502 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.503 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.503 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5359 keys, 8850373 bytes, temperature: kUnknown
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353505949, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8850373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8813246, "index_size": 22617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133641, "raw_average_key_size": 24, "raw_value_size": 8715465, "raw_average_value_size": 1626, "num_data_blocks": 929, "num_entries": 5359, "num_filter_entries": 5359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.506130) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8850373 bytes
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.507059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.1 rd, 175.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5882, records dropped: 523 output_compression: NoCompression
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.507073) EVENT_LOG_v1 {"time_micros": 1764088353507066, "job": 32, "event": "compaction_finished", "compaction_time_micros": 50576, "compaction_time_cpu_micros": 22840, "output_level": 6, "num_output_files": 1, "total_output_size": 8850373, "num_input_records": 5882, "num_output_records": 5359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353507565, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353508753, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.455293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.508 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.541 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.661 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.663 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.664 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Creating image(s)
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134914171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.698 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.720 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.742 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.751 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.787 254096 DEBUG nova.policy [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.795 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.797 254096 DEBUG nova.virt.libvirt.vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1769139174',display_name='tempest-FloatingIPsAssociationTestJSON-server-1769139174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1769139174',id=33,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-f0yptx40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:27Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.797 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.798 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.799 254096 DEBUG nova.objects.instance [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.816 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <uuid>a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc</uuid>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <name>instance-00000021</name>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1769139174</nova:name>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:32</nova:creationTime>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:user uuid="a46b9493b027436fbd21d09ff5ac15e4">tempest-FloatingIPsAssociationTestJSON-993856073-project-member</nova:user>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:project uuid="4ae7f32b97104afd930af5d5f5754532">tempest-FloatingIPsAssociationTestJSON-993856073</nova:project>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <nova:port uuid="5cf8fe87-4cac-403f-8611-0ddb37516abd">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <entry name="serial">a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc</entry>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <entry name="uuid">a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc</entry>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:8c:8a:c5"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <target dev="tap5cf8fe87-4c"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/console.log" append="off"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:33 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:33 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:33 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:33 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.817 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Preparing to wait for external event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.817 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.818 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.818 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.818 254096 DEBUG nova.virt.libvirt.vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1769139174',display_name='tempest-FloatingIPsAssociationTestJSON-server-1769139174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1769139174',id=33,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-f0yptx40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:27Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.819 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.819 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.820 254096 DEBUG os_vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.831 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.831 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.831 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.832 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.856 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.860 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.895 254096 INFO nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Rebuilding instance#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cf8fe87-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.903 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cf8fe87-4c, col_values=(('external_ids', {'iface-id': '5cf8fe87-4cac-403f-8611-0ddb37516abd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:8a:c5', 'vm-uuid': 'a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:33 np0005535469 NetworkManager[48891]: <info>  [1764088353.9052] manager: (tap5cf8fe87-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.916 254096 INFO os_vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c')#033[00m
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428703887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.970 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.970 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.971 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No VIF found with MAC fa:16:3e:8c:8a:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:32:33 np0005535469 nova_compute[254092]: 2025-11-25 16:32:33.971 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Using config drive#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.000 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.029 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.059 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.076 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.136 254096 DEBUG nova.compute.manager [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-changed-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.136 254096 DEBUG nova.compute.manager [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Refreshing instance network info cache due to event network-changed-2a798aec-112b-42d0-9128-639b456b201e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.136 254096 DEBUG oslo_concurrency.lockutils [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.137 254096 DEBUG oslo_concurrency.lockutils [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.137 254096 DEBUG nova.network.neutron [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Refreshing network info cache for port 2a798aec-112b-42d0-9128-639b456b201e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.146 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.175 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.210 254096 DEBUG nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.215 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.294 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.301 254096 DEBUG nova.objects.instance [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.304 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.312 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.312 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Ensure instance console log exists: /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.313 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.313 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.313 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.314 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'resources' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.324 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.332 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.335 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.473 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Creating config drive at /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.478 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5fwxjvwf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160125470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.548 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.550 254096 DEBUG nova.virt.libvirt.vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1220833469',id=34,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-g9yti154',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner
_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:28Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=2724cb7d-6b8e-4861-ae3d-72e34da31fe5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.550 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.551 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.552 254096 DEBUG nova.objects.instance [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.567 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <uuid>2724cb7d-6b8e-4861-ae3d-72e34da31fe5</uuid>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <name>instance-00000022</name>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1220833469</nova:name>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:33</nova:creationTime>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:user uuid="87058665de814ae0a51a12ff02b0d9aa">tempest-ImagesOneServerNegativeTestJSON-964953831-project-member</nova:user>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:project uuid="ed571eebde434695bae813d7bb21f4c3">tempest-ImagesOneServerNegativeTestJSON-964953831</nova:project>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <nova:port uuid="2a798aec-112b-42d0-9128-639b456b201e">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <entry name="serial">2724cb7d-6b8e-4861-ae3d-72e34da31fe5</entry>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <entry name="uuid">2724cb7d-6b8e-4861-ae3d-72e34da31fe5</entry>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:af:3e:5a"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <target dev="tap2a798aec-11"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/console.log" append="off"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:34 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:34 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:34 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:34 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.568 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Preparing to wait for external event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.568 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.568 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.569 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.569 254096 DEBUG nova.virt.libvirt.vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1220833469',id=34,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-g9yti154',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953
831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:28Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=2724cb7d-6b8e-4861-ae3d-72e34da31fe5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.569 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.570 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.570 254096 DEBUG os_vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.571 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.572 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.574 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a798aec-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.575 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a798aec-11, col_values=(('external_ids', {'iface-id': '2a798aec-112b-42d0-9128-639b456b201e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:3e:5a', 'vm-uuid': '2724cb7d-6b8e-4861-ae3d-72e34da31fe5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:34 np0005535469 NetworkManager[48891]: <info>  [1764088354.5775] manager: (tap2a798aec-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.585 254096 INFO os_vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11')#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.608 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5fwxjvwf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.631 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.634 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.665 254096 DEBUG nova.network.neutron [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updated VIF entry in instance network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.666 254096 DEBUG nova.network.neutron [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.693 254096 DEBUG oslo_concurrency.lockutils [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.696 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.696 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.696 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No VIF found with MAC fa:16:3e:af:3e:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.697 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Using config drive#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.718 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.780 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.781 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deleting local config drive /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.810 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088339.8088124, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.810 254096 INFO nova.compute.manager [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.813 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Successfully created port: f95d61ca-d58c-4f07-879f-5e5412976e42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.837 254096 DEBUG nova.compute.manager [None req-bff26bf7-c739-4fe7-be9d-d3eaabc09c0c - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:34 np0005535469 kernel: tap5cf8fe87-4c: entered promiscuous mode
Nov 25 11:32:34 np0005535469 NetworkManager[48891]: <info>  [1764088354.8393] manager: (tap5cf8fe87-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Nov 25 11:32:34 np0005535469 systemd-udevd[294905]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:34Z|00213|binding|INFO|Claiming lport 5cf8fe87-4cac-403f-8611-0ddb37516abd for this chassis.
Nov 25 11:32:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:34Z|00214|binding|INFO|5cf8fe87-4cac-403f-8611-0ddb37516abd: Claiming fa:16:3e:8c:8a:c5 10.100.0.9
Nov 25 11:32:34 np0005535469 NetworkManager[48891]: <info>  [1764088354.8605] device (tap5cf8fe87-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:32:34 np0005535469 NetworkManager[48891]: <info>  [1764088354.8617] device (tap5cf8fe87-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.863 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:8a:c5 10.100.0.9'], port_security=['fa:16:3e:8c:8a:c5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cf8fe87-4cac-403f-8611-0ddb37516abd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.865 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cf8fe87-4cac-403f-8611-0ddb37516abd in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 bound to our chassis#033[00m
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.868 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8b56bc-492b-4082-b8de-60d496652da7#033[00m
Nov 25 11:32:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:34Z|00215|binding|INFO|Setting lport 5cf8fe87-4cac-403f-8611-0ddb37516abd ovn-installed in OVS
Nov 25 11:32:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:34Z|00216|binding|INFO|Setting lport 5cf8fe87-4cac-403f-8611-0ddb37516abd up in Southbound
Nov 25 11:32:34 np0005535469 nova_compute[254092]: 2025-11-25 16:32:34.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:34 np0005535469 systemd-machined[216343]: New machine qemu-39-instance-00000021.
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.890 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79d8bd0e-a7fe-46ee-9839-43934f7606d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:34 np0005535469 systemd[1]: Started Virtual Machine qemu-39-instance-00000021.
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.928 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[955ee817-e987-45ff-a854-298d430df4fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.931 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9644c2ff-2cd2-4185-a08b-9c254acebd55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.964 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b740a3-dee6-454a-892a-6c6d25df61e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.985 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d41919f3-5272-4cce-9066-581e6242eec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295327, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f28171a-c223-400c-aefb-756c2dce189d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479591, 'tstamp': 479591}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295329, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479595, 'tstamp': 479595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295329, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.007 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8b56bc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.014 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8b56bc-40, col_values=(('external_ids', {'iface-id': 'f3398af3-7278-4ca0-adcc-f3bb48f595e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.216 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088355.2155972, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.216 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.236 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.241 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088355.2157326, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.241 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.256 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.258 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 334 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.9 MiB/s wr, 328 op/s
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.346 254096 DEBUG nova.compute.manager [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.347 254096 DEBUG oslo_concurrency.lockutils [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.347 254096 DEBUG oslo_concurrency.lockutils [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.348 254096 DEBUG oslo_concurrency.lockutils [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.348 254096 DEBUG nova.compute.manager [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Processing event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.348 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.351 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088355.3510094, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.351 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.353 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.355 254096 INFO nova.virt.libvirt.driver [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance spawned successfully.#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.355 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.365 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Creating config drive at /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.369 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuweqdtsg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.401 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.406 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.406 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.407 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.407 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.408 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.408 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.413 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.488 254096 INFO nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.489 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.502 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuweqdtsg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.528 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.531 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.567 254096 DEBUG nova.network.neutron [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updated VIF entry in instance network info cache for port 2a798aec-112b-42d0-9128-639b456b201e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.568 254096 DEBUG nova.network.neutron [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updating instance_info_cache with network_info: [{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.573 254096 INFO nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 9.06 seconds to build instance.#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.585 254096 DEBUG oslo_concurrency.lockutils [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.586 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.697 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.697 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deleting local config drive /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:35 np0005535469 kernel: tap2a798aec-11: entered promiscuous mode
Nov 25 11:32:35 np0005535469 NetworkManager[48891]: <info>  [1764088355.7497] manager: (tap2a798aec-11): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Nov 25 11:32:35 np0005535469 systemd-udevd[295374]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:35Z|00217|binding|INFO|Claiming lport 2a798aec-112b-42d0-9128-639b456b201e for this chassis.
Nov 25 11:32:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:35Z|00218|binding|INFO|2a798aec-112b-42d0-9128-639b456b201e: Claiming fa:16:3e:af:3e:5a 10.100.0.14
Nov 25 11:32:35 np0005535469 NetworkManager[48891]: <info>  [1764088355.7675] device (tap2a798aec-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:32:35 np0005535469 NetworkManager[48891]: <info>  [1764088355.7702] device (tap2a798aec-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.770 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3e:5a 10.100.0.14'], port_security=['fa:16:3e:af:3e:5a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2724cb7d-6b8e-4861-ae3d-72e34da31fe5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a798aec-112b-42d0-9128-639b456b201e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.772 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a798aec-112b-42d0-9128-639b456b201e in datapath 50e18e22-7850-458c-8d66-5932e0495377 bound to our chassis#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.773 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50e18e22-7850-458c-8d66-5932e0495377#033[00m
Nov 25 11:32:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:35Z|00219|binding|INFO|Setting lport 2a798aec-112b-42d0-9128-639b456b201e ovn-installed in OVS
Nov 25 11:32:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:35Z|00220|binding|INFO|Setting lport 2a798aec-112b-42d0-9128-639b456b201e up in Southbound
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:35 np0005535469 nova_compute[254092]: 2025-11-25 16:32:35.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad676319-6fe9-403e-ac59-b193fa83b2fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.787 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50e18e22-71 in ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.789 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50e18e22-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52a9595e-1644-4b87-b360-3e2d5c260751]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.790 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[367dcdcf-72fd-4c91-b3fc-a007a9f95cfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 systemd-machined[216343]: New machine qemu-40-instance-00000022.
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.812 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6e2afe-0939-41ff-acbf-04dba9a432c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 systemd[1]: Started Virtual Machine qemu-40-instance-00000022.
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.838 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a76a3a-c2a4-42fe-aad5-68916c4d767c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.871 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[96fcc073-fa7c-4bae-aff3-eedb823f4be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 NetworkManager[48891]: <info>  [1764088355.8807] manager: (tap50e18e22-70): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e91d90-e251-4f79-9142-cc503bdb0ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 systemd-udevd[295427]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.941 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd75704-5aa4-4ce2-acd8-8c4d6a0e347a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.944 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[80886324-85bd-4c96-933b-4d0382f60326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:35 np0005535469 NetworkManager[48891]: <info>  [1764088355.9682] device (tap50e18e22-70): carrier: link connected
Nov 25 11:32:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.975 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eabd88f2-abe3-484f-9b36-ba054d488040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb040ed5-c824-40c7-b9dc-734dbf569cf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481355, 'reachable_time': 28567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295463, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.029 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbe71e0-9026-405e-92fa-3f33f3c2e2f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:147d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481355, 'tstamp': 481355}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295464, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.051 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Successfully updated port: f95d61ca-d58c-4f07-879f-5e5412976e42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be70a05b-b77d-416b-b288-999f9801987e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481355, 'reachable_time': 28567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295465, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.072 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.073 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.073 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.105 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a9ebc7-1814-4713-9ff4-1bbb664ce65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.139 254096 DEBUG nova.compute.manager [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.140 254096 DEBUG nova.compute.manager [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.140 254096 DEBUG oslo_concurrency.lockutils [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[966910e7-ba4f-43b4-8993-29d6bece2c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.211 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e18e22-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:36 np0005535469 kernel: tap50e18e22-70: entered promiscuous mode
Nov 25 11:32:36 np0005535469 NetworkManager[48891]: <info>  [1764088356.2149] manager: (tap50e18e22-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.218 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50e18e22-70, col_values=(('external_ids', {'iface-id': '5b591f76-4d04-4b30-9182-d359be87068c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.229 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:36Z|00221|binding|INFO|Releasing lport 5b591f76-4d04-4b30-9182-d359be87068c from this chassis (sb_readonly=0)
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.234 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4121da8b-7a3f-4f9f-ae38-85bf27484ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.245 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-50e18e22-7850-458c-8d66-5932e0495377
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.249 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'env', 'PROCESS_TAG=haproxy-50e18e22-7850-458c-8d66-5932e0495377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50e18e22-7850-458c-8d66-5932e0495377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:32:36 np0005535469 nova_compute[254092]: 2025-11-25 16:32:36.253 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:36 np0005535469 podman[295495]: 2025-11-25 16:32:36.724750253 +0000 UTC m=+0.056440160 container create a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:32:36 np0005535469 systemd[1]: Started libpod-conmon-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope.
Nov 25 11:32:36 np0005535469 podman[295495]: 2025-11-25 16:32:36.692888 +0000 UTC m=+0.024577927 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:32:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:32:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70c021b67aa7e63f187afb6ab9f2ef19c3ce9218078adb4a42a3a569b52662a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:32:36 np0005535469 podman[295495]: 2025-11-25 16:32:36.826586592 +0000 UTC m=+0.158276499 container init a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:32:36 np0005535469 podman[295495]: 2025-11-25 16:32:36.848084063 +0000 UTC m=+0.179773990 container start a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:32:36 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : New worker (295515) forked
Nov 25 11:32:36 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : Loading success.
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.091 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088357.0906174, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.091 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.120 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088357.0911803, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.120 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.144 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.146 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.149 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.168 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.169 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.169 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance network_info: |[{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.170 254096 DEBUG oslo_concurrency.lockutils [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.170 254096 DEBUG nova.network.neutron [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.172 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start _get_guest_xml network_info=[{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.176 254096 WARNING nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.181 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.181 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.187 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.187 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.187 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.192 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 353 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 327 op/s
Nov 25 11:32:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755106099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.634 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.634 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] No waiting events found dispatching network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 WARNING nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received unexpected event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd for instance with vm_state active and task_state None.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Processing event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.637 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.637 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] No waiting events found dispatching network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.637 254096 WARNING nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received unexpected event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.638 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.641 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088357.6410172, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.641 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.643 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.662 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.665 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.694 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.695 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.702 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.704 254096 INFO nova.virt.libvirt.driver [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance spawned successfully.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.704 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.725 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.730 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.731 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.731 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.731 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.732 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.732 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.784 254096 INFO nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 9.45 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.785 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.845 254096 INFO nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 10.76 seconds to build instance.#033[00m
Nov 25 11:32:37 np0005535469 nova_compute[254092]: 2025-11-25 16:32:37.862 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271733712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.168 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.169 254096 DEBUG nova.virt.libvirt.vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.169 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.170 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.171 254096 DEBUG nova.objects.instance [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.184 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <uuid>1d318e56-4a8c-4806-aa87-e837708f2a1f</uuid>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <name>instance-00000024</name>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:name>tempest-tempest.common.compute-instance-822239868</nova:name>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:37</nova:creationTime>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <nova:port uuid="f95d61ca-d58c-4f07-879f-5e5412976e42">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <entry name="serial">1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <entry name="uuid">1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d7:b8:12"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <target dev="tapf95d61ca-d5"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log" append="off"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:38 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:38 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:38 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:38 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Preparing to wait for external event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.186 254096 DEBUG nova.virt.libvirt.vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.186 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.187 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.187 254096 DEBUG os_vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.188 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.188 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.191 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf95d61ca-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.191 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf95d61ca-d5, col_values=(('external_ids', {'iface-id': 'f95d61ca-d58c-4f07-879f-5e5412976e42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:b8:12', 'vm-uuid': '1d318e56-4a8c-4806-aa87-e837708f2a1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:38 np0005535469 NetworkManager[48891]: <info>  [1764088358.1938] manager: (tapf95d61ca-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.200 254096 INFO os_vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5')#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.254 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.255 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.255 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:d7:b8:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.255 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Using config drive#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.277 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.538 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Creating config drive at /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.544 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqv7kf432 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.693 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqv7kf432" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.727 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.731 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.838 254096 DEBUG nova.network.neutron [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.839 254096 DEBUG nova.network.neutron [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.855 254096 DEBUG oslo_concurrency.lockutils [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.922 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.922 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Deleting local config drive /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:32:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:38 np0005535469 kernel: tapf95d61ca-d5: entered promiscuous mode
Nov 25 11:32:38 np0005535469 NetworkManager[48891]: <info>  [1764088358.9732] manager: (tapf95d61ca-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Nov 25 11:32:38 np0005535469 nova_compute[254092]: 2025-11-25 16:32:38.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:38Z|00222|binding|INFO|Claiming lport f95d61ca-d58c-4f07-879f-5e5412976e42 for this chassis.
Nov 25 11:32:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:38Z|00223|binding|INFO|f95d61ca-d58c-4f07-879f-5e5412976e42: Claiming fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:38.998 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b8:12 10.100.0.12'], port_security=['fa:16:3e:d7:b8:12 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f95d61ca-d58c-4f07-879f-5e5412976e42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.000 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f95d61ca-d58c-4f07-879f-5e5412976e42 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.002 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:32:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:39Z|00224|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 ovn-installed in OVS
Nov 25 11:32:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:39Z|00225|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 up in Southbound
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:39 np0005535469 systemd-udevd[295701]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:32:39 np0005535469 systemd-machined[216343]: New machine qemu-41-instance-00000024.
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.027 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cc34f2-3490-40bf-932e-83869bc4ae1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:39 np0005535469 systemd[1]: Started Virtual Machine qemu-41-instance-00000024.
Nov 25 11:32:39 np0005535469 NetworkManager[48891]: <info>  [1764088359.0357] device (tapf95d61ca-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:32:39 np0005535469 NetworkManager[48891]: <info>  [1764088359.0371] device (tapf95d61ca-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[71b40e96-aa5c-45b9-9cb6-1848e46e422b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.074 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[500ddd29-c2b1-43e1-8458-744a52245df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.110 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb221b10-415c-486e-b8f2-c4f10c6c2794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.140 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[193e9e66-f3de-46b5-90ab-3448b0a1a3b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295715, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.161 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc4fbbc-596e-4f94-9df7-0b8cb32cf383]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295716, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295716, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.163 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.165 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.166 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.166 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.166 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 353 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 327 op/s
Nov 25 11:32:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:39Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:7b:ca 10.100.0.6
Nov 25 11:32:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:39Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:7b:ca 10.100.0.6
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.582 254096 DEBUG nova.compute.manager [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG nova.compute.manager [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG oslo_concurrency.lockutils [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG oslo_concurrency.lockutils [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG nova.network.neutron [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.593 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088359.5920138, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.593 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.628 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.633 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088359.5921452, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.634 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.654 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.659 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:32:39 np0005535469 nova_compute[254092]: 2025-11-25 16:32:39.683 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:32:40
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'images']
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.315 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.316 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.316 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.317 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.317 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.318 254096 INFO nova.compute.manager [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Terminating instance#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.320 254096 DEBUG nova.compute.manager [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:32:40 np0005535469 kernel: tap2a798aec-11 (unregistering): left promiscuous mode
Nov 25 11:32:40 np0005535469 NetworkManager[48891]: <info>  [1764088360.3625] device (tap2a798aec-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:40Z|00226|binding|INFO|Releasing lport 2a798aec-112b-42d0-9128-639b456b201e from this chassis (sb_readonly=0)
Nov 25 11:32:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:40Z|00227|binding|INFO|Setting lport 2a798aec-112b-42d0-9128-639b456b201e down in Southbound
Nov 25 11:32:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:40Z|00228|binding|INFO|Removing iface tap2a798aec-11 ovn-installed in OVS
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.378 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3e:5a 10.100.0.14'], port_security=['fa:16:3e:af:3e:5a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2724cb7d-6b8e-4861-ae3d-72e34da31fe5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a798aec-112b-42d0-9128-639b456b201e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.379 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a798aec-112b-42d0-9128-639b456b201e in datapath 50e18e22-7850-458c-8d66-5932e0495377 unbound from our chassis#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.380 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50e18e22-7850-458c-8d66-5932e0495377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9378c406-79f4-4040-80c1-034d2be85265]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.382 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace which is not needed anymore#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.397 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 25 11:32:40 np0005535469 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000022.scope: Consumed 3.833s CPU time.
Nov 25 11:32:40 np0005535469 systemd-machined[216343]: Machine qemu-40-instance-00000022 terminated.
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:32:40 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : haproxy version is 2.8.14-c23fe91
Nov 25 11:32:40 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : path to executable is /usr/sbin/haproxy
Nov 25 11:32:40 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [WARNING]  (295513) : Exiting Master process...
Nov 25 11:32:40 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [WARNING]  (295513) : Exiting Master process...
Nov 25 11:32:40 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [ALERT]    (295513) : Current worker (295515) exited with code 143 (Terminated)
Nov 25 11:32:40 np0005535469 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [WARNING]  (295513) : All workers exited. Exiting... (0)
Nov 25 11:32:40 np0005535469 systemd[1]: libpod-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope: Deactivated successfully.
Nov 25 11:32:40 np0005535469 conmon[295509]: conmon a6ca155eb1f434f8462f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope/container/memory.events
Nov 25 11:32:40 np0005535469 podman[295779]: 2025-11-25 16:32:40.532817823 +0000 UTC m=+0.046093130 container died a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:32:40 np0005535469 NetworkManager[48891]: <info>  [1764088360.5372] manager: (tap2a798aec-11): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.550 254096 INFO nova.virt.libvirt.driver [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance destroyed successfully.#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.550 254096 DEBUG nova.objects.instance [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'resources' on Instance uuid 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.566 254096 DEBUG nova.virt.libvirt.vif [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1220833469',id=34,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-g9yti154',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:37Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=2724cb7d-6b8e-4861-ae3d-72e34da31fe5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.566 254096 DEBUG nova.network.os_vif_util [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.567 254096 DEBUG nova.network.os_vif_util [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.567 254096 DEBUG os_vif [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.570 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a798aec-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68-userdata-shm.mount: Deactivated successfully.
Nov 25 11:32:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b70c021b67aa7e63f187afb6ab9f2ef19c3ce9218078adb4a42a3a569b52662a-merged.mount: Deactivated successfully.
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.581 254096 INFO os_vif [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11')#033[00m
Nov 25 11:32:40 np0005535469 podman[295779]: 2025-11-25 16:32:40.597756152 +0000 UTC m=+0.111031449 container cleanup a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:32:40 np0005535469 systemd[1]: libpod-conmon-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope: Deactivated successfully.
Nov 25 11:32:40 np0005535469 podman[295831]: 2025-11-25 16:32:40.682004054 +0000 UTC m=+0.056750788 container remove a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.688 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0cf3cc-cfc0-4df2-a135-225948deb479]: (4, ('Tue Nov 25 04:32:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68)\na6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68\nTue Nov 25 04:32:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68)\na6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.690 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a89221f-c9e6-4625-9ad7-923c4f99d8f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.691 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 kernel: tap50e18e22-70: left promiscuous mode
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0804e9a-a673-4a35-bf0f-33e5feccc357]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[584e1d2b-9d57-49af-b8f5-b0eb56aaa7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.728 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[299d21c2-0e2c-49de-bf9c-0c3ac9053a08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8508c59-5618-4c0c-a826-e7439e2d8327]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481345, 'reachable_time': 38178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295846, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 systemd[1]: run-netns-ovnmeta\x2d50e18e22\x2d7850\x2d458c\x2d8d66\x2d5932e0495377.mount: Deactivated successfully.
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.751 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:32:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.751 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[700382a7-8a85-4d3f-8e9c-e6ce43a9ae97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.916 254096 DEBUG nova.compute.manager [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-unplugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG oslo_concurrency.lockutils [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG oslo_concurrency.lockutils [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG oslo_concurrency.lockutils [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG nova.compute.manager [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] No waiting events found dispatching network-vif-unplugged-2a798aec-112b-42d0-9128-639b456b201e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG nova.compute.manager [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-unplugged-2a798aec-112b-42d0-9128-639b456b201e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.982 254096 DEBUG nova.network.neutron [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.982 254096 DEBUG nova.network.neutron [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.988 254096 INFO nova.virt.libvirt.driver [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deleting instance files /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_del#033[00m
Nov 25 11:32:40 np0005535469 nova_compute[254092]: 2025-11-25 16:32:40.988 254096 INFO nova.virt.libvirt.driver [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deletion of /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_del complete#033[00m
Nov 25 11:32:41 np0005535469 nova_compute[254092]: 2025-11-25 16:32:41.005 254096 DEBUG oslo_concurrency.lockutils [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:41 np0005535469 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 INFO nova.compute.manager [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:32:41 np0005535469 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 DEBUG oslo.service.loopingcall [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:32:41 np0005535469 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 DEBUG nova.compute.manager [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:32:41 np0005535469 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 DEBUG nova.network.neutron [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:32:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 381 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 11 MiB/s wr, 537 op/s
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.272 254096 DEBUG nova.network.neutron [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.342 254096 INFO nova.compute.manager [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 1.30 seconds to deallocate network for instance.#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.444 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.445 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.455 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Processing event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.458 254096 WARNING nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.459 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.465 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088362.4633458, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.465 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Resumed (Lifecycle Event)
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.466 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.470 254096 INFO nova.virt.libvirt.driver [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance spawned successfully.
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.470 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.495 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.502 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.502 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.508 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.539 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.564 254096 INFO nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Took 8.90 seconds to spawn the instance on the hypervisor.
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.564 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.591 254096 DEBUG oslo_concurrency.processutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.626 254096 INFO nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Took 10.01 seconds to build instance.
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.640 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:42 np0005535469 nova_compute[254092]: 2025-11-25 16:32:42.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.002 254096 DEBUG nova.compute.manager [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG oslo_concurrency.lockutils [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG oslo_concurrency.lockutils [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG oslo_concurrency.lockutils [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG nova.compute.manager [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] No waiting events found dispatching network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.004 254096 WARNING nova.compute.manager [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received unexpected event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e for instance with vm_state deleted and task_state None.
Nov 25 11:32:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925592700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.045 254096 DEBUG oslo_concurrency.processutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.053 254096 DEBUG nova.compute.provider_tree [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.065 254096 DEBUG nova.scheduler.client.report [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.085 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.107 254096 INFO nova.scheduler.client.report [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Deleted allocations for instance 2724cb7d-6b8e-4861-ae3d-72e34da31fe5
Nov 25 11:32:43 np0005535469 nova_compute[254092]: 2025-11-25 16:32:43.154 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 381 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.7 MiB/s wr, 378 op/s
Nov 25 11:32:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:44 np0005535469 nova_compute[254092]: 2025-11-25 16:32:44.382 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 11:32:45 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 11:32:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 357 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.1 MiB/s wr, 497 op/s
Nov 25 11:32:45 np0005535469 nova_compute[254092]: 2025-11-25 16:32:45.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:46 np0005535469 nova_compute[254092]: 2025-11-25 16:32:46.340 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-deleted-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:46 np0005535469 nova_compute[254092]: 2025-11-25 16:32:46.341 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:46 np0005535469 nova_compute[254092]: 2025-11-25 16:32:46.342 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:32:46 np0005535469 nova_compute[254092]: 2025-11-25 16:32:46.342 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:32:46 np0005535469 nova_compute[254092]: 2025-11-25 16:32:46.343 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:32:46 np0005535469 nova_compute[254092]: 2025-11-25 16:32:46.343 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:32:47 np0005535469 nova_compute[254092]: 2025-11-25 16:32:47.265 254096 DEBUG nova.compute.manager [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:47 np0005535469 nova_compute[254092]: 2025-11-25 16:32:47.265 254096 DEBUG nova.compute.manager [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:32:47 np0005535469 nova_compute[254092]: 2025-11-25 16:32:47.266 254096 DEBUG oslo_concurrency.lockutils [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:32:47 np0005535469 nova_compute[254092]: 2025-11-25 16:32:47.266 254096 DEBUG oslo_concurrency.lockutils [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:32:47 np0005535469 nova_compute[254092]: 2025-11-25 16:32:47.266 254096 DEBUG nova.network.neutron [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:32:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 366 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 4.6 MiB/s wr, 407 op/s
Nov 25 11:32:47 np0005535469 nova_compute[254092]: 2025-11-25 16:32:47.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:47 np0005535469 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 25 11:32:47 np0005535469 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000023.scope: Consumed 13.213s CPU time.
Nov 25 11:32:47 np0005535469 systemd-machined[216343]: Machine qemu-38-instance-00000023 terminated.
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.281 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.281 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.297 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing instance network info cache due to event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.404 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance shutdown successfully after 14 seconds.
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.411 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance destroyed successfully.
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.418 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance destroyed successfully.
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.882 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting instance files /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del
Nov 25 11:32:48 np0005535469 nova_compute[254092]: 2025-11-25 16:32:48.883 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deletion of /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del complete
Nov 25 11:32:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.040 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.042 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating image(s)
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.063 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.090 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.114 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.118 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.193 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.195 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.196 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.197 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.218 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.220 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 366 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.3 MiB/s wr, 357 op/s
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.336 254096 DEBUG nova.network.neutron [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.337 254096 DEBUG nova.network.neutron [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.357 254096 DEBUG nova.compute.manager [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.358 254096 DEBUG nova.compute.manager [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.358 254096 DEBUG oslo_concurrency.lockutils [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.358 254096 DEBUG oslo_concurrency.lockutils [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.359 254096 DEBUG nova.network.neutron [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.366 254096 DEBUG oslo_concurrency.lockutils [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.483 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.531 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] resizing rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:32:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:49Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:8a:c5 10.100.0.9
Nov 25 11:32:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:49Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:8a:c5 10.100.0.9
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.605 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.605 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Ensure instance console log exists: /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.607 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.613 254096 WARNING nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.621 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.621 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.626 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.626 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.626 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:49 np0005535469 nova_compute[254092]: 2025-11-25 16:32:49.644 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2485257155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.122 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.142 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.145 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.170 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updated VIF entry in instance network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.171 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.186 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.470 254096 DEBUG nova.compute.manager [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.470 254096 DEBUG nova.compute.manager [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.471 254096 DEBUG oslo_concurrency.lockutils [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:32:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743957538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.583 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.586 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <uuid>616ec95d-6c7d-420e-991d-3cbc11339768</uuid>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <name>instance-00000023</name>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerShowV257Test-server-2017219999</nova:name>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:32:49</nova:creationTime>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:user uuid="204d6790ef4644f6a11d8afd611b7f8d">tempest-ServerShowV257Test-1749590920-project-member</nova:user>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <nova:project uuid="f7ec8b6f4599458ebb55ba5d9a7463c3">tempest-ServerShowV257Test-1749590920</nova:project>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <entry name="serial">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <entry name="uuid">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk.config">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log" append="off"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:32:50 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:32:50 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:32:50 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:32:50 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.640 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.641 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.642 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Using config drive
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.664 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.686 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.721 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'keypairs' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.865 254096 DEBUG nova.network.neutron [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.865 254096 DEBUG nova.network.neutron [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.886 254096 DEBUG oslo_concurrency.lockutils [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.887 254096 DEBUG oslo_concurrency.lockutils [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:32:50 np0005535469 nova_compute[254092]: 2025-11-25 16:32:50.887 254096 DEBUG nova.network.neutron [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029655526592107834 of space, bias 1.0, pg target 0.8896657977632351 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.128 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating config drive at /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.138 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpm99syc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.272 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpm99syc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.295 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.299 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:32:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 372 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 8.2 MiB/s wr, 484 op/s
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.338 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.339 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.340 254096 DEBUG nova.objects.instance [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.434 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.435 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting local config drive /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config because it was imported into RBD.
Nov 25 11:32:51 np0005535469 systemd-machined[216343]: New machine qemu-42-instance-00000023.
Nov 25 11:32:51 np0005535469 systemd[1]: Started Virtual Machine qemu-42-instance-00000023.
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.818 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 616ec95d-6c7d-420e-991d-3cbc11339768 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.819 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088371.8182952, 616ec95d-6c7d-420e-991d-3cbc11339768 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.820 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Resumed (Lifecycle Event)
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.827 254096 DEBUG nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.827 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.830 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance spawned successfully.
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.831 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.838 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.842 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.846 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.846 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.847 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.847 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.847 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.848 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.873 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.874 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088371.827008, 616ec95d-6c7d-420e-991d-3cbc11339768 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.874 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Started (Lifecycle Event)
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.894 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.896 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.905 254096 DEBUG nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.931 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.962 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.963 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:32:51 np0005535469 nova_compute[254092]: 2025-11-25 16:32:51.963 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 11:32:52 np0005535469 nova_compute[254092]: 2025-11-25 16:32:52.024 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:32:52 np0005535469 nova_compute[254092]: 2025-11-25 16:32:52.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:32:53 np0005535469 nova_compute[254092]: 2025-11-25 16:32:53.216 254096 DEBUG nova.objects.instance [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:32:53 np0005535469 nova_compute[254092]: 2025-11-25 16:32:53.230 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:32:53 np0005535469 nova_compute[254092]: 2025-11-25 16:32:53.317 254096 DEBUG nova.network.neutron [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:32:53 np0005535469 nova_compute[254092]: 2025-11-25 16:32:53.318 254096 DEBUG nova.network.neutron [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:32:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 372 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 274 op/s
Nov 25 11:32:53 np0005535469 nova_compute[254092]: 2025-11-25 16:32:53.333 254096 DEBUG oslo_concurrency.lockutils [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:53 np0005535469 nova_compute[254092]: 2025-11-25 16:32:53.947 254096 DEBUG nova.policy [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.145 254096 DEBUG nova.compute.manager [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.145 254096 DEBUG nova.compute.manager [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing instance network info cache due to event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.146 254096 DEBUG oslo_concurrency.lockutils [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.146 254096 DEBUG oslo_concurrency.lockutils [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.146 254096 DEBUG nova.network.neutron [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.231 254096 DEBUG nova.compute.manager [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG nova.compute.manager [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG oslo_concurrency.lockutils [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG oslo_concurrency.lockutils [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG nova.network.neutron [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.400 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.402 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.402 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.402 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.403 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.404 254096 INFO nova.compute.manager [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Terminating instance#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.405 254096 DEBUG nova.compute.manager [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:32:54 np0005535469 kernel: tap5cf8fe87-4c (unregistering): left promiscuous mode
Nov 25 11:32:54 np0005535469 NetworkManager[48891]: <info>  [1764088374.4743] device (tap5cf8fe87-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:54Z|00229|binding|INFO|Releasing lport 5cf8fe87-4cac-403f-8611-0ddb37516abd from this chassis (sb_readonly=0)
Nov 25 11:32:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:54Z|00230|binding|INFO|Setting lport 5cf8fe87-4cac-403f-8611-0ddb37516abd down in Southbound
Nov 25 11:32:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:54Z|00231|binding|INFO|Removing iface tap5cf8fe87-4c ovn-installed in OVS
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.496 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:8a:c5 10.100.0.9'], port_security=['fa:16:3e:8c:8a:c5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cf8fe87-4cac-403f-8611-0ddb37516abd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.497 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cf8fe87-4cac-403f-8611-0ddb37516abd in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 unbound from our chassis#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.498 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8b56bc-492b-4082-b8de-60d496652da7#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.508 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.515 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a125578c-4d5b-47f7-b6c9-7565791c89f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:54 np0005535469 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 25 11:32:54 np0005535469 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000021.scope: Consumed 13.207s CPU time.
Nov 25 11:32:54 np0005535469 systemd-machined[216343]: Machine qemu-39-instance-00000021 terminated.
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.541 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6c52299a-3359-4816-b402-5e0a3b30bb5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.545 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[22217619-1070-47a6-a2d9-f37db149e3e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.572 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[86d22f75-59f2-498e-802c-e33f5601ced9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.574 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "616ec95d-6c7d-420e-991d-3cbc11339768" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.575 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.576 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "616ec95d-6c7d-420e-991d-3cbc11339768-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.576 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.576 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.578 254096 INFO nova.compute.manager [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Terminating instance#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.579 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "refresh_cache-616ec95d-6c7d-420e-991d-3cbc11339768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.579 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquired lock "refresh_cache-616ec95d-6c7d-420e-991d-3cbc11339768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.579 254096 DEBUG nova.network.neutron [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdcef5b-ae16-4f0b-beea-bf35d41786fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296246, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.606 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbc2765-dfb3-4d7b-8c28-85e7a93e835c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479591, 'tstamp': 479591}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296247, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479595, 'tstamp': 479595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296247, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.609 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.619 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8b56bc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.620 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.621 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8b56bc-40, col_values=(('external_ids', {'iface-id': 'f3398af3-7278-4ca0-adcc-f3bb48f595e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.621 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.647 254096 INFO nova.virt.libvirt.driver [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance destroyed successfully.#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.648 254096 DEBUG nova.objects.instance [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'resources' on Instance uuid a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.661 254096 DEBUG nova.virt.libvirt.vif [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1769139174',display_name='tempest-FloatingIPsAssociationTestJSON-server-1769139174',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1769139174',id=33,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-f0yptx40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:35Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.662 254096 DEBUG nova.network.os_vif_util [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.662 254096 DEBUG nova.network.os_vif_util [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.663 254096 DEBUG os_vif [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.666 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cf8fe87-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.684 254096 INFO os_vif [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c')#033[00m
Nov 25 11:32:54 np0005535469 nova_compute[254092]: 2025-11-25 16:32:54.765 254096 DEBUG nova.network.neutron [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.021 254096 INFO nova.virt.libvirt.driver [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deleting instance files /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_del#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.022 254096 INFO nova.virt.libvirt.driver [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deletion of /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_del complete#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.075 254096 INFO nova.compute.manager [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.076 254096 DEBUG oslo.service.loopingcall [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.077 254096 DEBUG nova.compute.manager [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.077 254096 DEBUG nova.network.neutron [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:32:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:55Z|00232|binding|INFO|Releasing lport f3398af3-7278-4ca0-adcc-f3bb48f595e9 from this chassis (sb_readonly=0)
Nov 25 11:32:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:55Z|00233|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 11:32:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:32:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63055523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:32:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:32:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63055523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 343 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 364 op/s
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.549 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088360.5478022, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.549 254096 INFO nova.compute.manager [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.566 254096 DEBUG nova.network.neutron [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.570 254096 DEBUG nova.compute.manager [None req-09e0b0e4-f77d-40a1-bc81-8e5db12a1159 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.578 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Releasing lock "refresh_cache-616ec95d-6c7d-420e-991d-3cbc11339768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.579 254096 DEBUG nova.compute.manager [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.587 254096 DEBUG nova.network.neutron [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updated VIF entry in instance network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.587 254096 DEBUG nova.network.neutron [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.603 254096 DEBUG oslo_concurrency.lockutils [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:55 np0005535469 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 25 11:32:55 np0005535469 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Consumed 4.110s CPU time.
Nov 25 11:32:55 np0005535469 systemd-machined[216343]: Machine qemu-42-instance-00000023 terminated.
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.777 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Successfully updated port: 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.799 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance destroyed successfully.#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.799 254096 DEBUG nova.objects.instance [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'resources' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:32:55 np0005535469 nova_compute[254092]: 2025-11-25 16:32:55.802 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:32:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:55Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 11:32:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:32:55Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.207 254096 INFO nova.virt.libvirt.driver [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting instance files /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.208 254096 INFO nova.virt.libvirt.driver [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deletion of /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del complete#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.292 254096 INFO nova.compute.manager [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.292 254096 DEBUG oslo.service.loopingcall [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.293 254096 DEBUG nova.compute.manager [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.293 254096 DEBUG nova.network.neutron [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.525 254096 DEBUG nova.network.neutron [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.526 254096 DEBUG nova.network.neutron [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:56 np0005535469 podman[296302]: 2025-11-25 16:32:56.646391426 +0000 UTC m=+0.060162641 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 11:32:56 np0005535469 podman[296301]: 2025-11-25 16:32:56.651412502 +0000 UTC m=+0.065180857 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:32:56 np0005535469 podman[296303]: 2025-11-25 16:32:56.668577177 +0000 UTC m=+0.080843180 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.695 254096 DEBUG oslo_concurrency.lockutils [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.696 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.696 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.862 254096 DEBUG nova.network.neutron [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.877 254096 DEBUG nova.network.neutron [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.898 254096 DEBUG nova.network.neutron [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.899 254096 INFO nova.compute.manager [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 0.61 seconds to deallocate network for instance.#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.936 254096 INFO nova.compute.manager [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 1.86 seconds to deallocate network for instance.#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.946 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.946 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.972 254096 WARNING nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it#033[00m
Nov 25 11:32:56 np0005535469 nova_compute[254092]: 2025-11-25 16:32:56.983 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.071 254096 DEBUG oslo_concurrency.processutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 300 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.7 MiB/s wr, 314 op/s
Nov 25 11:32:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/28605362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.516 254096 DEBUG oslo_concurrency.processutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.523 254096 DEBUG nova.compute.provider_tree [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.544 254096 DEBUG nova.scheduler.client.report [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.573 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.607 254096 INFO nova.scheduler.client.report [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Deleted allocations for instance 616ec95d-6c7d-420e-991d-3cbc11339768#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.687 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.687 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.705 254096 DEBUG oslo_concurrency.processutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.735 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:57 np0005535469 nova_compute[254092]: 2025-11-25 16:32:57.738 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573068817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.170 254096 DEBUG oslo_concurrency.processutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.176 254096 DEBUG nova.compute.provider_tree [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.188 254096 DEBUG nova.scheduler.client.report [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.208 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.210 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.211 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.211 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.211 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.264 254096 INFO nova.scheduler.client.report [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Deleted allocations for instance a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.322 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:32:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164591214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.636 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.703 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.703 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.708 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.708 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.712 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.712 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:32:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.990 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.992 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3702MB free_disk=59.84079360961914GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.992 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:32:58 np0005535469 nova_compute[254092]: 2025-11-25 16:32:58.993 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.061 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 9bd4d655-c683-4433-a739-168946211a75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1d318e56-4a8c-4806-aa87-e837708f2a1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.157 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:32:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 300 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 286 op/s
Nov 25 11:32:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:32:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964841470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.646 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.654 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.670 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.695 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:32:59 np0005535469 nova_compute[254092]: 2025-11-25 16:32:59.696 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.322 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-unplugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.323 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.323 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.323 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] No waiting events found dispatching network-vif-unplugged-5cf8fe87-4cac-403f-8611-0ddb37516abd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 WARNING nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received unexpected event network-vif-unplugged-5cf8fe87-4cac-403f-8611-0ddb37516abd for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.325 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.325 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] No waiting events found dispatching network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.325 254096 WARNING nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received unexpected event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.383 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.397 254096 DEBUG nova.compute.manager [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.398 254096 DEBUG nova.compute.manager [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.398 254096 DEBUG oslo_concurrency.lockutils [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.402 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.403 254096 DEBUG oslo_concurrency.lockutils [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.403 254096 DEBUG nova.network.neutron [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.407 254096 DEBUG nova.virt.libvirt.vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.407 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.408 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.408 254096 DEBUG os_vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.409 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.410 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.412 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ce87d5c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.413 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ce87d5c-fb, col_values=(('external_ids', {'iface-id': '0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:a3:51', 'vm-uuid': '800c66e3-ee9f-4766-92f2-ecda5671cde3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 NetworkManager[48891]: <info>  [1764088380.4157] manager: (tap0ce87d5c-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.424 254096 INFO os_vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb')#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.424 254096 DEBUG nova.virt.libvirt.vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.425 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.425 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.429 254096 DEBUG nova.virt.libvirt.guest [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <target dev="tap0ce87d5c-fb"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:33:00 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 11:33:00 np0005535469 kernel: tap0ce87d5c-fb: entered promiscuous mode
Nov 25 11:33:00 np0005535469 NetworkManager[48891]: <info>  [1764088380.4442] manager: (tap0ce87d5c-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Nov 25 11:33:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:00Z|00234|binding|INFO|Claiming lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for this chassis.
Nov 25 11:33:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:00Z|00235|binding|INFO|0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9: Claiming fa:16:3e:bc:a3:51 10.100.0.11
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.458 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:51 10.100.0.11'], port_security=['fa:16:3e:bc:a3:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.460 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.462 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d#033[00m
Nov 25 11:33:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:00Z|00236|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 ovn-installed in OVS
Nov 25 11:33:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:00Z|00237|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 up in Southbound
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.477 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c46bab26-7f35-40b8-9ed6-eb3b009dc132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:33:00 np0005535469 systemd-udevd[296461]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:33:00 np0005535469 NetworkManager[48891]: <info>  [1764088380.5093] device (tap0ce87d5c-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:33:00 np0005535469 NetworkManager[48891]: <info>  [1764088380.5108] device (tap0ce87d5c-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.517 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[474bed40-347e-4a41-9d67-3881125a7f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.522 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4e8ff5-3b66-469f-9e4e-b82aa7a49c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.534 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.535 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.535 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:f0:7b:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.535 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:bc:a3:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.552 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4441af-4e6a-4e17-863a-dfbf996cccdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.569 254096 DEBUG nova.virt.libvirt.guest [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:33:00</nova:creationTime>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 11:33:00 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 11:33:00 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:33:00 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:33:00 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:33:00 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.571 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b134e59d-b700-4398-8857-d11d9f01c436]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296468, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72f56a1b-7f21-42f0-b32c-f2f2f635e574]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296469, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296469, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.587 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.589 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.590 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.590 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:33:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.591 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:33:00 np0005535469 nova_compute[254092]: 2025-11-25 16:33:00.594 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:33:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 279 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.825 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.826 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.841 254096 DEBUG nova.objects.instance [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.883 254096 DEBUG nova.virt.libvirt.vif [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.883 254096 DEBUG nova.network.os_vif_util [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.884 254096 DEBUG nova.network.os_vif_util [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.888 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.890 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.894 254096 DEBUG nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap0ce87d5c-fb from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.894 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <target dev="tap0ce87d5c-fb"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.899 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.906 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface>not found in domain: <domain type='kvm' id='37'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <name>instance-00000020</name>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <uuid>800c66e3-ee9f-4766-92f2-ecda5671cde3</uuid>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:33:00</nova:creationTime>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <entry name='serial'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <entry name='uuid'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk' index='2'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config' index='1'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:f0:7b:ca'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target dev='tap82f63517-76'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:bc:a3:51'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target dev='tap0ce87d5c-fb'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='net1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c565,c717</label>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c717</imagelabel>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.907 254096 INFO nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap0ce87d5c-fb from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the persistent domain config.
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.907 254096 DEBUG nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap0ce87d5c-fb with device alias net1 from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.908 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]:  <target dev="tap0ce87d5c-fb"/>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:33:01 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 11:33:01 np0005535469 kernel: tap0ce87d5c-fb (unregistering): left promiscuous mode
Nov 25 11:33:01 np0005535469 NetworkManager[48891]: <info>  [1764088381.9682] device (tap0ce87d5c-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:33:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:01Z|00238|binding|INFO|Releasing lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 from this chassis (sb_readonly=0)
Nov 25 11:33:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:01Z|00239|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 down in Southbound
Nov 25 11:33:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:33:01Z|00240|binding|INFO|Removing iface tap0ce87d5c-fb ovn-installed in OVS
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:33:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:01.982 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:51 10.100.0.11'], port_security=['fa:16:3e:bc:a3:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:33:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:01.986 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 11:33:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:33:01.988 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 11:33:01 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.996 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088381.9961386, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 11:33:02 np0005535469 nova_compute[254092]: 2025-11-25 16:33:01.999 254096 DEBUG nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap0ce87d5c-fb with device alias net1 for instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 11:33:02 np0005535469 nova_compute[254092]: 2025-11-25 16:33:02.000 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 11:33:02 np0005535469 nova_compute[254092]: 2025-11-25 16:33:02.006 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface>not found in domain: <domain type='kvm' id='37'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <name>instance-00000020</name>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <uuid>800c66e3-ee9f-4766-92f2-ecda5671cde3</uuid>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:33:00</nova:creationTime>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:33:02 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <resource>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </resource>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <entry name='serial'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <entry name='uuid'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk' index='2'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config' index='1'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </controller>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:f0:7b:ca'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target dev='tap82f63517-76'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      </target>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/1'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <source path='/dev/pts/1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </console>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </input>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c565,c717</label>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c717</imagelabel>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 11:33:02 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.764 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.765 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Ensure instance console log exists: /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.766 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.766 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.766 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.817 254096 DEBUG nova.compute.manager [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.818 254096 DEBUG oslo_concurrency.lockutils [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.818 254096 DEBUG oslo_concurrency.lockutils [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.819 254096 DEBUG oslo_concurrency.lockutils [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.819 254096 DEBUG nova.compute.manager [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] No waiting events found dispatching network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.819 254096 WARNING nova.compute.manager [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received unexpected event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 for instance with vm_state active and task_state image_snapshot_pending.#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.823 254096 DEBUG nova.compute.manager [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:21 np0005535469 nova_compute[254092]: 2025-11-25 16:36:21.878 254096 INFO nova.compute.manager [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] instance snapshotting#033[00m
Nov 25 11:36:22 np0005535469 rsyslogd[1006]: imjournal: 9270 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.075 254096 DEBUG nova.network.neutron [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.114 254096 DEBUG oslo_concurrency.lockutils [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.115 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.116 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.131 254096 INFO nova.virt.libvirt.driver [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Beginning live snapshot process#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.257 254096 DEBUG nova.virt.libvirt.imagebackend [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.377 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.502 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(3f3380e7e7b34d8ea4b1ffb0a0af6e38) on rbd image(e549d4e8-b824-480b-b81a-83e2ea1eff12_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:36:22 np0005535469 nova_compute[254092]: 2025-11-25 16:36:22.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 25 11:36:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 25 11:36:22 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.008 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/e549d4e8-b824-480b-b81a-83e2ea1eff12_disk@3f3380e7e7b34d8ea4b1ffb0a0af6e38 to images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.082 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.114 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Updating instance_info_cache with network_info: [{"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.135 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.136 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance network_info: |[{"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.144 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start _get_guest_xml network_info=[{"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.154 254096 WARNING nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.161 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.163 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.166 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.166 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.167 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.169 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.170 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.171 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.171 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.171 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.172 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.172 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.173 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.173 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.174 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.174 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.178 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.310 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(3f3380e7e7b34d8ea4b1ffb0a0af6e38) on rbd image(e549d4e8-b824-480b-b81a-83e2ea1eff12_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:36:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 175 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 746 KiB/s wr, 82 op/s
Nov 25 11:36:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:36:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485946769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.631 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.649 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:23 np0005535469 nova_compute[254092]: 2025-11-25 16:36:23.653 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 25 11:36:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 25 11:36:23 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.024 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:36:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:36:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040185271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.161 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.164 254096 DEBUG nova.virt.libvirt.vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1994523502',display_name='tempest-DeleteServersTestJSON-server-1994523502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1994523502',id=47,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ahiysc8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSO
N-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:19Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=be1b8151-4e42-40db-813c-8b3b3e216949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.165 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.167 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.168 254096 DEBUG nova.objects.instance [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid be1b8151-4e42-40db-813c-8b3b3e216949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.186 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <uuid>be1b8151-4e42-40db-813c-8b3b3e216949</uuid>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <name>instance-0000002f</name>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersTestJSON-server-1994523502</nova:name>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:36:23</nova:creationTime>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <nova:port uuid="b2aa1e65-61e9-41b3-a2f4-400d17aafefb">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <entry name="serial">be1b8151-4e42-40db-813c-8b3b3e216949</entry>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <entry name="uuid">be1b8151-4e42-40db-813c-8b3b3e216949</entry>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/be1b8151-4e42-40db-813c-8b3b3e216949_disk">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/be1b8151-4e42-40db-813c-8b3b3e216949_disk.config">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:15:0c:af"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <target dev="tapb2aa1e65-61"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/console.log" append="off"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:36:24 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:36:24 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:36:24 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:36:24 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.193 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Preparing to wait for external event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.194 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.195 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.195 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.196 254096 DEBUG nova.virt.libvirt.vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1994523502',display_name='tempest-DeleteServersTestJSON-server-1994523502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1994523502',id=47,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ahiysc8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServ
ersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:19Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=be1b8151-4e42-40db-813c-8b3b3e216949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.197 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.198 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.199 254096 DEBUG os_vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.208 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2aa1e65-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.208 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2aa1e65-61, col_values=(('external_ids', {'iface-id': 'b2aa1e65-61e9-41b3-a2f4-400d17aafefb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:0c:af', 'vm-uuid': 'be1b8151-4e42-40db-813c-8b3b3e216949'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:24 np0005535469 NetworkManager[48891]: <info>  [1764088584.2118] manager: (tapb2aa1e65-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.221 254096 INFO os_vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61')#033[00m
Nov 25 11:36:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.290 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.291 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.291 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:15:0c:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.292 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Using config drive#033[00m
Nov 25 11:36:24 np0005535469 nova_compute[254092]: 2025-11-25 16:36:24.316 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 25 11:36:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 25 11:36:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 25 11:36:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 236 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.6 MiB/s wr, 284 op/s
Nov 25 11:36:25 np0005535469 nova_compute[254092]: 2025-11-25 16:36:25.696 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Creating config drive at /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config#033[00m
Nov 25 11:36:25 np0005535469 nova_compute[254092]: 2025-11-25 16:36:25.701 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xmid3zv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:25 np0005535469 nova_compute[254092]: 2025-11-25 16:36:25.842 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xmid3zv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:25 np0005535469 nova_compute[254092]: 2025-11-25 16:36:25.869 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:25 np0005535469 nova_compute[254092]: 2025-11-25 16:36:25.873 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config be1b8151-4e42-40db-813c-8b3b3e216949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.027 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config be1b8151-4e42-40db-813c-8b3b3e216949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.028 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deleting local config drive /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config because it was imported into RBD.#033[00m
Nov 25 11:36:26 np0005535469 NetworkManager[48891]: <info>  [1764088586.0941] manager: (tapb2aa1e65-61): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 25 11:36:26 np0005535469 kernel: tapb2aa1e65-61: entered promiscuous mode
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:26Z|00404|binding|INFO|Claiming lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb for this chassis.
Nov 25 11:36:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:26Z|00405|binding|INFO|b2aa1e65-61e9-41b3-a2f4-400d17aafefb: Claiming fa:16:3e:15:0c:af 10.100.0.14
Nov 25 11:36:26 np0005535469 systemd-udevd[307917]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:36:26 np0005535469 systemd-machined[216343]: New machine qemu-56-instance-0000002f.
Nov 25 11:36:26 np0005535469 NetworkManager[48891]: <info>  [1764088586.1824] device (tapb2aa1e65-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:36:26 np0005535469 NetworkManager[48891]: <info>  [1764088586.1831] device (tapb2aa1e65-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:36:26 np0005535469 systemd[1]: Started Virtual Machine qemu-56-instance-0000002f.
Nov 25 11:36:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:26Z|00406|binding|INFO|Setting lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb ovn-installed in OVS
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:26Z|00407|binding|INFO|Setting lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb up in Southbound
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.362 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:0c:af 10.100.0.14'], port_security=['fa:16:3e:15:0c:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be1b8151-4e42-40db-813c-8b3b3e216949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b2aa1e65-61e9-41b3-a2f4-400d17aafefb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.363 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b2aa1e65-61e9-41b3-a2f4-400d17aafefb in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.364 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac1f9e0-2934-498e-8b22-220cd94684c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.382 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.385 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.385 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c19caa73-2133-4afd-a886-397e44749982]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.386 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a91fd537-27b4-40be-9f5b-856a6df08a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.401 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[299fbdca-d415-405b-b80d-6187c191b862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.421 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[beebd969-999e-4009-95a9-d82fc3d919e0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.461 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.463 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4d882aa5-8b34-498a-8971-a4653b67a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 systemd-udevd[307920]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.469 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bff4ea92-f0ce-4889-bb38-8b15932f79a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 NetworkManager[48891]: <info>  [1764088586.4707] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.510 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[720c2175-d96f-42b8-80fe-3e2154ac2804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.515 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7b97cb6f-e73e-4732-ae43-9209f2b61e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 NetworkManager[48891]: <info>  [1764088586.5458] device (tape469a950-70): carrier: link connected
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.553 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3417f1a9-cb39-4643-9462-d2848e3ed914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.574 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4dada1f8-d0f9-4f64-8ad1-b1adee6dbd06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504413, 'reachable_time': 19143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307951, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.592 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b148a1b-61be-42bf-808f-5dcaf253276a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504413, 'tstamp': 504413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307959, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.618 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f2958c-ce62-4fe5-9d78-a1bcda9fad42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504413, 'reachable_time': 19143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307971, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.661 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0eecd0cf-7bfd-4db8-8c65-98e188c124e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.673 254096 INFO nova.virt.libvirt.driver [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Snapshot image upload complete#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.674 254096 INFO nova.compute.manager [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 4.79 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd3a109-0346-4204-8f5b-7d610ef0903b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.743 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.743 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.744 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:26 np0005535469 NetworkManager[48891]: <info>  [1764088586.7473] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Nov 25 11:36:26 np0005535469 kernel: tape469a950-70: entered promiscuous mode
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.752 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:26Z|00408|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.758 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa85f5-d898-403a-8ff2-19c198d3f65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.760 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:36:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.760 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.791 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088586.7909305, be1b8151-4e42-40db-813c-8b3b3e216949 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.791 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Started (Lifecycle Event)#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.809 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.813 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088586.7909877, be1b8151-4e42-40db-813c-8b3b3e216949 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.814 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.835 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.838 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:36:26 np0005535469 nova_compute[254092]: 2025-11-25 16:36:26.855 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.215 254096 DEBUG nova.compute.manager [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.215 254096 DEBUG oslo_concurrency.lockutils [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.216 254096 DEBUG oslo_concurrency.lockutils [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.216 254096 DEBUG oslo_concurrency.lockutils [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.216 254096 DEBUG nova.compute.manager [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Processing event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.217 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:36:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 261 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.434 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088587.43278, be1b8151-4e42-40db-813c-8b3b3e216949 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.436 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:36:27 np0005535469 podman[308027]: 2025-11-25 16:36:27.44995293 +0000 UTC m=+0.320320861 container create b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.452 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.458 254096 INFO nova.virt.libvirt.driver [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance spawned successfully.#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.458 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.476 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.484 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.491 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.492 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.492 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.493 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.493 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.494 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:27 np0005535469 systemd[1]: Started libpod-conmon-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39.scope.
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.513 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:36:27 np0005535469 podman[308027]: 2025-11-25 16:36:27.421026866 +0000 UTC m=+0.291394827 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:36:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:36:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865c78254137b37ca1978a730536ff10014696e33d25bc8a2e8e961241b00cca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:27 np0005535469 podman[308027]: 2025-11-25 16:36:27.546037107 +0000 UTC m=+0.416405078 container init b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.554 254096 INFO nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 8.39 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.554 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:27 np0005535469 podman[308027]: 2025-11-25 16:36:27.555312729 +0000 UTC m=+0.425680670 container start b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:36:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : New worker (308049) forked
Nov 25 11:36:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : Loading success.
Nov 25 11:36:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:27.614 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.614 254096 INFO nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 9.52 seconds to build instance.#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.635 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:27 np0005535469 nova_compute[254092]: 2025-11-25 16:36:27.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 25 11:36:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 25 11:36:29 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 25 11:36:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 261 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.694 254096 DEBUG nova.compute.manager [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.694 254096 DEBUG oslo_concurrency.lockutils [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.694 254096 DEBUG oslo_concurrency.lockutils [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.695 254096 DEBUG oslo_concurrency.lockutils [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.695 254096 DEBUG nova.compute.manager [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] No waiting events found dispatching network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.695 254096 WARNING nova.compute.manager [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received unexpected event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb for instance with vm_state active and task_state None.#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.957 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.959 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.959 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.960 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.960 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.963 254096 INFO nova.compute.manager [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Terminating instance#033[00m
Nov 25 11:36:29 np0005535469 nova_compute[254092]: 2025-11-25 16:36:29.967 254096 DEBUG nova.compute.manager [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:36:30 np0005535469 kernel: tapb2aa1e65-61 (unregistering): left promiscuous mode
Nov 25 11:36:30 np0005535469 NetworkManager[48891]: <info>  [1764088590.3895] device (tapb2aa1e65-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:36:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:30Z|00409|binding|INFO|Releasing lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb from this chassis (sb_readonly=0)
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:30Z|00410|binding|INFO|Setting lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb down in Southbound
Nov 25 11:36:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:30Z|00411|binding|INFO|Removing iface tapb2aa1e65-61 ovn-installed in OVS
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.415 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:0c:af 10.100.0.14'], port_security=['fa:16:3e:15:0c:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be1b8151-4e42-40db-813c-8b3b3e216949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b2aa1e65-61e9-41b3-a2f4-400d17aafefb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.417 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b2aa1e65-61e9-41b3-a2f4-400d17aafefb in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.418 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.420 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25559ceb-f1f8-423e-95c2-d1e81c325a78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.421 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:30 np0005535469 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 25 11:36:30 np0005535469 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002f.scope: Consumed 3.100s CPU time.
Nov 25 11:36:30 np0005535469 systemd-machined[216343]: Machine qemu-56-instance-0000002f terminated.
Nov 25 11:36:30 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : haproxy version is 2.8.14-c23fe91
Nov 25 11:36:30 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : path to executable is /usr/sbin/haproxy
Nov 25 11:36:30 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [WARNING]  (308047) : Exiting Master process...
Nov 25 11:36:30 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [ALERT]    (308047) : Current worker (308049) exited with code 143 (Terminated)
Nov 25 11:36:30 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [WARNING]  (308047) : All workers exited. Exiting... (0)
Nov 25 11:36:30 np0005535469 systemd[1]: libpod-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39.scope: Deactivated successfully.
Nov 25 11:36:30 np0005535469 podman[308080]: 2025-11-25 16:36:30.592096965 +0000 UTC m=+0.064912142 container died b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.618 254096 INFO nova.virt.libvirt.driver [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance destroyed successfully.#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.619 254096 DEBUG nova.objects.instance [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid be1b8151-4e42-40db-813c-8b3b3e216949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.641 254096 DEBUG nova.virt.libvirt.vif [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1994523502',display_name='tempest-DeleteServersTestJSON-server-1994523502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1994523502',id=47,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ahiysc8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:27Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=be1b8151-4e42-40db-813c-8b3b3e216949,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.641 254096 DEBUG nova.network.os_vif_util [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.642 254096 DEBUG nova.network.os_vif_util [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.642 254096 DEBUG os_vif [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.646 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2aa1e65-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.653 254096 INFO os_vif [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61')#033[00m
Nov 25 11:36:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-865c78254137b37ca1978a730536ff10014696e33d25bc8a2e8e961241b00cca-merged.mount: Deactivated successfully.
Nov 25 11:36:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39-userdata-shm.mount: Deactivated successfully.
Nov 25 11:36:30 np0005535469 podman[308080]: 2025-11-25 16:36:30.678055318 +0000 UTC m=+0.150870495 container cleanup b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:36:30 np0005535469 systemd[1]: libpod-conmon-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39.scope: Deactivated successfully.
Nov 25 11:36:30 np0005535469 podman[308118]: 2025-11-25 16:36:30.757086341 +0000 UTC m=+0.083495766 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:36:30 np0005535469 podman[308147]: 2025-11-25 16:36:30.807346885 +0000 UTC m=+0.098390040 container remove b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:36:30 np0005535469 podman[308126]: 2025-11-25 16:36:30.813032619 +0000 UTC m=+0.117819487 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.813 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11d86504-3077-4f15-b045-65efb721dcf3]: (4, ('Tue Nov 25 04:36:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39)\nb76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39\nTue Nov 25 04:36:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39)\nb76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.858 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2636a78f-c9a1-4253-b78b-211516a69b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.860 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:30 np0005535469 kernel: tape469a950-70: left promiscuous mode
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:30 np0005535469 podman[308141]: 2025-11-25 16:36:30.893161253 +0000 UTC m=+0.191970199 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.897 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34f0a325-8436-476c-9463-4fe5d43abc09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.924 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.925 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.925 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.926 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.926 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.928 254096 INFO nova.compute.manager [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Terminating instance#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.928 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f75fd5e-0c65-4c6c-95db-29fdf367dab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 nova_compute[254092]: 2025-11-25 16:36:30.929 254096 DEBUG nova.compute.manager [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.930 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c73a7c2-3470-4ed4-beec-421894c27d40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9635bdb6-44e8-4b0d-8b9c-d6771ee3111e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504404, 'reachable_time': 27699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308215, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.961 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:36:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.961 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[65fac341-9db2-474f-8f01-ba05ec35dae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:30 np0005535469 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 11:36:31 np0005535469 kernel: tap660536bc-d4 (unregistering): left promiscuous mode
Nov 25 11:36:31 np0005535469 NetworkManager[48891]: <info>  [1764088591.0780] device (tap660536bc-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.083 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:31Z|00412|binding|INFO|Releasing lport 660536bc-d4bf-4a4b-9515-06043951c25e from this chassis (sb_readonly=0)
Nov 25 11:36:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:31Z|00413|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e down in Southbound
Nov 25 11:36:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:31Z|00414|binding|INFO|Removing iface tap660536bc-d4 ovn-installed in OVS
Nov 25 11:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.096 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '11', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.097 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f unbound from our chassis#033[00m
Nov 25 11:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.099 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.110 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[670526fb-7c65-4215-9207-cd6fc444df94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.111 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace which is not needed anymore#033[00m
Nov 25 11:36:31 np0005535469 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 25 11:36:31 np0005535469 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000025.scope: Consumed 5.456s CPU time.
Nov 25 11:36:31 np0005535469 systemd-machined[216343]: Machine qemu-54-instance-00000025 terminated.
Nov 25 11:36:31 np0005535469 NetworkManager[48891]: <info>  [1764088591.3537] manager: (tap660536bc-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.371 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance destroyed successfully.#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.372 254096 DEBUG nova.objects.instance [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'resources' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 262 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.8 MiB/s wr, 357 op/s
Nov 25 11:36:31 np0005535469 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [NOTICE]   (307320) : haproxy version is 2.8.14-c23fe91
Nov 25 11:36:31 np0005535469 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [NOTICE]   (307320) : path to executable is /usr/sbin/haproxy
Nov 25 11:36:31 np0005535469 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [WARNING]  (307320) : Exiting Master process...
Nov 25 11:36:31 np0005535469 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [WARNING]  (307320) : Exiting Master process...
Nov 25 11:36:31 np0005535469 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [ALERT]    (307320) : Current worker (307322) exited with code 143 (Terminated)
Nov 25 11:36:31 np0005535469 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [WARNING]  (307320) : All workers exited. Exiting... (0)
Nov 25 11:36:31 np0005535469 systemd[1]: libpod-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95.scope: Deactivated successfully.
Nov 25 11:36:31 np0005535469 podman[308238]: 2025-11-25 16:36:31.420794848 +0000 UTC m=+0.196860222 container died 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.435 254096 DEBUG nova.virt.libvirt.vif [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:12Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.436 254096 DEBUG nova.network.os_vif_util [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.437 254096 DEBUG nova.network.os_vif_util [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.438 254096 DEBUG os_vif [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.440 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap660536bc-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.448 254096 INFO os_vif [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4')#033[00m
Nov 25 11:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.617 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95-userdata-shm.mount: Deactivated successfully.
Nov 25 11:36:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-87e86859f0aeb96e23040d62d31f0a380e999b425012da5a016c66f9024dcf09-merged.mount: Deactivated successfully.
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.772 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-unplugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] No waiting events found dispatching network-vif-unplugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-unplugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] No waiting events found dispatching network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 WARNING nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received unexpected event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.776 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.776 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.916 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.916 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:31 np0005535469 podman[308238]: 2025-11-25 16:36:31.984950983 +0000 UTC m=+0.761016367 container cleanup 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:36:31 np0005535469 nova_compute[254092]: 2025-11-25 16:36:31.991 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:36:31 np0005535469 systemd[1]: libpod-conmon-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95.scope: Deactivated successfully.
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.086 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.087 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.103 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.105 254096 INFO nova.compute.claims [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:36:32 np0005535469 podman[308296]: 2025-11-25 16:36:32.126584126 +0000 UTC m=+0.090862877 container remove 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f08875ca-4a51-4c6a-840f-e8f06092bdba]: (4, ('Tue Nov 25 04:36:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95)\n62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95\nTue Nov 25 04:36:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95)\n62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.140 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd173a2-8d34-40fd-a2b4-ca1abca2726c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.142 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:32 np0005535469 kernel: tap3960d4c5-60: left promiscuous mode
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87b87eb3-14eb-422b-9d15-26ac4ca1efe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.167 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3d7e49-770e-46ae-8ab0-5719229e31cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.171 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80a26766-f4f2-4286-83fd-be0f819f5a24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1ab4b4-68ac-47f3-bc35-3126b0eac51d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502959, 'reachable_time': 26314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308310, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.192 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:36:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.192 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ea750019-4e26-4414-9675-d6e807aa1d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:32 np0005535469 systemd[1]: run-netns-ovnmeta\x2d3960d4c5\x2d60d7\x2d49e3\x2db26d\x2df1317dd96f9f.mount: Deactivated successfully.
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.298 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.438 254096 INFO nova.virt.libvirt.driver [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deleting instance files /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949_del#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.440 254096 INFO nova.virt.libvirt.driver [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deletion of /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949_del complete#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.500 254096 INFO nova.compute.manager [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 2.53 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.501 254096 DEBUG oslo.service.loopingcall [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.502 254096 DEBUG nova.compute.manager [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.503 254096 DEBUG nova.network.neutron [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:36:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:36:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876505788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.809 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.821 254096 DEBUG nova.compute.provider_tree [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.839 254096 DEBUG nova.scheduler.client.report [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.871 254096 INFO nova.virt.libvirt.driver [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deleting instance files /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_del#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.873 254096 INFO nova.virt.libvirt.driver [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deletion of /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_del complete#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.878 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.879 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.960 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.961 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.971 254096 INFO nova.compute.manager [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Took 2.04 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.972 254096 DEBUG oslo.service.loopingcall [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.973 254096 DEBUG nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.973 254096 DEBUG nova.network.neutron [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:36:32 np0005535469 nova_compute[254092]: 2025-11-25 16:36:32.982 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.010 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.150 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.152 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.153 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Creating image(s)#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.176 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.200 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.222 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.227 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "34d43962814c7a60ec771694e1897a2898936965" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.228 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "34d43962814c7a60ec771694e1897a2898936965" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.301 254096 DEBUG nova.policy [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:36:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 262 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 4.2 MiB/s wr, 315 op/s
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.556 254096 DEBUG nova.network.neutron [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.618 254096 INFO nova.compute.manager [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 1.11 seconds to deallocate network for instance.#033[00m
Nov 25 11:36:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:33Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:7b:cf 10.100.0.6
Nov 25 11:36:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:33Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:7b:cf 10.100.0.6
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.678 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.680 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.810 254096 DEBUG nova.virt.libvirt.imagebackend [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.878 254096 DEBUG oslo_concurrency.processutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.920 254096 DEBUG nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG oslo_concurrency.lockutils [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG oslo_concurrency.lockutils [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG oslo_concurrency.lockutils [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.922 254096 WARNING nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state deleting.
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.922 254096 DEBUG nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-deleted-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.929 254096 DEBUG nova.virt.libvirt.imagebackend [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 11:36:33 np0005535469 nova_compute[254092]: 2025-11-25 16:36:33.931 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd@snap to None/6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 11:36:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:36:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825584110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.398 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "34d43962814c7a60ec771694e1897a2898936965" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.442 254096 DEBUG oslo_concurrency.processutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.446 254096 DEBUG nova.network.neutron [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.498 254096 INFO nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Took 1.52 seconds to deallocate network for instance.
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.505 254096 DEBUG nova.compute.manager [req-2a80050c-641b-4ee4-ae60-a3e0f3c211a3 req-0f984a44-05a3-4b6d-8e33-103746b41c39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-deleted-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.554 254096 DEBUG nova.compute.provider_tree [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.561 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.565 254096 DEBUG nova.objects.instance [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ca43770-6e19-4279-9bf2-c44dcc4d5260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.574 254096 DEBUG nova.scheduler.client.report [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.587 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.588 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Ensure instance console log exists: /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.589 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.589 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.589 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.597 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.600 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.639 254096 INFO nova.scheduler.client.report [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance be1b8151-4e42-40db-813c-8b3b3e216949
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.693 254096 DEBUG oslo_concurrency.processutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.740 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:34 np0005535469 nova_compute[254092]: 2025-11-25 16:36:34.809 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Successfully created port: a7a57913-2b29-44b6-ba43-4f56bfad86dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:36:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:36:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418472395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:36:35 np0005535469 nova_compute[254092]: 2025-11-25 16:36:35.230 254096 DEBUG oslo_concurrency.processutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:36:35 np0005535469 nova_compute[254092]: 2025-11-25 16:36:35.240 254096 DEBUG nova.compute.provider_tree [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:36:35 np0005535469 nova_compute[254092]: 2025-11-25 16:36:35.269 254096 DEBUG nova.scheduler.client.report [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:36:35 np0005535469 nova_compute[254092]: 2025-11-25 16:36:35.300 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:35 np0005535469 nova_compute[254092]: 2025-11-25 16:36:35.349 254096 INFO nova.scheduler.client.report [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Deleted allocations for instance a3d5d205-98f0-4820-a96c-7f3e59d0cdd9
Nov 25 11:36:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 178 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 275 op/s
Nov 25 11:36:35 np0005535469 nova_compute[254092]: 2025-11-25 16:36:35.688 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.041 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Successfully updated port: a7a57913-2b29-44b6-ba43-4f56bfad86dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.105 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.106 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.106 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.344 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.632 254096 DEBUG nova.compute.manager [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-changed-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.633 254096 DEBUG nova.compute.manager [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Refreshing instance network info cache due to event network-changed-a7a57913-2b29-44b6-ba43-4f56bfad86dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:36:36 np0005535469 nova_compute[254092]: 2025-11-25 16:36:36.633 254096 DEBUG oslo_concurrency.lockutils [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:36:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 167 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 268 op/s
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.472 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updating instance_info_cache with network_info: [{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.518 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.518 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance network_info: |[{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.519 254096 DEBUG oslo_concurrency.lockutils [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.519 254096 DEBUG nova.network.neutron [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Refreshing network info cache for port a7a57913-2b29-44b6-ba43-4f56bfad86dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.521 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start _get_guest_xml network_info=[{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}}
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:36:21Z,direct_url=<?>,disk_format='raw',id=12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd,min_disk=1,min_ram=0,name='tempest-test-snap-2087609465',owner='8bd01e0913564ac783fae350d6861e24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:36:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.527 254096 WARNING nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.531 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.531 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.538 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.539 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.539 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.539 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:36:21Z,direct_url=<?>,disk_format='raw',id=12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd,min_disk=1,min_ram=0,name='tempest-test-snap-2087609465',owner='8bd01e0913564ac783fae350d6861e24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:36:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.540 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.540 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.540 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.542 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.542 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.545 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:36:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278933619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:36:37 np0005535469 nova_compute[254092]: 2025-11-25 16:36:37.998 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.023 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.027 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:36:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527864369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.479 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.481 254096 DEBUG nova.virt.libvirt.vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1002945108',display_name='tempest-ImagesTestJSON-server-1002945108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1002945108',id=48,image_ref='12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-szp3bkr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e549d4e8-b824-480b-b81a-83e2ea1eff12',image_min_disk='1',image_min_ram='0',image_owner_id='8bd01e0913564ac783fae350d6861e24',image_owner_project_name='tempest-ImagesTestJSON-217824554',image_owner_user_name='tempest-ImagesTestJSON-217824554-project-member',image_user_id='75e65df891a54a2caabb073a427430b9',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:33Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=6ca43770-6e19-4279-9bf2-c44dcc4d5260,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.481 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.482 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.484 254096 DEBUG nova.objects.instance [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ca43770-6e19-4279-9bf2-c44dcc4d5260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.496 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <uuid>6ca43770-6e19-4279-9bf2-c44dcc4d5260</uuid>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <name>instance-00000030</name>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesTestJSON-server-1002945108</nova:name>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:36:37</nova:creationTime>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <nova:port uuid="a7a57913-2b29-44b6-ba43-4f56bfad86dc">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <entry name="serial">6ca43770-6e19-4279-9bf2-c44dcc4d5260</entry>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <entry name="uuid">6ca43770-6e19-4279-9bf2-c44dcc4d5260</entry>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ff:6f:40"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <target dev="tapa7a57913-2b"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/console.log" append="off"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:36:38 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:36:38 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:36:38 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:36:38 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.498 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Preparing to wait for external event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.498 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.499 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.499 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.500 254096 DEBUG nova.virt.libvirt.vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1002945108',display_name='tempest-ImagesTestJSON-server-1002945108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1002945108',id=48,image_ref='12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-szp3bkr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e549d4e8-b824-480b-b81a-83e2ea1eff12',image_min_disk='1',image_min_ram='0',image_owner_id='8bd01e0913564ac783fae350d6861e24',image_owner_project_name='tempest-ImagesTestJSON-217824554',image_owner_user_name='tempest-ImagesTestJSON-217824554-project-member',image_user_id='75e65df891a54a2caabb073a427430b9',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:33Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=6ca43770-6e19-4279-9bf2-c44dcc4d5260,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.500 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.500 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.501 254096 DEBUG os_vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.502 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.502 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.504 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7a57913-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.505 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7a57913-2b, col_values=(('external_ids', {'iface-id': 'a7a57913-2b29-44b6-ba43-4f56bfad86dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:6f:40', 'vm-uuid': '6ca43770-6e19-4279-9bf2-c44dcc4d5260'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:38 np0005535469 NetworkManager[48891]: <info>  [1764088598.5071] manager: (tapa7a57913-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.516 254096 INFO os_vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b')#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.581 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.582 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.582 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:ff:6f:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.583 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Using config drive#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.606 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.874 254096 DEBUG nova.network.neutron [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updated VIF entry in instance network info cache for port a7a57913-2b29-44b6-ba43-4f56bfad86dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.875 254096 DEBUG nova.network.neutron [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updating instance_info_cache with network_info: [{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:38 np0005535469 nova_compute[254092]: 2025-11-25 16:36:38.894 254096 DEBUG oslo_concurrency.lockutils [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.038 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Creating config drive at /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.045 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1fqgyfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.182 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1fqgyfh" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.207 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.211 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 167 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.653 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.654 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deleting local config drive /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config because it was imported into RBD.#033[00m
Nov 25 11:36:39 np0005535469 kernel: tapa7a57913-2b: entered promiscuous mode
Nov 25 11:36:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:39Z|00415|binding|INFO|Claiming lport a7a57913-2b29-44b6-ba43-4f56bfad86dc for this chassis.
Nov 25 11:36:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:39Z|00416|binding|INFO|a7a57913-2b29-44b6-ba43-4f56bfad86dc: Claiming fa:16:3e:ff:6f:40 10.100.0.14
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:39 np0005535469 NetworkManager[48891]: <info>  [1764088599.7268] manager: (tapa7a57913-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Nov 25 11:36:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:39Z|00417|binding|INFO|Setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc ovn-installed in OVS
Nov 25 11:36:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:39Z|00418|binding|INFO|Setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc up in Southbound
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.743 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.745 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.746 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:39 np0005535469 systemd-udevd[308691]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:36:39 np0005535469 NetworkManager[48891]: <info>  [1764088599.7668] device (tapa7a57913-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:36:39 np0005535469 NetworkManager[48891]: <info>  [1764088599.7681] device (tapa7a57913-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b39935cb-57a7-40ba-927a-2e0cd7b804b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:39 np0005535469 systemd-machined[216343]: New machine qemu-57-instance-00000030.
Nov 25 11:36:39 np0005535469 systemd[1]: Started Virtual Machine qemu-57-instance-00000030.
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.793 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[530cb36a-fe78-4259-b12e-29e88106906b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.798 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a9310f58-08d8-4ccd-93d1-b41177597e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.826 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29365dd5-a6fa-43db-9305-7670c482e1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.849 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b74f69-ad03-4a70-a9f0-dd2a1d43b189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308702, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.869 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de6d8265-308a-4fb7-ad00-fbec93e63f14]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308705, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308705, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.870 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:39 np0005535469 nova_compute[254092]: 2025-11-25 16:36:39.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.873 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:36:40
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data']
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.496 254096 DEBUG nova.compute.manager [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.496 254096 DEBUG oslo_concurrency.lockutils [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.497 254096 DEBUG oslo_concurrency.lockutils [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.497 254096 DEBUG oslo_concurrency.lockutils [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.497 254096 DEBUG nova.compute.manager [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Processing event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:36:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:40Z|00419|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.672 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.672 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088600.671455, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.672 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Started (Lifecycle Event)#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.678 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.690 254096 INFO nova.virt.libvirt.driver [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance spawned successfully.#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.690 254096 INFO nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 7.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.691 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.698 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.702 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.727 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.727 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088600.6747887, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.727 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.759 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.764 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088600.6762948, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.764 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.800 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.804 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.821 254096 INFO nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 8.77 seconds to build instance.#033[00m
Nov 25 11:36:40 np0005535469 nova_compute[254092]: 2025-11-25 16:36:40.844 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 167 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.018 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.018 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.053 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.221 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.222 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.233 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.234 254096 INFO nova.compute.claims [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.409 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.831 254096 DEBUG nova.compute.manager [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.832 254096 DEBUG oslo_concurrency.lockutils [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.832 254096 DEBUG oslo_concurrency.lockutils [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.833 254096 DEBUG oslo_concurrency.lockutils [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.833 254096 DEBUG nova.compute.manager [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] No waiting events found dispatching network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.833 254096 WARNING nova.compute.manager [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received unexpected event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc for instance with vm_state active and task_state None.#033[00m
Nov 25 11:36:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:36:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198775094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.932 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.938 254096 DEBUG nova.compute.provider_tree [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:36:42 np0005535469 nova_compute[254092]: 2025-11-25 16:36:42.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.015 254096 DEBUG nova.scheduler.client.report [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.239 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.240 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.319 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.320 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.382 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.395 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.396 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.396 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.396 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.397 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.398 254096 INFO nova.compute.manager [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Terminating instance#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.399 254096 DEBUG nova.compute.manager [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.401 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:36:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 167 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 156 op/s
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.502 254096 DEBUG nova.policy [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.561 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.562 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.563 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Creating image(s)#033[00m
Nov 25 11:36:43 np0005535469 kernel: tapa7a57913-2b (unregistering): left promiscuous mode
Nov 25 11:36:43 np0005535469 NetworkManager[48891]: <info>  [1764088603.5889] device (tapa7a57913-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.594 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00420|binding|INFO|Releasing lport a7a57913-2b29-44b6-ba43-4f56bfad86dc from this chassis (sb_readonly=0)
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00421|binding|INFO|Setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc down in Southbound
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00422|binding|INFO|Removing iface tapa7a57913-2b ovn-installed in OVS
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.611 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.612 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.613 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7#033[00m
Nov 25 11:36:43 np0005535469 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 25 11:36:43 np0005535469 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Consumed 3.666s CPU time.
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d37bac8-20e6-4112-86ad-b0b040f0508b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 systemd-machined[216343]: Machine qemu-57-instance-00000030 terminated.
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.636 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.659 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.664 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdb32c1-c12d-418f-a0a7-80e63bc27b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.664 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.666 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea7196b-866b-4a7c-ad8b-ec8269809165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.698 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[52cf40a3-d141-485d-a8e2-fd5a51b903f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.717 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20a1b4ca-ff53-43ff-9e7c-873249b53275]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308838, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.737 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.738 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.739 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.740 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.739 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[824687e7-6546-4e3c-b553-ed43b4e7ec57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308839, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308839, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.801 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.809 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:43 np0005535469 kernel: tapa7a57913-2b: entered promiscuous mode
Nov 25 11:36:43 np0005535469 systemd-udevd[308806]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:36:43 np0005535469 NetworkManager[48891]: <info>  [1764088603.8319] manager: (tapa7a57913-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Nov 25 11:36:43 np0005535469 kernel: tapa7a57913-2b (unregistering): left promiscuous mode
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00423|binding|INFO|Claiming lport a7a57913-2b29-44b6-ba43-4f56bfad86dc for this chassis.
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00424|binding|INFO|a7a57913-2b29-44b6-ba43-4f56bfad86dc: Claiming fa:16:3e:ff:6f:40 10.100.0.14
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00425|if_status|INFO|Dropped 2 log messages in last 56 seconds (most recently, 56 seconds ago) due to excessive rate
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00426|if_status|INFO|Not setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc down as sb is readonly
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.895 254096 INFO nova.virt.libvirt.driver [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance destroyed successfully.#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.896 254096 DEBUG nova.objects.instance [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 6ca43770-6e19-4279-9bf2-c44dcc4d5260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.908 254096 DEBUG nova.virt.libvirt.vif [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1002945108',display_name='tempest-ImagesTestJSON-server-1002945108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1002945108',id=48,image_ref='12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-szp3bkr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e549d4e8-b824-480b-b81a-83e2ea1eff12',image_min_disk='1',image_min_ram='0',image_owner_id='8bd01e0913564ac783fae350d6861e24',image_owner_project_name='tempest-ImagesTestJSON-217824554',image_owner_user_name='tempest-ImagesTestJSON-217824554-project-member',image_user_id='75e65df891a54a2caabb073a427430b9',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:40Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=6ca43770-6e19-4279-9bf2-c44dcc4d5260,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.908 254096 DEBUG nova.network.os_vif_util [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.909 254096 DEBUG nova.network.os_vif_util [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.910 254096 DEBUG os_vif [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:36:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:43Z|00427|binding|INFO|Releasing lport a7a57913-2b29-44b6-ba43-4f56bfad86dc from this chassis (sb_readonly=0)
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.913 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7a57913-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.913 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.915 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.917 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.932 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:43 np0005535469 nova_compute[254092]: 2025-11-25 16:36:43.941 254096 INFO os_vif [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b')#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[afc86e7a-107b-42d1-9ba5-1ba91295b446]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d14b5643-30f1-4e0d-bcf1-7d8de098431e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.987 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c3becc86-2238-4e58-a193-2059d251ea28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.020 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab4d9d2-2b23-4085-a412-d74eb73c4de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.038 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5fffeb4e-6b67-4ad2-b2a0-f97b9f5bf125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308901, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.058 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c35c2bdf-ff78-44db-aba0-8fc6c7b3f62a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308902, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308902, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.060 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.135 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.139 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.141 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.141 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.142 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.143 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.164 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8055af2-eb7c-4e79-abc0-4db17b23f233]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.202 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbbec43-d8e1-40c6-b0e8-d9161b11b0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.206 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ca962b42-7286-4929-a9d5-c2b2260d46b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.241 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0ca92f-c262-462f-8c67-e0338e1cfd03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df620135-1148-4993-b463-c15c795023d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308911, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15c75a20-589b-4580-969d-fd762be96e74]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308912, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308912, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.283 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.285 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG nova.compute.manager [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-unplugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG oslo_concurrency.lockutils [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG oslo_concurrency.lockutils [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG oslo_concurrency.lockutils [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG nova.compute.manager [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] No waiting events found dispatching network-vif-unplugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:44 np0005535469 nova_compute[254092]: 2025-11-25 16:36:44.912 254096 DEBUG nova.compute.manager [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-unplugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:36:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 167 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 870 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Nov 25 11:36:45 np0005535469 nova_compute[254092]: 2025-11-25 16:36:45.419 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Successfully created port: 379de8c7-cb88-4a89-8008-f7bb1cbfc09b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:36:45 np0005535469 nova_compute[254092]: 2025-11-25 16:36:45.616 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088590.6141033, be1b8151-4e42-40db-813c-8b3b3e216949 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:45 np0005535469 nova_compute[254092]: 2025-11-25 16:36:45.616 254096 INFO nova.compute.manager [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:36:45 np0005535469 nova_compute[254092]: 2025-11-25 16:36:45.656 254096 DEBUG nova.compute.manager [None req-66ddf397-aaef-48a2-99c6-1d8ef982d90a - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.370 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088591.3692746, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.371 254096 INFO nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.392 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.417 254096 DEBUG nova.compute.manager [None req-b53db4e9-3526-443c-85d1-04be77966ecd - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.448 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.918 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Successfully updated port: 379de8c7-cb88-4a89-8008-f7bb1cbfc09b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.971 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.971 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.971 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.979 254096 DEBUG nova.objects.instance [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.992 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.992 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Ensure instance console log exists: /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.993 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.993 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:46 np0005535469 nova_compute[254092]: 2025-11-25 16:36:46.993 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.128 254096 DEBUG nova.compute.manager [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.129 254096 DEBUG oslo_concurrency.lockutils [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.129 254096 DEBUG oslo_concurrency.lockutils [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.130 254096 DEBUG oslo_concurrency.lockutils [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.130 254096 DEBUG nova.compute.manager [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] No waiting events found dispatching network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.130 254096 WARNING nova.compute.manager [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received unexpected event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.176 254096 DEBUG nova.compute.manager [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-changed-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.176 254096 DEBUG nova.compute.manager [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Refreshing instance network info cache due to event network-changed-379de8c7-cb88-4a89-8008-f7bb1cbfc09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.177 254096 DEBUG oslo_concurrency.lockutils [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.301 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:36:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 187 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 147 op/s
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.967 254096 INFO nova.virt.libvirt.driver [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deleting instance files /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260_del#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.968 254096 INFO nova.virt.libvirt.driver [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deletion of /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260_del complete#033[00m
Nov 25 11:36:47 np0005535469 nova_compute[254092]: 2025-11-25 16:36:47.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.131 254096 INFO nova.compute.manager [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 4.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.132 254096 DEBUG oslo.service.loopingcall [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.132 254096 DEBUG nova.compute.manager [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.133 254096 DEBUG nova.network.neutron [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.234 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updating instance_info_cache with network_info: [{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.297 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.298 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance network_info: |[{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.298 254096 DEBUG oslo_concurrency.lockutils [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.298 254096 DEBUG nova.network.neutron [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Refreshing network info cache for port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.302 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start _get_guest_xml network_info=[{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.310 254096 WARNING nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.316 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.318 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.330 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.331 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.331 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.331 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.332 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.332 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.334 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.334 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.334 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.335 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.339 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:36:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4049767500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.837 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.867 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.873 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:48 np0005535469 nova_compute[254092]: 2025-11-25 16:36:48.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:36:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 81250df2-611f-4359-94d7-cff18cf68d2a does not exist
Nov 25 11:36:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 864aad08-dcaf-4ef4-b33e-5a10c4bc400e does not exist
Nov 25 11:36:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 80a7b768-b0fb-44f8-ab45-7bf091435fc0 does not exist
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471800074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 187 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 675 KiB/s wr, 98 op/s
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.519 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.520 254096 DEBUG nova.virt.libvirt.vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1926162330',display_name='tempest-DeleteServersTestJSON-server-1926162330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1926162330',id=49,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-3hu4p32f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSO
N-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:43Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=288ada45-c7fc-4ddc-8b83-1c03ffa14fe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.521 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.521 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.522 254096 DEBUG nova.objects.instance [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.536 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <uuid>288ada45-c7fc-4ddc-8b83-1c03ffa14fe6</uuid>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <name>instance-00000031</name>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersTestJSON-server-1926162330</nova:name>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:36:48</nova:creationTime>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <nova:port uuid="379de8c7-cb88-4a89-8008-f7bb1cbfc09b">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <entry name="serial">288ada45-c7fc-4ddc-8b83-1c03ffa14fe6</entry>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <entry name="uuid">288ada45-c7fc-4ddc-8b83-1c03ffa14fe6</entry>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:33:4b:3b"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <target dev="tap379de8c7-cb"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/console.log" append="off"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:36:49 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:36:49 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:36:49 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:36:49 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.538 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Preparing to wait for external event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.538 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.538 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.539 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.539 254096 DEBUG nova.virt.libvirt.vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1926162330',display_name='tempest-DeleteServersTestJSON-server-1926162330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1926162330',id=49,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-3hu4p32f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:43Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=288ada45-c7fc-4ddc-8b83-1c03ffa14fe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.540 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.540 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.541 254096 DEBUG os_vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.542 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.542 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.545 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap379de8c7-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.545 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap379de8c7-cb, col_values=(('external_ids', {'iface-id': '379de8c7-cb88-4a89-8008-f7bb1cbfc09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:4b:3b', 'vm-uuid': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:49 np0005535469 NetworkManager[48891]: <info>  [1764088609.5486] manager: (tap379de8c7-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.549 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.555 254096 INFO os_vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb')#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.608 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.608 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.608 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:33:4b:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.609 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Using config drive#033[00m
Nov 25 11:36:49 np0005535469 nova_compute[254092]: 2025-11-25 16:36:49.644 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:36:49 np0005535469 podman[309345]: 2025-11-25 16:36:49.917512252 +0000 UTC m=+0.110221562 container create 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 11:36:49 np0005535469 podman[309345]: 2025-11-25 16:36:49.835967909 +0000 UTC m=+0.028677239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:36:50 np0005535469 systemd[1]: Started libpod-conmon-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope.
Nov 25 11:36:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:36:50 np0005535469 podman[309345]: 2025-11-25 16:36:50.081277214 +0000 UTC m=+0.273986544 container init 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:36:50 np0005535469 podman[309345]: 2025-11-25 16:36:50.093961989 +0000 UTC m=+0.286671299 container start 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:36:50 np0005535469 elated_davinci[309361]: 167 167
Nov 25 11:36:50 np0005535469 systemd[1]: libpod-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope: Deactivated successfully.
Nov 25 11:36:50 np0005535469 conmon[309361]: conmon 9253bbe2bf98b0ca1b73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope/container/memory.events
Nov 25 11:36:50 np0005535469 podman[309345]: 2025-11-25 16:36:50.125163525 +0000 UTC m=+0.317872835 container attach 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:36:50 np0005535469 podman[309345]: 2025-11-25 16:36:50.126173802 +0000 UTC m=+0.318883122 container died 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:36:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-310f3666428d13a55798413474193d4c7986f942c0020f54d37ff7187a07444d-merged.mount: Deactivated successfully.
Nov 25 11:36:50 np0005535469 podman[309345]: 2025-11-25 16:36:50.342750958 +0000 UTC m=+0.535460268 container remove 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:36:50 np0005535469 systemd[1]: libpod-conmon-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope: Deactivated successfully.
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.474 254096 DEBUG nova.network.neutron [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.499 254096 INFO nova.compute.manager [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 2.37 seconds to deallocate network for instance.#033[00m
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.543 254096 DEBUG nova.compute.manager [req-9148fff9-0440-460f-a96c-e2ef305080af req-c6c08cdc-53df-4c38-87e3-715dd7fd52d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-deleted-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:50 np0005535469 podman[309385]: 2025-11-25 16:36:50.543597887 +0000 UTC m=+0.052639630 container create 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.561 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.562 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:50 np0005535469 systemd[1]: Started libpod-conmon-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope.
Nov 25 11:36:50 np0005535469 podman[309385]: 2025-11-25 16:36:50.518717602 +0000 UTC m=+0.027759365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:36:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:50 np0005535469 podman[309385]: 2025-11-25 16:36:50.654920007 +0000 UTC m=+0.163961780 container init 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:36:50 np0005535469 podman[309385]: 2025-11-25 16:36:50.666419329 +0000 UTC m=+0.175461072 container start 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:36:50 np0005535469 podman[309385]: 2025-11-25 16:36:50.674812787 +0000 UTC m=+0.183854570 container attach 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.688 254096 DEBUG nova.network.neutron [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updated VIF entry in instance network info cache for port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.689 254096 DEBUG nova.network.neutron [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updating instance_info_cache with network_info: [{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.707 254096 DEBUG oslo_concurrency.lockutils [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:36:50 np0005535469 nova_compute[254092]: 2025-11-25 16:36:50.708 254096 DEBUG oslo_concurrency.processutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011067962696164615 of space, bias 1.0, pg target 0.33203888088493844 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:36:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:36:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247762840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.218 254096 DEBUG oslo_concurrency.processutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.225 254096 DEBUG nova.compute.provider_tree [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.238 254096 DEBUG nova.scheduler.client.report [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.258 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.297 254096 INFO nova.scheduler.client.report [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 6ca43770-6e19-4279-9bf2-c44dcc4d5260
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.359 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:36:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.741 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Creating config drive at /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.746 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2zmx2777 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:36:51 np0005535469 zealous_payne[309401]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:36:51 np0005535469 zealous_payne[309401]: --> relative data size: 1.0
Nov 25 11:36:51 np0005535469 zealous_payne[309401]: --> All data devices are unavailable
Nov 25 11:36:51 np0005535469 systemd[1]: libpod-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope: Deactivated successfully.
Nov 25 11:36:51 np0005535469 systemd[1]: libpod-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope: Consumed 1.156s CPU time.
Nov 25 11:36:51 np0005535469 podman[309385]: 2025-11-25 16:36:51.883724123 +0000 UTC m=+1.392765886 container died 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:36:51 np0005535469 nova_compute[254092]: 2025-11-25 16:36:51.890 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2zmx2777" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:36:52 np0005535469 nova_compute[254092]: 2025-11-25 16:36:52.076 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:36:52 np0005535469 nova_compute[254092]: 2025-11-25 16:36:52.080 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:36:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151-merged.mount: Deactivated successfully.
Nov 25 11:36:52 np0005535469 nova_compute[254092]: 2025-11-25 16:36:52.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 11:36:53 np0005535469 podman[309385]: 2025-11-25 16:36:53.988926677 +0000 UTC m=+3.497968420 container remove 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 11:36:54 np0005535469 systemd[1]: libpod-conmon-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope: Deactivated successfully.
Nov 25 11:36:54 np0005535469 nova_compute[254092]: 2025-11-25 16:36:54.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:54 np0005535469 podman[309645]: 2025-11-25 16:36:54.607954221 +0000 UTC m=+0.024031103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:36:55 np0005535469 podman[309645]: 2025-11-25 16:36:55.033444844 +0000 UTC m=+0.449521746 container create f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:36:55 np0005535469 systemd[1]: Started libpod-conmon-f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d.scope.
Nov 25 11:36:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:36:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508781239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:36:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:36:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508781239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:36:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:36:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 11:36:55 np0005535469 podman[309645]: 2025-11-25 16:36:55.965242133 +0000 UTC m=+1.381319015 container init f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:36:55 np0005535469 podman[309645]: 2025-11-25 16:36:55.973533578 +0000 UTC m=+1.389610440 container start f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:36:56 np0005535469 romantic_payne[309661]: 167 167
Nov 25 11:36:56 np0005535469 systemd[1]: libpod-f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d.scope: Deactivated successfully.
Nov 25 11:36:56 np0005535469 podman[309645]: 2025-11-25 16:36:56.481088699 +0000 UTC m=+1.897165591 container attach f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 11:36:56 np0005535469 podman[309645]: 2025-11-25 16:36:56.482045884 +0000 UTC m=+1.898122756 container died f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:36:56 np0005535469 nova_compute[254092]: 2025-11-25 16:36:56.729 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:36:56 np0005535469 nova_compute[254092]: 2025-11-25 16:36:56.731 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deleting local config drive /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config because it was imported into RBD.
Nov 25 11:36:56 np0005535469 kernel: tap379de8c7-cb: entered promiscuous mode
Nov 25 11:36:56 np0005535469 NetworkManager[48891]: <info>  [1764088616.7875] manager: (tap379de8c7-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 25 11:36:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:56Z|00428|binding|INFO|Claiming lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b for this chassis.
Nov 25 11:36:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:56Z|00429|binding|INFO|379de8c7-cb88-4a89-8008-f7bb1cbfc09b: Claiming fa:16:3e:33:4b:3b 10.100.0.8
Nov 25 11:36:56 np0005535469 nova_compute[254092]: 2025-11-25 16:36:56.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:56 np0005535469 nova_compute[254092]: 2025-11-25 16:36:56.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:56 np0005535469 systemd-udevd[309688]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:36:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8a811f387cdcd39b961103710d0e8c18c0edaa33832bd59ffd3ed72815f9af59-merged.mount: Deactivated successfully.
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.827 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:36:56 np0005535469 NetworkManager[48891]: <info>  [1764088616.8299] device (tap379de8c7-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.828 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.829 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:36:56 np0005535469 NetworkManager[48891]: <info>  [1764088616.8314] device (tap379de8c7-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:36:56 np0005535469 systemd-machined[216343]: New machine qemu-58-instance-00000031.
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2242b911-42d3-4d9a-80f2-a93ec38e3f0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.843 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.845 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.845 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66b72d6c-fb37-41d4-8bef-fdb7c09dc2c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47370103-572e-45ee-86e5-bf2696653fbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.858 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[166b9a5a-6c76-4e6b-9993-6854735f00e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 systemd[1]: Started Virtual Machine qemu-58-instance-00000031.
Nov 25 11:36:56 np0005535469 nova_compute[254092]: 2025-11-25 16:36:56.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:56Z|00430|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b ovn-installed in OVS
Nov 25 11:36:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:56Z|00431|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b up in Southbound
Nov 25 11:36:56 np0005535469 nova_compute[254092]: 2025-11-25 16:36:56.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93da4d28-c456-4f8d-afce-cf242d672333]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.910 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c47fac93-823a-48cc-ae03-2ebcb7f053c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04e73cc5-121b-48ad-a814-afeb23e7fbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 NetworkManager[48891]: <info>  [1764088616.9177] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.948 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[76ca34fb-ff27-4857-b68b-bb16bff69956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.951 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3829d4f4-f660-4cbc-9e46-0f60e911312e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 NetworkManager[48891]: <info>  [1764088616.9699] device (tape469a950-70): carrier: link connected
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.975 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[721c6ee3-a82f-4044-8b1f-88e62379e66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:36:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.991 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8438512-d3b3-437e-86cd-2daac9872407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507456, 'reachable_time': 36528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309724, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05c8b3cf-5525-4a2f-9c45-583274f000fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507456, 'tstamp': 507456}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309725, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c526922c-97f6-4ab7-97ec-7ad7aa20795d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507456, 'reachable_time': 36528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309726, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e95576f-2b1c-4c7c-bb7f-8df65d1017f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.107 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9a06ca57-c8c9-41e3-beb2-61632836d786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.108 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:57 np0005535469 NetworkManager[48891]: <info>  [1764088617.1116] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Nov 25 11:36:57 np0005535469 kernel: tape469a950-70: entered promiscuous mode
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.113 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:36:57Z|00432|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.131 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42679748-0f75-4cc5-b7fb-07aa01b8fae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.132 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:36:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.133 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:36:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 11:36:57 np0005535469 podman[309645]: 2025-11-25 16:36:57.560315667 +0000 UTC m=+2.976392529 container remove f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 11:36:57 np0005535469 systemd[1]: libpod-conmon-f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d.scope: Deactivated successfully.
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.701 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088617.7009575, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.702 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Started (Lifecycle Event)#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.724 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.730 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088617.702132, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.730 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.747 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.753 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:36:57 np0005535469 podman[309801]: 2025-11-25 16:36:57.666362844 +0000 UTC m=+0.019894921 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.769 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:36:57 np0005535469 nova_compute[254092]: 2025-11-25 16:36:57.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.366 254096 DEBUG nova.compute.manager [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.367 254096 DEBUG oslo_concurrency.lockutils [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.367 254096 DEBUG oslo_concurrency.lockutils [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.367 254096 DEBUG oslo_concurrency.lockutils [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.368 254096 DEBUG nova.compute.manager [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Processing event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.368 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.372 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088618.372311, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.372 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.376 254096 INFO nova.virt.libvirt.driver [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance spawned successfully.#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.377 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.403 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.407 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.408 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.408 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.408 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.409 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.409 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.413 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.440 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.593 254096 INFO nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 15.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.594 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:58 np0005535469 podman[309820]: 2025-11-25 16:36:58.57522544 +0000 UTC m=+0.872002337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.892 254096 INFO nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 16.72 seconds to build instance.#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.894 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088603.8917487, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.894 254096 INFO nova.compute.manager [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.925 254096 DEBUG nova.compute.manager [None req-93336a62-10ea-4730-a8a3-944c3bc44474 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:36:58 np0005535469 podman[309801]: 2025-11-25 16:36:58.928228997 +0000 UTC m=+1.281761054 container create ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:36:58 np0005535469 nova_compute[254092]: 2025-11-25 16:36:58.965 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:36:59 np0005535469 systemd[1]: Started libpod-conmon-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope.
Nov 25 11:36:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:36:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe71abb6aef0ccd46cc56760891a8c6424dbecf1dc6c300940a7bd657aa611f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:59 np0005535469 podman[309820]: 2025-11-25 16:36:59.352798916 +0000 UTC m=+1.649575773 container create d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 11:36:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.2 MiB/s wr, 35 op/s
Nov 25 11:36:59 np0005535469 nova_compute[254092]: 2025-11-25 16:36:59.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:36:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:36:59 np0005535469 systemd[1]: Started libpod-conmon-d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162.scope.
Nov 25 11:36:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:36:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:36:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 25 11:36:59 np0005535469 podman[309820]: 2025-11-25 16:36:59.800835171 +0000 UTC m=+2.097612038 container init d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:36:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 25 11:36:59 np0005535469 podman[309820]: 2025-11-25 16:36:59.81704746 +0000 UTC m=+2.113824317 container start d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:37:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 25 11:37:00 np0005535469 podman[309801]: 2025-11-25 16:37:00.277576404 +0000 UTC m=+2.631108471 container init ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 11:37:00 np0005535469 podman[309801]: 2025-11-25 16:37:00.284552903 +0000 UTC m=+2.638084960 container start ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:37:00 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : New worker (309850) forked
Nov 25 11:37:00 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : Loading success.
Nov 25 11:37:00 np0005535469 nova_compute[254092]: 2025-11-25 16:37:00.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]: {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:    "0": [
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:        {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "devices": [
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "/dev/loop3"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            ],
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_name": "ceph_lv0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_size": "21470642176",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "name": "ceph_lv0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "tags": {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cluster_name": "ceph",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.crush_device_class": "",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.encrypted": "0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osd_id": "0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.type": "block",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.vdo": "0"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            },
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "type": "block",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "vg_name": "ceph_vg0"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:        }
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:    ],
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:    "1": [
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:        {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "devices": [
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "/dev/loop4"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            ],
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_name": "ceph_lv1",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_size": "21470642176",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "name": "ceph_lv1",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "tags": {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cluster_name": "ceph",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.crush_device_class": "",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.encrypted": "0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osd_id": "1",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.type": "block",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.vdo": "0"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            },
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "type": "block",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "vg_name": "ceph_vg1"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:        }
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:    ],
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:    "2": [
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:        {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "devices": [
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "/dev/loop5"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            ],
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_name": "ceph_lv2",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_size": "21470642176",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "name": "ceph_lv2",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "tags": {
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.cluster_name": "ceph",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.crush_device_class": "",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.encrypted": "0",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osd_id": "2",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.type": "block",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:                "ceph.vdo": "0"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            },
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "type": "block",
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:            "vg_name": "ceph_vg2"
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:        }
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]:    ]
Nov 25 11:37:00 np0005535469 vigilant_gauss[309842]: }
Nov 25 11:37:00 np0005535469 systemd[1]: libpod-d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162.scope: Deactivated successfully.
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.216 254096 DEBUG nova.compute.manager [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.217 254096 DEBUG oslo_concurrency.lockutils [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.218 254096 DEBUG oslo_concurrency.lockutils [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.218 254096 DEBUG oslo_concurrency.lockutils [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.219 254096 DEBUG nova.compute.manager [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.219 254096 WARNING nova.compute.manager [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state active and task_state None.#033[00m
Nov 25 11:37:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 97 op/s
Nov 25 11:37:01 np0005535469 podman[309820]: 2025-11-25 16:37:01.439476377 +0000 UTC m=+3.736253234 container attach d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:37:01 np0005535469 podman[309820]: 2025-11-25 16:37:01.441151221 +0000 UTC m=+3.737928078 container died d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.541 254096 INFO nova.compute.manager [None req-1a9b566b-347a-4ea2-af78-c5c4a00ff337 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Pausing#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.543 254096 DEBUG nova.objects.instance [None req-1a9b566b-347a-4ea2-af78-c5c4a00ff337 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'flavor' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.567 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.567 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.629 254096 DEBUG nova.compute.manager [None req-1a9b566b-347a-4ea2-af78-c5c4a00ff337 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088621.62802, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.632 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.657 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.660 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:01 np0005535469 nova_compute[254092]: 2025-11-25 16:37:01.673 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 11:37:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6-merged.mount: Deactivated successfully.
Nov 25 11:37:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675372627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.207 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:02 np0005535469 podman[309820]: 2025-11-25 16:37:02.28590454 +0000 UTC m=+4.582681447 container remove d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.309 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.309 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.315 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.315 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:37:02 np0005535469 podman[309876]: 2025-11-25 16:37:02.385533642 +0000 UTC m=+0.796678944 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:37:02 np0005535469 systemd[1]: libpod-conmon-d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162.scope: Deactivated successfully.
Nov 25 11:37:02 np0005535469 podman[309875]: 2025-11-25 16:37:02.416605275 +0000 UTC m=+0.827997324 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:37:02 np0005535469 podman[309874]: 2025-11-25 16:37:02.416617495 +0000 UTC m=+0.823268006 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.542 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.544 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3801MB free_disk=59.92188262939453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.622 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e549d4e8-b824-480b-b81a-83e2ea1eff12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.622 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.623 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.623 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.680 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:02 np0005535469 nova_compute[254092]: 2025-11-25 16:37:02.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:03 np0005535469 podman[310121]: 2025-11-25 16:37:02.941131446 +0000 UTC m=+0.024441455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:37:03 np0005535469 podman[310121]: 2025-11-25 16:37:03.040336707 +0000 UTC m=+0.123646696 container create a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:37:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615259839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:03 np0005535469 nova_compute[254092]: 2025-11-25 16:37:03.135 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:03 np0005535469 nova_compute[254092]: 2025-11-25 16:37:03.141 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:37:03 np0005535469 nova_compute[254092]: 2025-11-25 16:37:03.156 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:37:03 np0005535469 nova_compute[254092]: 2025-11-25 16:37:03.219 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:37:03 np0005535469 nova_compute[254092]: 2025-11-25 16:37:03.219 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:03 np0005535469 systemd[1]: Started libpod-conmon-a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f.scope.
Nov 25 11:37:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:37:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 97 op/s
Nov 25 11:37:03 np0005535469 podman[310121]: 2025-11-25 16:37:03.42713041 +0000 UTC m=+0.510440399 container init a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:37:03 np0005535469 podman[310121]: 2025-11-25 16:37:03.437654816 +0000 UTC m=+0.520964815 container start a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:37:03 np0005535469 compassionate_saha[310139]: 167 167
Nov 25 11:37:03 np0005535469 systemd[1]: libpod-a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f.scope: Deactivated successfully.
Nov 25 11:37:03 np0005535469 podman[310121]: 2025-11-25 16:37:03.49421493 +0000 UTC m=+0.577524919 container attach a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:37:03 np0005535469 podman[310121]: 2025-11-25 16:37:03.49566714 +0000 UTC m=+0.578977129 container died a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:37:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-47d967a0046aeff6ac0de221fa83a3106612c74ffe71dc114791a7d0b77a2637-merged.mount: Deactivated successfully.
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.219 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.220 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:04 np0005535469 podman[310121]: 2025-11-25 16:37:04.356069442 +0000 UTC m=+1.439379431 container remove a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:37:04 np0005535469 systemd[1]: libpod-conmon-a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f.scope: Deactivated successfully.
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:04 np0005535469 podman[310164]: 2025-11-25 16:37:04.574367005 +0000 UTC m=+0.037157420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:37:04 np0005535469 podman[310164]: 2025-11-25 16:37:04.698975485 +0000 UTC m=+0.161765880 container create 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.709 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.710 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.710 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.710 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.711 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.712 254096 INFO nova.compute.manager [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Terminating instance#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.713 254096 DEBUG nova.compute.manager [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:37:04 np0005535469 kernel: tap379de8c7-cb (unregistering): left promiscuous mode
Nov 25 11:37:04 np0005535469 NetworkManager[48891]: <info>  [1764088624.7597] device (tap379de8c7-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:37:04 np0005535469 systemd[1]: Started libpod-conmon-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope.
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00433|binding|INFO|Releasing lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b from this chassis (sb_readonly=0)
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00434|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b down in Southbound
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00435|binding|INFO|Removing iface tap379de8c7-cb ovn-installed in OVS
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.776 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.778 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.779 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.780 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[693149d5-978f-4258-933a-d5d8c568d15d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.781 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:37:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:04 np0005535469 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 25 11:37:04 np0005535469 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000031.scope: Consumed 3.711s CPU time.
Nov 25 11:37:04 np0005535469 systemd-machined[216343]: Machine qemu-58-instance-00000031 terminated.
Nov 25 11:37:04 np0005535469 podman[310164]: 2025-11-25 16:37:04.819132675 +0000 UTC m=+0.281923090 container init 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:37:04 np0005535469 podman[310164]: 2025-11-25 16:37:04.829509907 +0000 UTC m=+0.292300302 container start 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 11:37:04 np0005535469 podman[310164]: 2025-11-25 16:37:04.834439149 +0000 UTC m=+0.297229565 container attach 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:37:04 np0005535469 kernel: tap379de8c7-cb: entered promiscuous mode
Nov 25 11:37:04 np0005535469 NetworkManager[48891]: <info>  [1764088624.9368] manager: (tap379de8c7-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Nov 25 11:37:04 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : haproxy version is 2.8.14-c23fe91
Nov 25 11:37:04 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : path to executable is /usr/sbin/haproxy
Nov 25 11:37:04 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [WARNING]  (309848) : Exiting Master process...
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00436|binding|INFO|Claiming lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b for this chassis.
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00437|binding|INFO|379de8c7-cb88-4a89-8008-f7bb1cbfc09b: Claiming fa:16:3e:33:4b:3b 10.100.0.8
Nov 25 11:37:04 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [WARNING]  (309848) : Exiting Master process...
Nov 25 11:37:04 np0005535469 kernel: tap379de8c7-cb (unregistering): left promiscuous mode
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [ALERT]    (309848) : Current worker (309850) exited with code 143 (Terminated)
Nov 25 11:37:04 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [WARNING]  (309848) : All workers exited. Exiting... (0)
Nov 25 11:37:04 np0005535469 systemd[1]: libpod-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope: Deactivated successfully.
Nov 25 11:37:04 np0005535469 conmon[309837]: conmon ceffc45733c669125a2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope/container/memory.events
Nov 25 11:37:04 np0005535469 podman[310210]: 2025-11-25 16:37:04.950127729 +0000 UTC m=+0.060709099 container died ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.956 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00438|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b ovn-installed in OVS
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00439|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b up in Southbound
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.972 254096 INFO nova.virt.libvirt.driver [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance destroyed successfully.#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.972 254096 DEBUG nova.objects.instance [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00440|binding|INFO|Releasing lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b from this chassis (sb_readonly=0)
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00441|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b down in Southbound
Nov 25 11:37:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:04Z|00442|binding|INFO|Removing iface tap379de8c7-cb ovn-installed in OVS
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.991 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.995 254096 DEBUG nova.virt.libvirt.vif [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1926162330',display_name='tempest-DeleteServersTestJSON-server-1926162330',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1926162330',id=49,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-3hu4p32f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:37:01Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=288ada45-c7fc-4ddc-8b83-1c03ffa14fe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.995 254096 DEBUG nova.network.os_vif_util [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.996 254096 DEBUG nova.network.os_vif_util [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.996 254096 DEBUG os_vif [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:37:04 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:04.999 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379de8c7-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.008 254096 INFO os_vif [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb')#033[00m
Nov 25 11:37:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276-userdata-shm.mount: Deactivated successfully.
Nov 25 11:37:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-efe71abb6aef0ccd46cc56760891a8c6424dbecf1dc6c300940a7bd657aa611f-merged.mount: Deactivated successfully.
Nov 25 11:37:05 np0005535469 podman[310210]: 2025-11-25 16:37:05.056248738 +0000 UTC m=+0.166830108 container cleanup ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:37:05 np0005535469 systemd[1]: libpod-conmon-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope: Deactivated successfully.
Nov 25 11:37:05 np0005535469 podman[310264]: 2025-11-25 16:37:05.146965659 +0000 UTC m=+0.065163899 container remove ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.156 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a76a1077-8c8c-471c-8466-cec461424c1f]: (4, ('Tue Nov 25 04:37:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276)\nceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276\nTue Nov 25 04:37:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276)\nceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e95ac38a-88c8-4cd3-96be-54ac7d47300a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.160 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 kernel: tape469a950-70: left promiscuous mode
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.167 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcf7b8a-a6b4-440d-8155-88bfef0e54a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.185 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99c90c19-5978-4efe-8297-135c43a7256e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48b16d07-f6fe-40f2-9fb4-357a6b2be43a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.215 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9ef0bb-f615-49cb-a191-72a6d82b7a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507449, 'reachable_time': 23267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310279, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.217 254096 DEBUG nova.compute.manager [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.220 254096 DEBUG oslo_concurrency.lockutils [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.221 254096 DEBUG oslo_concurrency.lockutils [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:05 np0005535469 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.223 254096 DEBUG oslo_concurrency.lockutils [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.224 254096 DEBUG nova.compute.manager [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.224 254096 DEBUG nova.compute.manager [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.224 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.224 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f920de99-2d81-4da0-9ffd-07bcf9960bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.226 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24046d25-806d-42bf-8ef6-cb1c722e920f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.232 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.234 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc99993-1b96-4cb8-8984-a616b9b79cbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 97 op/s
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.467 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.468 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.469 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.469 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.469 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.471 254096 INFO nova.compute.manager [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Terminating instance#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.472 254096 DEBUG nova.compute.manager [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:05 np0005535469 kernel: tap4ac8455e-46 (unregistering): left promiscuous mode
Nov 25 11:37:05 np0005535469 NetworkManager[48891]: <info>  [1764088625.5430] device (tap4ac8455e-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:05Z|00443|binding|INFO|Releasing lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 from this chassis (sb_readonly=0)
Nov 25 11:37:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:05Z|00444|binding|INFO|Setting lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 down in Southbound
Nov 25 11:37:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:05Z|00445|binding|INFO|Removing iface tap4ac8455e-46 ovn-installed in OVS
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.565 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:7b:cf 10.100.0.6'], port_security=['fa:16:3e:36:7b:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e549d4e8-b824-480b-b81a-83e2ea1eff12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4ac8455e-46f9-4f4e-9acc-43b78589ef10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.567 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4ac8455e-46f9-4f4e-9acc-43b78589ef10 in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.568 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[522d4fce-1e41-4e47-a01e-5ac2022671df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.570 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 25 11:37:05 np0005535469 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002e.scope: Consumed 15.485s CPU time.
Nov 25 11:37:05 np0005535469 systemd-machined[216343]: Machine qemu-55-instance-0000002e terminated.
Nov 25 11:37:05 np0005535469 NetworkManager[48891]: <info>  [1764088625.6993] manager: (tap4ac8455e-46): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.715 254096 INFO nova.virt.libvirt.driver [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance destroyed successfully.#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.715 254096 DEBUG nova.objects.instance [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid e549d4e8-b824-480b-b81a-83e2ea1eff12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.734 254096 INFO nova.virt.libvirt.driver [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deleting instance files /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_del#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.735 254096 INFO nova.virt.libvirt.driver [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deletion of /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_del complete#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.741 254096 DEBUG nova.virt.libvirt.vif [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609049756',display_name='tempest-ImagesTestJSON-server-609049756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609049756',id=46,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-6cjpy2nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:26Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=e549d4e8-b824-480b-b81a-83e2ea1eff12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.742 254096 DEBUG nova.network.os_vif_util [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.743 254096 DEBUG nova.network.os_vif_util [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.744 254096 DEBUG os_vif [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.746 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ac8455e-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.753 254096 INFO os_vif [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46')#033[00m
Nov 25 11:37:05 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [NOTICE]   (307441) : haproxy version is 2.8.14-c23fe91
Nov 25 11:37:05 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [NOTICE]   (307441) : path to executable is /usr/sbin/haproxy
Nov 25 11:37:05 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [WARNING]  (307441) : Exiting Master process...
Nov 25 11:37:05 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [ALERT]    (307441) : Current worker (307443) exited with code 143 (Terminated)
Nov 25 11:37:05 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [WARNING]  (307441) : All workers exited. Exiting... (0)
Nov 25 11:37:05 np0005535469 systemd[1]: libpod-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb.scope: Deactivated successfully.
Nov 25 11:37:05 np0005535469 podman[310304]: 2025-11-25 16:37:05.775417648 +0000 UTC m=+0.093666732 container died e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.794 254096 INFO nova.compute.manager [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.794 254096 DEBUG oslo.service.loopingcall [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.795 254096 DEBUG nova.compute.manager [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.796 254096 DEBUG nova.network.neutron [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:37:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-902d37fe79f927afc8f719a0eb41b52feeeed28907f3b71bd4cc8a4842d1692e-merged.mount: Deactivated successfully.
Nov 25 11:37:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb-userdata-shm.mount: Deactivated successfully.
Nov 25 11:37:05 np0005535469 podman[310304]: 2025-11-25 16:37:05.841242634 +0000 UTC m=+0.159491728 container cleanup e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:37:05 np0005535469 systemd[1]: libpod-conmon-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb.scope: Deactivated successfully.
Nov 25 11:37:05 np0005535469 podman[310372]: 2025-11-25 16:37:05.934583776 +0000 UTC m=+0.062245569 container remove e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.941 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[500e7dd0-9f04-4cd2-96d8-00ac85f9c30b]: (4, ('Tue Nov 25 04:37:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb)\ne8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb\nTue Nov 25 04:37:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb)\ne8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.943 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e045bf8-87f9-4380-9602-1fe8ebf4a2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.946 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 11:37:05 np0005535469 nova_compute[254092]: 2025-11-25 16:37:05.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.972 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9db4ccbf-4f63-46b8-844c-38dd2bdcfd1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf12fe8e-15a8-4722-bbc0-5bd0212e86c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.991 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9a1abe-5d01-402e-9e08-1a02610c7593]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:06.010 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4d49504c-854d-460d-8eee-f82aee417897]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503063, 'reachable_time': 22887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310398, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:06 np0005535469 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 11:37:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:06.014 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:37:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:06.014 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2b88ba-21d3-47f9-84d5-98936aac5158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]: {
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "osd_id": 1,
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "type": "bluestore"
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:    },
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "osd_id": 2,
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "type": "bluestore"
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:    },
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "osd_id": 0,
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:        "type": "bluestore"
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]:    }
Nov 25 11:37:06 np0005535469 jolly_sanderson[310183]: }
Nov 25 11:37:06 np0005535469 systemd[1]: libpod-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope: Deactivated successfully.
Nov 25 11:37:06 np0005535469 systemd[1]: libpod-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope: Consumed 1.217s CPU time.
Nov 25 11:37:06 np0005535469 podman[310403]: 2025-11-25 16:37:06.140436631 +0000 UTC m=+0.031718172 container died 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:37:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96-merged.mount: Deactivated successfully.
Nov 25 11:37:06 np0005535469 podman[310403]: 2025-11-25 16:37:06.212769733 +0000 UTC m=+0.104051264 container remove 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 11:37:06 np0005535469 systemd[1]: libpod-conmon-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope: Deactivated successfully.
Nov 25 11:37:06 np0005535469 nova_compute[254092]: 2025-11-25 16:37:06.248 254096 INFO nova.virt.libvirt.driver [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deleting instance files /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12_del#033[00m
Nov 25 11:37:06 np0005535469 nova_compute[254092]: 2025-11-25 16:37:06.249 254096 INFO nova.virt.libvirt.driver [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deletion of /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12_del complete#033[00m
Nov 25 11:37:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:37:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:37:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:37:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:37:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b601c7f1-6f70-447f-87bd-377417b418a8 does not exist
Nov 25 11:37:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 64bc2ae8-e1d1-4124-91f3-b29a1526900e does not exist
Nov 25 11:37:06 np0005535469 nova_compute[254092]: 2025-11-25 16:37:06.306 254096 INFO nova.compute.manager [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:37:06 np0005535469 nova_compute[254092]: 2025-11-25 16:37:06.307 254096 DEBUG oslo.service.loopingcall [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:37:06 np0005535469 nova_compute[254092]: 2025-11-25 16:37:06.307 254096 DEBUG nova.compute.manager [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:37:06 np0005535469 nova_compute[254092]: 2025-11-25 16:37:06.308 254096 DEBUG nova.network.neutron [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:37:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:37:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.087 254096 DEBUG nova.network.neutron [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.111 254096 INFO nova.compute.manager [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 1.32 seconds to deallocate network for instance.#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.163 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.163 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.244 254096 DEBUG oslo_concurrency.processutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 121 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 KiB/s wr, 144 op/s
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.507 254096 DEBUG nova.network.neutron [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.571 254096 INFO nova.compute.manager [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 1.26 seconds to deallocate network for instance.#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.685 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843938009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.719 254096 DEBUG oslo_concurrency.processutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.726 254096 DEBUG nova.compute.provider_tree [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.745 254096 DEBUG nova.scheduler.client.report [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.994 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:07 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:07.999 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.061 254096 DEBUG oslo_concurrency.processutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.265 254096 INFO nova.scheduler.client.report [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.464 254096 DEBUG nova.compute.manager [req-2c2e33f6-dadc-45ca-b6e1-9525849205ea req-8bd12274-dbd9-41a3-bd67-414308fe1741 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-deleted-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.526 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.600 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.600 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.602 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.602 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.602 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-unplugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] No waiting events found dispatching network-vif-unplugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received unexpected event network-vif-unplugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] No waiting events found dispatching network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received unexpected event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 for instance with vm_state deleted and task_state None.
Nov 25 11:37:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493393454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.700 254096 DEBUG oslo_concurrency.processutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.710 254096 DEBUG nova.compute.provider_tree [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.727 254096 DEBUG nova.scheduler.client.report [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.756 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.809 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.810 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.810 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.810 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e549d4e8-b824-480b-b81a-83e2ea1eff12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.819 254096 INFO nova.scheduler.client.report [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance e549d4e8-b824-480b-b81a-83e2ea1eff12
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.884 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.885 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.941 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:37:08 np0005535469 nova_compute[254092]: 2025-11-25 16:37:08.952 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.067 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.068 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.076 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.077 254096 INFO nova.compute.claims [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.251 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.285 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:37:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 121 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 KiB/s wr, 144 op/s
Nov 25 11:37:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 25 11:37:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473220336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.824 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.832 254096 DEBUG nova.compute.provider_tree [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:37:09 np0005535469 nova_compute[254092]: 2025-11-25 16:37:09.861 254096 DEBUG nova.scheduler.client.report [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:37:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 25 11:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.108 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.109 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.195 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.197 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.224 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.241 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.304 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.305 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.347 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.353 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.355 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.355 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Creating image(s)
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.390 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.419 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.444 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.448 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.497 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.508 254096 DEBUG nova.policy [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec49417447ad4a98b1f890ed78fd5b41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8b9be1565d148a3ac487eacb391dc1f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.537 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.538 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.541 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.543 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.544 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.545 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.565 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.573 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.600 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.601 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.606 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.606 254096 INFO nova.compute.claims [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:37:10 np0005535469 nova_compute[254092]: 2025-11-25 16:37:10.813 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.112 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.166 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] resizing rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.217 254096 DEBUG nova.compute.manager [req-14646ea2-bd22-449f-9487-0b600ca20803 req-f02fbc28-66cf-4798-8fe6-388114ab24ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-deleted-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:37:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1902254106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.333 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.339 254096 DEBUG nova.compute.provider_tree [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.360 254096 DEBUG nova.scheduler.client.report [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.397 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.398 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.407 254096 DEBUG nova.objects.instance [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'migration_context' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.423 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.424 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Ensure instance console log exists: /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.425 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.425 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.425 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 53 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 634 KiB/s wr, 87 op/s
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.458 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.459 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.479 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.496 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.601 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.602 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.603 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Creating image(s)#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.630 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.655 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.683 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.687 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.759 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.760 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.760 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.761 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.787 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.791 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.822 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Successfully created port: 1693689c-371b-40fb-8153-5313e926d910 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:37:11 np0005535469 nova_compute[254092]: 2025-11-25 16:37:11.846 254096 DEBUG nova.policy [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:37:12 np0005535469 nova_compute[254092]: 2025-11-25 16:37:12.920 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:12 np0005535469 nova_compute[254092]: 2025-11-25 16:37:12.981 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] resizing rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.103 254096 DEBUG nova.objects.instance [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 8ed9342e-e179-467c-993f-a92f2f7b0dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.118 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.119 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Ensure instance console log exists: /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.119 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.119 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:13 np0005535469 nova_compute[254092]: 2025-11-25 16:37:13.120 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 53 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 634 KiB/s wr, 87 op/s
Nov 25 11:37:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:13.614 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.485 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.485 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.505 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.596 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.597 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.604 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.605 254096 INFO nova.compute.claims [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.608 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.775 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.808 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Successfully created port: 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:37:14 np0005535469 nova_compute[254092]: 2025-11-25 16:37:14.978 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Successfully updated port: 1693689c-371b-40fb-8153-5313e926d910 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.008 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.009 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.009 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.167 254096 DEBUG nova.compute.manager [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.168 254096 DEBUG nova.compute.manager [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.169 254096 DEBUG oslo_concurrency.lockutils [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:37:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485257019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.210 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.224 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.232 254096 DEBUG nova.compute.provider_tree [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.251 254096 DEBUG nova.scheduler.client.report [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.284 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.285 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.330 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.331 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.349 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.375 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:37:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 90 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.496 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.498 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.498 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Creating image(s)#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.526 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.552 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.579 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.583 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.668 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.669 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.670 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.670 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.693 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.697 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:15 np0005535469 nova_compute[254092]: 2025-11-25 16:37:15.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.081 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.150 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.187 254096 DEBUG nova.policy [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.254 254096 DEBUG nova.objects.instance [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.270 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.271 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Ensure instance console log exists: /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.271 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.271 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.272 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.425 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Successfully updated port: 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.446 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.446 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.446 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:37:16 np0005535469 nova_compute[254092]: 2025-11-25 16:37:16.768 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.091 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Successfully created port: d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.318 254096 DEBUG nova.compute.manager [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-changed-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.318 254096 DEBUG nova.compute.manager [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Refreshing instance network info cache due to event network-changed-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.318 254096 DEBUG oslo_concurrency.lockutils [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 135 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.554 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.578 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.579 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance network_info: |[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.579 254096 DEBUG oslo_concurrency.lockutils [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.580 254096 DEBUG nova.network.neutron [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.583 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start _get_guest_xml network_info=[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.589 254096 WARNING nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.594 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.595 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.602 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.603 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.603 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.604 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.604 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.604 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:37:17 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.610 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:17.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:37:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201025939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.113 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.207 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.212 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.609 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updating instance_info_cache with network_info: [{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.634 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.635 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance network_info: |[{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.636 254096 DEBUG oslo_concurrency.lockutils [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.636 254096 DEBUG nova.network.neutron [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Refreshing network info cache for port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.640 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start _get_guest_xml network_info=[{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.647 254096 WARNING nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.655 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.656 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.661 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.662 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.663 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.663 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.665 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.665 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.665 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.666 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.666 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.666 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.670 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:37:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732494780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.716 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.718 254096 DEBUG nova.virt.libvirt.vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2097469038',display_name='tempest-AttachInterfacesUnderV243Test-server-2097469038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2097469038',id=50,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK35kG1vwXJXdI4r/EHWJZkHbZ92CrcmDm6T8HHIBEabt8dsD4hwgL2ByxTJp0aD3PDPswuWtqGhIZZ1n6EYekgLgLZqS6KsMbAxaY/ldKY87IH4bSdNTYm2tWLgSZE5MA==',key_name='tempest-keypair-935791717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8b9be1565d148a3ac487eacb391dc1f',ramdisk_id='',reservation_id='r-nejw6g9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1206829083',owner_user_name='tempest-AttachInterfacesUnderV243Test-1206829083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec49417447ad4a98b1f890ed78fd5b41',uuid=d3356685-91bc-46b9-9b9f-87ffce31a4ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.719 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converting VIF {"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.720 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.721 254096 DEBUG nova.objects.instance [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'pci_devices' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.738 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <uuid>d3356685-91bc-46b9-9b9f-87ffce31a4ab</uuid>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <name>instance-00000032</name>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-2097469038</nova:name>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:37:17</nova:creationTime>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:user uuid="ec49417447ad4a98b1f890ed78fd5b41">tempest-AttachInterfacesUnderV243Test-1206829083-project-member</nova:user>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:project uuid="c8b9be1565d148a3ac487eacb391dc1f">tempest-AttachInterfacesUnderV243Test-1206829083</nova:project>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <nova:port uuid="1693689c-371b-40fb-8153-5313e926d910">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <entry name="serial">d3356685-91bc-46b9-9b9f-87ffce31a4ab</entry>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <entry name="uuid">d3356685-91bc-46b9-9b9f-87ffce31a4ab</entry>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fc:c2:fe"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <target dev="tap1693689c-37"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/console.log" append="off"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:37:18 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:37:18 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:37:18 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:37:18 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.740 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Preparing to wait for external event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.740 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.741 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.741 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.742 254096 DEBUG nova.virt.libvirt.vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2097469038',display_name='tempest-AttachInterfacesUnderV243Test-server-2097469038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2097469038',id=50,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK35kG1vwXJXdI4r/EHWJZkHbZ92CrcmDm6T8HHIBEabt8dsD4hwgL2ByxTJp0aD3PDPswuWtqGhIZZ1n6EYekgLgLZqS6KsMbAxaY/ldKY87IH4bSdNTYm2tWLgSZE5MA==',key_name='tempest-keypair-935791717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8b9be1565d148a3ac487eacb391dc1f',ramdisk_id='',reservation_id='r-nejw6g9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1206829083',owner_user_name='tempest-AttachInterfacesUnderV243Test-1206829083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec49417447ad4a98b1f890ed78fd5b41',uuid=d3356685-91bc-46b9-9b9f-87ffce31a4ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.742 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converting VIF {"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.743 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.743 254096 DEBUG os_vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.744 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.745 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.749 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1693689c-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.750 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1693689c-37, col_values=(('external_ids', {'iface-id': '1693689c-371b-40fb-8153-5313e926d910', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:c2:fe', 'vm-uuid': 'd3356685-91bc-46b9-9b9f-87ffce31a4ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:18 np0005535469 NetworkManager[48891]: <info>  [1764088638.7532] manager: (tap1693689c-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.763 254096 INFO os_vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37')#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.829 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.830 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.830 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] No VIF found with MAC fa:16:3e:fc:c2:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.830 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Using config drive#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.863 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.871 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Successfully updated port: d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.954 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.954 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.955 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.976 254096 DEBUG nova.compute.manager [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.977 254096 DEBUG nova.compute.manager [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing instance network info cache due to event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:37:18 np0005535469 nova_compute[254092]: 2025-11-25 16:37:18.977 254096 DEBUG oslo_concurrency.lockutils [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.191 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:37:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:37:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592362619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.333 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.362 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.368 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 135 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Nov 25 11:37:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:37:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751106983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.890 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.892 254096 DEBUG nova.virt.libvirt.vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1535795979',display_name='tempest-ImagesTestJSON-server-1535795979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1535795979',id=51,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-no9ayfye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:11Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=8ed9342e-e179-467c-993f-a92f2f7b0dff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.892 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.893 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.894 254096 DEBUG nova.objects.instance [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ed9342e-e179-467c-993f-a92f2f7b0dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.923 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <uuid>8ed9342e-e179-467c-993f-a92f2f7b0dff</uuid>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <name>instance-00000033</name>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesTestJSON-server-1535795979</nova:name>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:37:18</nova:creationTime>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <nova:port uuid="27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <entry name="serial">8ed9342e-e179-467c-993f-a92f2f7b0dff</entry>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <entry name="uuid">8ed9342e-e179-467c-993f-a92f2f7b0dff</entry>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8ed9342e-e179-467c-993f-a92f2f7b0dff_disk">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fc:e5:5c"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <target dev="tap27bf7a08-6d"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/console.log" append="off"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:37:19 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:37:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:37:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:37:19 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.925 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Preparing to wait for external event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.926 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.926 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.926 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.927 254096 DEBUG nova.virt.libvirt.vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1535795979',display_name='tempest-ImagesTestJSON-server-1535795979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1535795979',id=51,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-no9ayfye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-membe
r'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:11Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=8ed9342e-e179-467c-993f-a92f2f7b0dff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.927 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.928 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.929 254096 DEBUG os_vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.930 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.930 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.933 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27bf7a08-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.934 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27bf7a08-6d, col_values=(('external_ids', {'iface-id': '27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:e5:5c', 'vm-uuid': '8ed9342e-e179-467c-993f-a92f2f7b0dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:19 np0005535469 NetworkManager[48891]: <info>  [1764088639.9364] manager: (tap27bf7a08-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.945 254096 INFO os_vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d')#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.964 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088624.9616897, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.965 254096 INFO nova.compute.manager [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:37:19 np0005535469 nova_compute[254092]: 2025-11-25 16:37:19.982 254096 DEBUG nova.compute.manager [None req-0d9e5fd0-4f5a-47b3-bda5-7b34cbe34c5c - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.048 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.049 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.049 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:fc:e5:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.049 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Using config drive#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.073 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.446 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Creating config drive at /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.455 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_avtw57 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.600 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_avtw57" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.639 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.645 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.712 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088625.7113466, e549d4e8-b824-480b-b81a-83e2ea1eff12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.713 254096 INFO nova.compute.manager [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.736 254096 DEBUG nova.compute.manager [None req-89873492-4360-48dc-919c-235fc11aa7ad - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.739 254096 DEBUG nova.network.neutron [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated VIF entry in instance network info cache for port 1693689c-371b-40fb-8153-5313e926d910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.740 254096 DEBUG nova.network.neutron [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.755 254096 DEBUG oslo_concurrency.lockutils [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.805 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Creating config drive at /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.810 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rmxs87b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.959 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rmxs87b" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.980 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:20 np0005535469 nova_compute[254092]: 2025-11-25 16:37:20.983 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.414 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 5.5 MiB/s wr, 113 op/s
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.480 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.481 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance network_info: |[{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.482 254096 DEBUG oslo_concurrency.lockutils [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.482 254096 DEBUG nova.network.neutron [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.485 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start _get_guest_xml network_info=[{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.491 254096 WARNING nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.496 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.496 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.504 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.504 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.505 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.505 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.508 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.508 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.511 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.836 254096 DEBUG nova.network.neutron [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updated VIF entry in instance network info cache for port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.838 254096 DEBUG nova.network.neutron [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updating instance_info_cache with network_info: [{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.866 254096 DEBUG oslo_concurrency.lockutils [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.868 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.870 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deleting local config drive /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config because it was imported into RBD.#033[00m
Nov 25 11:37:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:37:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4052910765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:37:21 np0005535469 kernel: tap1693689c-37: entered promiscuous mode
Nov 25 11:37:21 np0005535469 NetworkManager[48891]: <info>  [1764088641.9460] manager: (tap1693689c-37): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Nov 25 11:37:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:21Z|00446|binding|INFO|Claiming lport 1693689c-371b-40fb-8153-5313e926d910 for this chassis.
Nov 25 11:37:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:21Z|00447|binding|INFO|1693689c-371b-40fb-8153-5313e926d910: Claiming fa:16:3e:fc:c2:fe 10.100.0.14
Nov 25 11:37:21 np0005535469 nova_compute[254092]: 2025-11-25 16:37:21.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:21.996 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:c2:fe 10.100.0.14'], port_security=['fa:16:3e:fc:c2:fe 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd3356685-91bc-46b9-9b9f-87ffce31a4ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8b9be1565d148a3ac487eacb391dc1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f328e556-e196-4e21-8b60-04c34108b4ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8c5f31d-3e5c-4add-b1b6-dfe24828d28e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1693689c-371b-40fb-8153-5313e926d910) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:21.997 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1693689c-371b-40fb-8153-5313e926d910 in datapath 0c61a44f-bcff-4141-9691-0b0cd16e5793 bound to our chassis#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:21.998 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c61a44f-bcff-4141-9691-0b0cd16e5793#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.006 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edab9812-4462-4201-bcc7-7d3efae8c2b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.015 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0c61a44f-b1 in ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.018 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0c61a44f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[faf89c31-2482-445e-b91c-48afa9ba548d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 systemd-udevd[311370]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[61036d8f-a30a-498d-8051-5816b15d2c56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 systemd-machined[216343]: New machine qemu-59-instance-00000032.
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.032 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3984a28d-4653-4a98-91d9-5584204acd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.0368] device (tap1693689c-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.0377] device (tap1693689c-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.040 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:22 np0005535469 systemd[1]: Started Virtual Machine qemu-59-instance-00000032.
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.044 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.059 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b35dd095-7e84-4426-8aa9-ed2e1f0f77ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00448|binding|INFO|Setting lport 1693689c-371b-40fb-8153-5313e926d910 ovn-installed in OVS
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00449|binding|INFO|Setting lport 1693689c-371b-40fb-8153-5313e926d910 up in Southbound
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.078 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.095 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1993ea-90a7-420f-81ce-a0a5e9297c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.101 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6fad8c-a2a6-45a4-ad3c-ab53941ac990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.1022] manager: (tap0c61a44f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.150 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a5d3f9-13cf-4b60-8b05-cb316dd26450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.154 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7eabc477-2d1f-4188-9798-65a7dd9f952e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.1770] device (tap0c61a44f-b0): carrier: link connected
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.183 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8111d2fe-66dc-45ed-a409-7016bb2e8704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.189 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.189 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deleting local config drive /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config because it was imported into RBD.#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af5a09f1-ce4d-46a7-9f3f-7eefbfb9bc65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c61a44f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:38:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509976, 'reachable_time': 16458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311423, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.222 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b055186-d306-4e26-8522-2a1f8a8cb07f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:385d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509976, 'tstamp': 509976}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311441, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.240 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[350fbc43-3961-4671-b5be-aa232fc0aaf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c61a44f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:38:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509976, 'reachable_time': 16458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311444, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.2535] manager: (tap27bf7a08-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Nov 25 11:37:22 np0005535469 kernel: tap27bf7a08-6d: entered promiscuous mode
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00450|binding|INFO|Claiming lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for this chassis.
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00451|binding|INFO|27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc: Claiming fa:16:3e:fc:e5:5c 10.100.0.13
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.2664] device (tap27bf7a08-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.2675] device (tap27bf7a08-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00452|binding|INFO|Setting lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc ovn-installed in OVS
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.273 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00453|binding|INFO|Setting lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc up in Southbound
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.288 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e5:5c 10.100.0.13'], port_security=['fa:16:3e:fc:e5:5c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8ed9342e-e179-467c-993f-a92f2f7b0dff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.287 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8b7de0-704d-480f-9673-c4b14b306a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:37:22 np0005535469 systemd-machined[216343]: New machine qemu-60-instance-00000033.
Nov 25 11:37:22 np0005535469 systemd[1]: Started Virtual Machine qemu-60-instance-00000033.
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.361 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[772eb7f6-9ee6-422b-9945-1b25cfd1ea17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.364 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c61a44f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.364 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.365 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c61a44f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.3678] manager: (tap0c61a44f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 25 11:37:22 np0005535469 kernel: tap0c61a44f-b0: entered promiscuous mode
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.374 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c61a44f-b0, col_values=(('external_ids', {'iface-id': 'eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:37:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:22Z|00454|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=1)
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.379 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c61a44f-bcff-4141-9691-0b0cd16e5793.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c61a44f-bcff-4141-9691-0b0cd16e5793.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1224f08-9fef-4c52-8ecc-2e879b530722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.382 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-0c61a44f-bcff-4141-9691-0b0cd16e5793
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/0c61a44f-bcff-4141-9691-0b0cd16e5793.pid.haproxy
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 0c61a44f-bcff-4141-9691-0b0cd16e5793
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:37:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.384 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'env', 'PROCESS_TAG=haproxy-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c61a44f-bcff-4141-9691-0b0cd16e5793.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:37:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:37:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352025268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.549 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.552 254096 DEBUG nova.virt.libvirt.vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1046040473',display_name='tempest-DeleteServersTestJSON-server-1046040473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1046040473',id=52,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ia105ljl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSO
N-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:15Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=497caf1f-53fe-425d-8e5c-10b2f0a2506d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.553 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.554 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.556 254096 DEBUG nova.objects.instance [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.572 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <uuid>497caf1f-53fe-425d-8e5c-10b2f0a2506d</uuid>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <name>instance-00000034</name>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersTestJSON-server-1046040473</nova:name>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:37:21</nova:creationTime>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <nova:port uuid="d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <entry name="serial">497caf1f-53fe-425d-8e5c-10b2f0a2506d</entry>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <entry name="uuid">497caf1f-53fe-425d-8e5c-10b2f0a2506d</entry>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b0:76:f8"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <target dev="tapd90f4f5a-3c"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/console.log" append="off"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:37:22 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:37:22 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:37:22 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:37:22 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.579 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Preparing to wait for external event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.579 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.580 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.580 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.581 254096 DEBUG nova.virt.libvirt.vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1046040473',display_name='tempest-DeleteServersTestJSON-server-1046040473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1046040473',id=52,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ia105ljl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServ
ersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:15Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=497caf1f-53fe-425d-8e5c-10b2f0a2506d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.581 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.582 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.583 254096 DEBUG os_vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.584 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.584 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.588 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd90f4f5a-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.588 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd90f4f5a-3c, col_values=(('external_ids', {'iface-id': 'd90f4f5a-3cd7-4c5d-bf11-0e669fb736ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:76:f8', 'vm-uuid': '497caf1f-53fe-425d-8e5c-10b2f0a2506d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 NetworkManager[48891]: <info>  [1764088642.5914] manager: (tapd90f4f5a-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.599 254096 INFO os_vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c')#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.715 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088642.7149215, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.716 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Started (Lifecycle Event)#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.743 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.748 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088642.7169144, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.748 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.757 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.757 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.758 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:b0:76:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.758 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Using config drive#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.810 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.826 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.832 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:22 np0005535469 podman[311556]: 2025-11-25 16:37:22.763412903 +0000 UTC m=+0.032316087 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:37:22 np0005535469 nova_compute[254092]: 2025-11-25 16:37:22.859 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.092 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088643.0918915, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.092 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Started (Lifecycle Event)#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.119 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.125 254096 DEBUG nova.compute.manager [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.126 254096 DEBUG oslo_concurrency.lockutils [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.126 254096 DEBUG oslo_concurrency.lockutils [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.127 254096 DEBUG oslo_concurrency.lockutils [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.127 254096 DEBUG nova.compute.manager [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Processing event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.128 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.133 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.134 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088643.0927892, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.134 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.139 254096 INFO nova.virt.libvirt.driver [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance spawned successfully.#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.139 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:37:23 np0005535469 podman[311556]: 2025-11-25 16:37:23.154123502 +0000 UTC m=+0.423026666 container create f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.156 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.163 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.164 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.165 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.165 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.166 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.166 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.171 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.203 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088643.132422, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.227 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:23 np0005535469 systemd[1]: Started libpod-conmon-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440.scope.
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.252 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:37:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:37:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8389ed7fca4d6f0aa03377187ba2932fde612ac6153d8db17d6a663aaf209d6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.309 254096 INFO nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 12.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.311 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:23 np0005535469 podman[311556]: 2025-11-25 16:37:23.403513138 +0000 UTC m=+0.672416322 container init f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:37:23 np0005535469 podman[311556]: 2025-11-25 16:37:23.40910703 +0000 UTC m=+0.678010194 container start f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:37:23 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : New worker (311621) forked
Nov 25 11:37:23 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : Loading success.
Nov 25 11:37:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.8 MiB/s wr, 80 op/s
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.474 254096 INFO nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 14.44 seconds to build instance.#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.539 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.542 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[446d1518-9602-4a47-a8a9-1e4c27aefad1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.554 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0816ae24-21 in ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.556 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0816ae24-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[753ed2c5-ae78-489e-9855-3ebd384764ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.557 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa141f45-6dfe-4f76-b386-2c64e1366428]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.570 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bb227fc1-1065-42d4-a686-6694c2c6b2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.584 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e85e3fc-c3fe-4fd0-93a2-31022d4669c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.610 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.614 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13c12584-224b-4ef6-94a3-a309aba64498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 NetworkManager[48891]: <info>  [1764088643.6216] manager: (tap0816ae24-20): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8b274a-ed97-4cb9-a79d-2cbf2d769fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.656 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c5609533-791b-4fec-823e-79fbdc8e6372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.659 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b31990-d168-4903-a84c-1bf064d0eee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 NetworkManager[48891]: <info>  [1764088643.6827] device (tap0816ae24-20): carrier: link connected
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.688 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a5107ba1-eadc-4f9c-bcf5-da40d5ffb499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cc8ee5-9f60-4cb0-9f5f-4219d8cfba73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510127, 'reachable_time': 24726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311643, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.723 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc555e1-19e9-43b4-976e-95b9a055a347]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:524c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510127, 'tstamp': 510127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311644, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8794fc7a-bfd3-4a6e-a919-b34914b29599]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510127, 'reachable_time': 24726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311645, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.774 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1099388a-8e4b-4708-9708-09a7224ff354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.836 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3fbd25-824e-4ad7-bc3d-693f32e028ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.839 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.839 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.840 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:23 np0005535469 kernel: tap0816ae24-20: entered promiscuous mode
Nov 25 11:37:23 np0005535469 NetworkManager[48891]: <info>  [1764088643.8730] manager: (tap0816ae24-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.871 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:23Z|00455|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.907 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.908 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9bebb218-065d-4850-b6f2-4d5bc777db50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.909 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:37:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.911 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'env', 'PROCESS_TAG=haproxy-0816ae24-275c-455e-a549-929f4eb756e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0816ae24-275c-455e-a549-929f4eb756e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.931 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Creating config drive at /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config#033[00m
Nov 25 11:37:23 np0005535469 nova_compute[254092]: 2025-11-25 16:37:23.936 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22yhnsr3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:24 np0005535469 nova_compute[254092]: 2025-11-25 16:37:24.081 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22yhnsr3" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:24 np0005535469 nova_compute[254092]: 2025-11-25 16:37:24.103 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:37:24 np0005535469 nova_compute[254092]: 2025-11-25 16:37:24.114 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:37:24 np0005535469 podman[311715]: 2025-11-25 16:37:24.28122803 +0000 UTC m=+0.022400659 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:37:24 np0005535469 nova_compute[254092]: 2025-11-25 16:37:24.474 254096 DEBUG nova.network.neutron [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updated VIF entry in instance network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:37:24 np0005535469 nova_compute[254092]: 2025-11-25 16:37:24.475 254096 DEBUG nova.network.neutron [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:24 np0005535469 nova_compute[254092]: 2025-11-25 16:37:24.489 254096 DEBUG oslo_concurrency.lockutils [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:24 np0005535469 podman[311715]: 2025-11-25 16:37:24.910233965 +0000 UTC m=+0.651406574 container create 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 11:37:24 np0005535469 systemd[1]: Started libpod-conmon-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1.scope.
Nov 25 11:37:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:37:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0b7ea57d4fc30664551c01fa106099d478380f35527ead310f6fbe933d4f2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:25 np0005535469 podman[311715]: 2025-11-25 16:37:25.041027903 +0000 UTC m=+0.782200542 container init 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 11:37:25 np0005535469 podman[311715]: 2025-11-25 16:37:25.047577381 +0000 UTC m=+0.788749990 container start 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:37:25 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : New worker (311740) forked
Nov 25 11:37:25 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : Loading success.
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.208 254096 DEBUG nova.compute.manager [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.208 254096 DEBUG oslo_concurrency.lockutils [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 DEBUG oslo_concurrency.lockutils [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 DEBUG oslo_concurrency.lockutils [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 DEBUG nova.compute.manager [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] No waiting events found dispatching network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 WARNING nova.compute.manager [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received unexpected event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.423 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.424 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Deleting local config drive /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config because it was imported into RBD.#033[00m
Nov 25 11:37:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 4.8 MiB/s wr, 101 op/s
Nov 25 11:37:25 np0005535469 kernel: tapd90f4f5a-3c: entered promiscuous mode
Nov 25 11:37:25 np0005535469 NetworkManager[48891]: <info>  [1764088645.4997] manager: (tapd90f4f5a-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Nov 25 11:37:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:25Z|00456|binding|INFO|Claiming lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for this chassis.
Nov 25 11:37:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:25Z|00457|binding|INFO|d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca: Claiming fa:16:3e:b0:76:f8 10.100.0.10
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:25Z|00458|binding|INFO|Setting lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca ovn-installed in OVS
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 systemd-udevd[311762]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:37:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:25Z|00459|binding|INFO|Setting lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca up in Southbound
Nov 25 11:37:25 np0005535469 systemd-machined[216343]: New machine qemu-61-instance-00000034.
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.562 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:76:f8 10.100.0.10'], port_security=['fa:16:3e:b0:76:f8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '497caf1f-53fe-425d-8e5c-10b2f0a2506d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.564 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.565 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa#033[00m
Nov 25 11:37:25 np0005535469 systemd[1]: Started Virtual Machine qemu-61-instance-00000034.
Nov 25 11:37:25 np0005535469 NetworkManager[48891]: <info>  [1764088645.5778] device (tapd90f4f5a-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:37:25 np0005535469 NetworkManager[48891]: <info>  [1764088645.5799] device (tapd90f4f5a-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64bf9c1c-425e-4dab-bcbd-565a8e39a6a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.584 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.586 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c251e9a-eadf-4b03-b02c-34043ac9251f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8de04c3-ca24-444c-b60f-20546bb649a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.606 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8a484232-7a66-4c13-bfb4-72bea70da16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79ae4e10-f739-45be-95b1-b3ccb3f6d245]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.685 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb224a6-c42c-481c-a794-70121e6fe2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 NetworkManager[48891]: <info>  [1764088645.6969] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.698 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e95c51ba-775c-48b7-a2f9-1260cd58d846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.737 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6de3f445-27e5-4646-bd06-3d0250395d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.741 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a34a1376-b08f-47bd-9300-a9126001f190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 NetworkManager[48891]: <info>  [1764088645.7656] device (tape469a950-70): carrier: link connected
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.772 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[80950a39-7142-4bf0-9932-61e445f45b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d244d9ff-c3c1-4e4d-9eee-04733f71de4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510335, 'reachable_time': 31348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311796, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[992662e5-9472-46ed-a568-206f39ff3c89]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510335, 'tstamp': 510335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311797, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2da4462-fd3b-4420-bfea-4afeabeb89b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510335, 'reachable_time': 31348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311798, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.864 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[177f6cf3-6854-4ebf-bd1b-bcb641c62646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.954 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b51d92a-903e-402a-87d1-6726d3922d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.957 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.958 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 NetworkManager[48891]: <info>  [1764088645.9620] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Nov 25 11:37:25 np0005535469 kernel: tape469a950-70: entered promiscuous mode
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.967 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:25Z|00460|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.972 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.973 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3357b2-aa20-41d3-9cb2-1bed5709e9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.974 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:37:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.975 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:37:25 np0005535469 nova_compute[254092]: 2025-11-25 16:37:25.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.047 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088646.0470612, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.047 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Started (Lifecycle Event)#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.086 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088646.0478446, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.086 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.113 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.118 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.143 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:37:26 np0005535469 podman[311872]: 2025-11-25 16:37:26.394104591 +0000 UTC m=+0.058662621 container create 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:37:26 np0005535469 podman[311872]: 2025-11-25 16:37:26.359863542 +0000 UTC m=+0.024421592 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:37:26 np0005535469 systemd[1]: Started libpod-conmon-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8.scope.
Nov 25 11:37:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:37:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af3758759c7473d6d0c9e6e56bd70b29b7aef6fe03951135d48e66ecc34c2f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:37:26 np0005535469 podman[311872]: 2025-11-25 16:37:26.578822412 +0000 UTC m=+0.243380462 container init 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:37:26 np0005535469 podman[311872]: 2025-11-25 16:37:26.58647486 +0000 UTC m=+0.251032890 container start 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:37:26 np0005535469 NetworkManager[48891]: <info>  [1764088646.5913] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 25 11:37:26 np0005535469 NetworkManager[48891]: <info>  [1764088646.5929] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:26 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : New worker (311892) forked
Nov 25 11:37:26 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : Loading success.
Nov 25 11:37:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:26Z|00461|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=0)
Nov 25 11:37:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:26Z|00462|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 11:37:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:26Z|00463|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:26Z|00464|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=0)
Nov 25 11:37:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:26Z|00465|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 11:37:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:26Z|00466|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:37:26 np0005535469 nova_compute[254092]: 2025-11-25 16:37:26.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.378 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.379 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.379 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.379 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.380 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Processing event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.380 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.380 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.381 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.381 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.381 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] No waiting events found dispatching network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 WARNING nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received unexpected event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.383 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.383 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Processing event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.383 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG nova.network.neutron [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.385 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.386 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088647.3952847, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.396 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.398 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.418 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.419 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.424 254096 INFO nova.virt.libvirt.driver [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance spawned successfully.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.425 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.426 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance spawned successfully.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.427 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.428 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.451 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.452 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088647.3954356, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.452 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.455 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.456 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.456 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.457 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.457 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.458 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.463 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.463 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.464 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.465 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.465 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.465 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:37:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:27.496 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:27.497 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:37:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:27.498 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.507 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.511 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.526 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.691 254096 INFO nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 16.09 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.692 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.698 254096 INFO nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Took 12.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.699 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.795 254096 INFO nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 17.29 seconds to build instance.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.799 254096 INFO nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Took 13.23 seconds to build instance.#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.827 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:27 np0005535469 nova_compute[254092]: 2025-11-25 16:37:27.830 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:28 np0005535469 nova_compute[254092]: 2025-11-25 16:37:28.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 25 11:37:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.730 254096 DEBUG nova.network.neutron [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated VIF entry in instance network info cache for port 1693689c-371b-40fb-8153-5313e926d910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.730 254096 DEBUG nova.network.neutron [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.805 254096 DEBUG nova.compute.manager [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.805 254096 DEBUG oslo_concurrency.lockutils [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.806 254096 DEBUG oslo_concurrency.lockutils [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.806 254096 DEBUG oslo_concurrency.lockutils [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.807 254096 DEBUG nova.compute.manager [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] No waiting events found dispatching network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.807 254096 WARNING nova.compute.manager [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received unexpected event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for instance with vm_state active and task_state None.#033[00m
Nov 25 11:37:30 np0005535469 nova_compute[254092]: 2025-11-25 16:37:30.862 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.360 254096 DEBUG nova.compute.manager [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.421 254096 INFO nova.compute.manager [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] instance snapshotting#033[00m
Nov 25 11:37:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.7 MiB/s wr, 247 op/s
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.692 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.692 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.692 254096 INFO nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Shelving#033[00m
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.716 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:37:31 np0005535469 nova_compute[254092]: 2025-11-25 16:37:31.819 254096 INFO nova.virt.libvirt.driver [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Beginning live snapshot process#033[00m
Nov 25 11:37:32 np0005535469 nova_compute[254092]: 2025-11-25 16:37:32.129 254096 DEBUG nova.virt.libvirt.imagebackend [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:37:32 np0005535469 nova_compute[254092]: 2025-11-25 16:37:32.452 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(a48b4200ef294c0cb47c992fa8f0f5ce) on rbd image(8ed9342e-e179-467c-993f-a92f2f7b0dff_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:37:32 np0005535469 nova_compute[254092]: 2025-11-25 16:37:32.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:32 np0005535469 podman[311952]: 2025-11-25 16:37:32.678950246 +0000 UTC m=+0.089993582 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 25 11:37:32 np0005535469 podman[311953]: 2025-11-25 16:37:32.684028624 +0000 UTC m=+0.094656929 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Nov 25 11:37:32 np0005535469 podman[311954]: 2025-11-25 16:37:32.717194124 +0000 UTC m=+0.118879776 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:37:33 np0005535469 nova_compute[254092]: 2025-11-25 16:37:33.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 25 11:37:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 25 11:37:33 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 25 11:37:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 47 KiB/s wr, 262 op/s
Nov 25 11:37:34 np0005535469 nova_compute[254092]: 2025-11-25 16:37:34.528 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/8ed9342e-e179-467c-993f-a92f2f7b0dff_disk@a48b4200ef294c0cb47c992fa8f0f5ce to images/1a87d43c-b6eb-4995-8c8a-ad36a6013e2b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:37:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 32 KiB/s wr, 241 op/s
Nov 25 11:37:37 np0005535469 nova_compute[254092]: 2025-11-25 16:37:37.364 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/1a87d43c-b6eb-4995-8c8a-ad36a6013e2b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:37:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 187 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 186 op/s
Nov 25 11:37:37 np0005535469 nova_compute[254092]: 2025-11-25 16:37:37.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:38 np0005535469 nova_compute[254092]: 2025-11-25 16:37:38.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 187 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 186 op/s
Nov 25 11:37:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:37:40
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', 'images', '.rgw.root']
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:37:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 253 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 105 op/s
Nov 25 11:37:41 np0005535469 nova_compute[254092]: 2025-11-25 16:37:41.764 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:37:42 np0005535469 nova_compute[254092]: 2025-11-25 16:37:42.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:43 np0005535469 nova_compute[254092]: 2025-11-25 16:37:43.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 253 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 105 op/s
Nov 25 11:37:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:44Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:c2:fe 10.100.0.14
Nov 25 11:37:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:44Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:c2:fe 10.100.0.14
Nov 25 11:37:45 np0005535469 nova_compute[254092]: 2025-11-25 16:37:45.132 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(a48b4200ef294c0cb47c992fa8f0f5ce) on rbd image(8ed9342e-e179-467c-993f-a92f2f7b0dff_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:37:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 258 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 103 op/s
Nov 25 11:37:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 25 11:37:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 25 11:37:46 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 25 11:37:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 299 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.5 MiB/s wr, 154 op/s
Nov 25 11:37:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:47Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:76:f8 10.100.0.10
Nov 25 11:37:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:47Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:76:f8 10.100.0.10
Nov 25 11:37:47 np0005535469 nova_compute[254092]: 2025-11-25 16:37:47.524 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(1a87d43c-b6eb-4995-8c8a-ad36a6013e2b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:37:47 np0005535469 nova_compute[254092]: 2025-11-25 16:37:47.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:48 np0005535469 nova_compute[254092]: 2025-11-25 16:37:48.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 25 11:37:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 25 11:37:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 25 11:37:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 299 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 4.8 MiB/s wr, 96 op/s
Nov 25 11:37:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:49Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:e5:5c 10.100.0.13
Nov 25 11:37:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:49Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:e5:5c 10.100.0.13
Nov 25 11:37:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002201510304747021 of space, bias 1.0, pg target 0.6604530914241062 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010120461149638602 of space, bias 1.0, pg target 0.3036138344891581 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:37:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 311 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 217 op/s
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b could not be found.
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver 
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver 
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b could not be found.
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver #033[00m
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.866 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(snap) on rbd image(1a87d43c-b6eb-4995-8c8a-ad36a6013e2b) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:52 np0005535469 nova_compute[254092]: 2025-11-25 16:37:52.982 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:37:53 np0005535469 nova_compute[254092]: 2025-11-25 16:37:53.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 311 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Nov 25 11:37:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 25 11:37:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 25 11:37:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747353024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:37:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747353024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:37:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 308 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Nov 25 11:37:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:56Z|00467|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 11:37:56 np0005535469 nova_compute[254092]: 2025-11-25 16:37:56.750 254096 WARNING nova.compute.manager [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Image not found during snapshot: nova.exception.ImageNotFound: Image 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b could not be found.#033[00m
Nov 25 11:37:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 280 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 204 op/s
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.824 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.827 254096 INFO nova.compute.manager [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Terminating instance#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.827 254096 DEBUG nova.compute.manager [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:37:57 np0005535469 nova_compute[254092]: 2025-11-25 16:37:57.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 kernel: tap27bf7a08-6d (unregistering): left promiscuous mode
Nov 25 11:37:58 np0005535469 NetworkManager[48891]: <info>  [1764088678.0519] device (tap27bf7a08-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:58Z|00468|binding|INFO|Releasing lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc from this chassis (sb_readonly=0)
Nov 25 11:37:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:58Z|00469|binding|INFO|Setting lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc down in Southbound
Nov 25 11:37:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:37:58Z|00470|binding|INFO|Removing iface tap27bf7a08-6d ovn-installed in OVS
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.097 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e5:5c 10.100.0.13'], port_security=['fa:16:3e:fc:e5:5c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8ed9342e-e179-467c-993f-a92f2f7b0dff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.100 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.101 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f199b019-5d09-4431-ae8d-08eacd3dac5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.101 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore#033[00m
Nov 25 11:37:58 np0005535469 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 25 11:37:58 np0005535469 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Consumed 14.091s CPU time.
Nov 25 11:37:58 np0005535469 systemd-machined[216343]: Machine qemu-60-instance-00000033 terminated.
Nov 25 11:37:58 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : haproxy version is 2.8.14-c23fe91
Nov 25 11:37:58 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : path to executable is /usr/sbin/haproxy
Nov 25 11:37:58 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [WARNING]  (311738) : Exiting Master process...
Nov 25 11:37:58 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [ALERT]    (311738) : Current worker (311740) exited with code 143 (Terminated)
Nov 25 11:37:58 np0005535469 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [WARNING]  (311738) : All workers exited. Exiting... (0)
Nov 25 11:37:58 np0005535469 systemd[1]: libpod-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1.scope: Deactivated successfully.
Nov 25 11:37:58 np0005535469 podman[312162]: 2025-11-25 16:37:58.238399172 +0000 UTC m=+0.052511116 container died 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.263 254096 INFO nova.virt.libvirt.driver [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance destroyed successfully.#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.263 254096 DEBUG nova.objects.instance [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 8ed9342e-e179-467c-993f-a92f2f7b0dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.365 254096 DEBUG nova.virt.libvirt.vif [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1535795979',display_name='tempest-ImagesTestJSON-server-1535795979',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1535795979',id=51,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-no9ayfye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:37:56Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=8ed9342e-e179-467c-993f-a92f2f7b0dff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.366 254096 DEBUG nova.network.os_vif_util [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.367 254096 DEBUG nova.network.os_vif_util [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.367 254096 DEBUG os_vif [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.369 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27bf7a08-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.375 254096 INFO os_vif [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d')#033[00m
Nov 25 11:37:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dc0b7ea57d4fc30664551c01fa106099d478380f35527ead310f6fbe933d4f2a-merged.mount: Deactivated successfully.
Nov 25 11:37:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1-userdata-shm.mount: Deactivated successfully.
Nov 25 11:37:58 np0005535469 podman[312162]: 2025-11-25 16:37:58.434697357 +0000 UTC m=+0.248809301 container cleanup 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:37:58 np0005535469 systemd[1]: libpod-conmon-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1.scope: Deactivated successfully.
Nov 25 11:37:58 np0005535469 podman[312219]: 2025-11-25 16:37:58.577998035 +0000 UTC m=+0.118173757 container remove 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.580 254096 DEBUG nova.compute.manager [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-unplugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.581 254096 DEBUG oslo_concurrency.lockutils [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.581 254096 DEBUG oslo_concurrency.lockutils [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.582 254096 DEBUG oslo_concurrency.lockutils [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.582 254096 DEBUG nova.compute.manager [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] No waiting events found dispatching network-vif-unplugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.582 254096 DEBUG nova.compute.manager [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-unplugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27f24b0f-46c9-4958-88cd-b64c0af26dd9]: (4, ('Tue Nov 25 04:37:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1)\n0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1\nTue Nov 25 04:37:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1)\n0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6bb656-d603-4678-b7ee-2766de9930f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.588 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 nova_compute[254092]: 2025-11-25 16:37:58.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[318d9e07-20c1-45dc-9c02-04c3f9225cf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cad173c7-8b47-409c-ab78-f01346fa433a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.625 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1d34a4-d9c0-4cfa-a05e-b5ad01334c5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.645 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54ecff4f-99a0-41fc-9f68-eb0c4793010f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510119, 'reachable_time': 29977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312234, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.649 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:37:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.649 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[da9af182-47fb-455f-bedd-e287d88e8123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:37:58 np0005535469 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 11:37:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 280 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 262 KiB/s wr, 83 op/s
Nov 25 11:37:59 np0005535469 nova_compute[254092]: 2025-11-25 16:37:59.666 254096 INFO nova.virt.libvirt.driver [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deleting instance files /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff_del#033[00m
Nov 25 11:37:59 np0005535469 nova_compute[254092]: 2025-11-25 16:37:59.667 254096 INFO nova.virt.libvirt.driver [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deletion of /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff_del complete#033[00m
Nov 25 11:37:59 np0005535469 nova_compute[254092]: 2025-11-25 16:37:59.729 254096 INFO nova.compute.manager [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 1.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:37:59 np0005535469 nova_compute[254092]: 2025-11-25 16:37:59.729 254096 DEBUG oslo.service.loopingcall [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:37:59 np0005535469 nova_compute[254092]: 2025-11-25 16:37:59.729 254096 DEBUG nova.compute.manager [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:37:59 np0005535469 nova_compute[254092]: 2025-11-25 16:37:59.730 254096 DEBUG nova.network.neutron [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:38:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 25 11:38:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 25 11:38:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 25 11:38:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 305 KiB/s wr, 145 op/s
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.461 254096 DEBUG nova.network.neutron [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.494 254096 INFO nova.compute.manager [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 2.76 seconds to deallocate network for instance.#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG nova.compute.manager [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG oslo_concurrency.lockutils [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG oslo_concurrency.lockutils [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG oslo_concurrency.lockutils [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.638 254096 DEBUG nova.compute.manager [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] No waiting events found dispatching network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.638 254096 WARNING nova.compute.manager [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received unexpected event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:38:02 np0005535469 nova_compute[254092]: 2025-11-25 16:38:02.645 254096 DEBUG oslo_concurrency.processutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.062 254096 DEBUG nova.objects.instance [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'flavor' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.086 254096 DEBUG oslo_concurrency.lockutils [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.087 254096 DEBUG oslo_concurrency.lockutils [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120555344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.178 254096 DEBUG oslo_concurrency.processutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.184 254096 DEBUG nova.compute.provider_tree [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.203 254096 DEBUG nova.scheduler.client.report [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.226 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.273 254096 INFO nova.scheduler.client.report [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 8ed9342e-e179-467c-993f-a92f2f7b0dff#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.348 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 109 KiB/s wr, 80 op/s
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:03 np0005535469 podman[312260]: 2025-11-25 16:38:03.655577438 +0000 UTC m=+0.068536861 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 11:38:03 np0005535469 podman[312259]: 2025-11-25 16:38:03.659244047 +0000 UTC m=+0.073399212 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 11:38:03 np0005535469 podman[312261]: 2025-11-25 16:38:03.716731456 +0000 UTC m=+0.122316819 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 25 11:38:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032760713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:03 np0005535469 nova_compute[254092]: 2025-11-25 16:38:03.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.026 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.068 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.068 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.142 254096 DEBUG nova.network.neutron [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.283 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3823MB free_disk=59.897247314453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.285 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.285 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.362 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance d3356685-91bc-46b9-9b9f-87ffce31a4ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.362 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 497caf1f-53fe-425d-8e5c-10b2f0a2506d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.363 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.363 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.454 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.738 254096 DEBUG nova.compute.manager [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-deleted-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.740 254096 DEBUG nova.compute.manager [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.741 254096 DEBUG nova.compute.manager [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.742 254096 DEBUG oslo_concurrency.lockutils [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162550706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.945 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.953 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:38:04 np0005535469 nova_compute[254092]: 2025-11-25 16:38:04.991 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:38:05 np0005535469 nova_compute[254092]: 2025-11-25 16:38:05.100 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:38:05 np0005535469 nova_compute[254092]: 2025-11-25 16:38:05.101 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:05 np0005535469 nova_compute[254092]: 2025-11-25 16:38:05.102 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:05 np0005535469 nova_compute[254092]: 2025-11-25 16:38:05.102 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 11:38:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 94 KiB/s wr, 65 op/s
Nov 25 11:38:05 np0005535469 nova_compute[254092]: 2025-11-25 16:38:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:05 np0005535469 nova_compute[254092]: 2025-11-25 16:38:05.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.561 254096 DEBUG nova.network.neutron [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.578 254096 DEBUG oslo_concurrency.lockutils [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.578 254096 DEBUG nova.compute.manager [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.578 254096 DEBUG nova.compute.manager [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] network_info to inject: |[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.580 254096 DEBUG oslo_concurrency.lockutils [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:06 np0005535469 nova_compute[254092]: 2025-11-25 16:38:06.581 254096 DEBUG nova.network.neutron [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:38:07 np0005535469 podman[312534]: 2025-11-25 16:38:07.346954091 +0000 UTC m=+0.078542132 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:38:07 np0005535469 podman[312534]: 2025-11-25 16:38:07.450000306 +0000 UTC m=+0.181588347 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:38:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 9.4 KiB/s wr, 35 op/s
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.063 254096 DEBUG nova.objects.instance [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'flavor' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.087 254096 DEBUG oslo_concurrency.lockutils [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:38:08 np0005535469 nova_compute[254092]: 2025-11-25 16:38:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0a3e9c55-611b-44ce-b0e6-68c68c30eae5 does not exist
Nov 25 11:38:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2a9ab26a-ad31-4f7e-89d1-29fdc81b9d54 does not exist
Nov 25 11:38:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 733ef74d-8d74-4899-a760-17a112ecd26a does not exist
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:38:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 9.4 KiB/s wr, 35 op/s
Nov 25 11:38:09 np0005535469 nova_compute[254092]: 2025-11-25 16:38:09.489 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.492436647 +0000 UTC m=+0.041643011 container create 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:38:09 np0005535469 systemd[1]: Started libpod-conmon-2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681.scope.
Nov 25 11:38:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.474234193 +0000 UTC m=+0.023440587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.611052343 +0000 UTC m=+0.160258737 container init 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.619491823 +0000 UTC m=+0.168698197 container start 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.624725455 +0000 UTC m=+0.173931849 container attach 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:38:09 np0005535469 gifted_golick[312979]: 167 167
Nov 25 11:38:09 np0005535469 systemd[1]: libpod-2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681.scope: Deactivated successfully.
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.626225066 +0000 UTC m=+0.175431450 container died 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:38:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2ddc078d2fdc38967bdfae6663830df767a1f0a992580ffba4cead93816d4698-merged.mount: Deactivated successfully.
Nov 25 11:38:09 np0005535469 podman[312963]: 2025-11-25 16:38:09.670254951 +0000 UTC m=+0.219461325 container remove 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:38:09 np0005535469 systemd[1]: libpod-conmon-2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681.scope: Deactivated successfully.
Nov 25 11:38:09 np0005535469 podman[313003]: 2025-11-25 16:38:09.85820152 +0000 UTC m=+0.065140209 container create bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:38:09 np0005535469 podman[313003]: 2025-11-25 16:38:09.817393043 +0000 UTC m=+0.024331752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:38:09 np0005535469 systemd[1]: Started libpod-conmon-bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c.scope.
Nov 25 11:38:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:09 np0005535469 nova_compute[254092]: 2025-11-25 16:38:09.947 254096 DEBUG nova.network.neutron [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated VIF entry in instance network info cache for port 1693689c-371b-40fb-8153-5313e926d910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:38:09 np0005535469 nova_compute[254092]: 2025-11-25 16:38:09.948 254096 DEBUG nova.network.neutron [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:09 np0005535469 nova_compute[254092]: 2025-11-25 16:38:09.965 254096 DEBUG oslo_concurrency.lockutils [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:09 np0005535469 nova_compute[254092]: 2025-11-25 16:38:09.965 254096 DEBUG oslo_concurrency.lockutils [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:09 np0005535469 podman[313003]: 2025-11-25 16:38:09.986262523 +0000 UTC m=+0.193201232 container init bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:38:09 np0005535469 podman[313003]: 2025-11-25 16:38:09.993333426 +0000 UTC m=+0.200272115 container start bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:38:10 np0005535469 podman[313003]: 2025-11-25 16:38:10.030773891 +0000 UTC m=+0.237712610 container attach bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:38:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:11 np0005535469 determined_bell[313019]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:38:11 np0005535469 determined_bell[313019]: --> relative data size: 1.0
Nov 25 11:38:11 np0005535469 determined_bell[313019]: --> All data devices are unavailable
Nov 25 11:38:11 np0005535469 systemd[1]: libpod-bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c.scope: Deactivated successfully.
Nov 25 11:38:11 np0005535469 podman[313003]: 2025-11-25 16:38:11.060312512 +0000 UTC m=+1.267251201 container died bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:38:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376-merged.mount: Deactivated successfully.
Nov 25 11:38:11 np0005535469 podman[313003]: 2025-11-25 16:38:11.118051988 +0000 UTC m=+1.324990677 container remove bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:38:11 np0005535469 systemd[1]: libpod-conmon-bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c.scope: Deactivated successfully.
Nov 25 11:38:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 8.6 KiB/s wr, 25 op/s
Nov 25 11:38:11 np0005535469 podman[313201]: 2025-11-25 16:38:11.760291133 +0000 UTC m=+0.025269277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:38:11 np0005535469 podman[313201]: 2025-11-25 16:38:11.870232655 +0000 UTC m=+0.135210769 container create c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:38:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:11Z|00471|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=0)
Nov 25 11:38:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:11Z|00472|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:38:11 np0005535469 systemd[1]: Started libpod-conmon-c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e.scope.
Nov 25 11:38:11 np0005535469 nova_compute[254092]: 2025-11-25 16:38:11.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:11 np0005535469 podman[313201]: 2025-11-25 16:38:11.987026554 +0000 UTC m=+0.252004688 container init c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:38:11 np0005535469 podman[313201]: 2025-11-25 16:38:11.994096095 +0000 UTC m=+0.259074209 container start c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:38:11 np0005535469 festive_shtern[313217]: 167 167
Nov 25 11:38:11 np0005535469 systemd[1]: libpod-c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e.scope: Deactivated successfully.
Nov 25 11:38:11 np0005535469 podman[313201]: 2025-11-25 16:38:11.998741032 +0000 UTC m=+0.263719146 container attach c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:38:12 np0005535469 podman[313201]: 2025-11-25 16:38:12.000101008 +0000 UTC m=+0.265079142 container died c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 25 11:38:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fd06d4366ddb525501842901f5e5822facae99d3bb8735dac61793f3d56062b8-merged.mount: Deactivated successfully.
Nov 25 11:38:12 np0005535469 podman[313201]: 2025-11-25 16:38:12.039738373 +0000 UTC m=+0.304716487 container remove c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 11:38:12 np0005535469 systemd[1]: libpod-conmon-c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e.scope: Deactivated successfully.
Nov 25 11:38:12 np0005535469 podman[313240]: 2025-11-25 16:38:12.210903708 +0000 UTC m=+0.041072526 container create 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:38:12 np0005535469 systemd[1]: Started libpod-conmon-4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3.scope.
Nov 25 11:38:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:12 np0005535469 podman[313240]: 2025-11-25 16:38:12.28177257 +0000 UTC m=+0.111941408 container init 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:38:12 np0005535469 podman[313240]: 2025-11-25 16:38:12.192550729 +0000 UTC m=+0.022719577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:38:12 np0005535469 podman[313240]: 2025-11-25 16:38:12.287997669 +0000 UTC m=+0.118166487 container start 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 11:38:12 np0005535469 podman[313240]: 2025-11-25 16:38:12.29206398 +0000 UTC m=+0.122232828 container attach 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:38:12 np0005535469 nova_compute[254092]: 2025-11-25 16:38:12.523 254096 DEBUG nova.network.neutron [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:38:12 np0005535469 nova_compute[254092]: 2025-11-25 16:38:12.618 254096 DEBUG nova.compute.manager [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:12 np0005535469 nova_compute[254092]: 2025-11-25 16:38:12.618 254096 DEBUG nova.compute.manager [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:38:12 np0005535469 nova_compute[254092]: 2025-11-25 16:38:12.619 254096 DEBUG oslo_concurrency.lockutils [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:13 np0005535469 nova_compute[254092]: 2025-11-25 16:38:13.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]: {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:    "0": [
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:        {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "devices": [
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "/dev/loop3"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            ],
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_name": "ceph_lv0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_size": "21470642176",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "name": "ceph_lv0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "tags": {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cluster_name": "ceph",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.crush_device_class": "",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.encrypted": "0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osd_id": "0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.type": "block",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.vdo": "0"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            },
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "type": "block",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "vg_name": "ceph_vg0"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:        }
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:    ],
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:    "1": [
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:        {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "devices": [
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "/dev/loop4"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            ],
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_name": "ceph_lv1",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_size": "21470642176",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "name": "ceph_lv1",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "tags": {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cluster_name": "ceph",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.crush_device_class": "",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.encrypted": "0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osd_id": "1",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.type": "block",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.vdo": "0"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            },
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "type": "block",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "vg_name": "ceph_vg1"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:        }
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:    ],
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:    "2": [
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:        {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "devices": [
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "/dev/loop5"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            ],
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_name": "ceph_lv2",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_size": "21470642176",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "name": "ceph_lv2",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "tags": {
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.cluster_name": "ceph",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.crush_device_class": "",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.encrypted": "0",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osd_id": "2",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.type": "block",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:                "ceph.vdo": "0"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            },
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "type": "block",
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:            "vg_name": "ceph_vg2"
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:        }
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]:    ]
Nov 25 11:38:13 np0005535469 upbeat_blackwell[313256]: }
Nov 25 11:38:13 np0005535469 systemd[1]: libpod-4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3.scope: Deactivated successfully.
Nov 25 11:38:13 np0005535469 podman[313240]: 2025-11-25 16:38:13.069816379 +0000 UTC m=+0.899985247 container died 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:38:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140-merged.mount: Deactivated successfully.
Nov 25 11:38:13 np0005535469 podman[313240]: 2025-11-25 16:38:13.130134656 +0000 UTC m=+0.960303474 container remove 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:38:13 np0005535469 systemd[1]: libpod-conmon-4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3.scope: Deactivated successfully.
Nov 25 11:38:13 np0005535469 nova_compute[254092]: 2025-11-25 16:38:13.262 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088678.2608998, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:38:13 np0005535469 nova_compute[254092]: 2025-11-25 16:38:13.263 254096 INFO nova.compute.manager [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:38:13 np0005535469 nova_compute[254092]: 2025-11-25 16:38:13.290 254096 DEBUG nova.compute.manager [None req-dda4a91c-31df-4c87-88e8-457a539171b8 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:13 np0005535469 nova_compute[254092]: 2025-11-25 16:38:13.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Nov 25 11:38:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:13.616 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:13.617 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.739434585 +0000 UTC m=+0.036373798 container create b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 11:38:13 np0005535469 systemd[1]: Started libpod-conmon-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope.
Nov 25 11:38:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.808586332 +0000 UTC m=+0.105525565 container init b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.815083548 +0000 UTC m=+0.112022761 container start b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.817673078 +0000 UTC m=+0.114612311 container attach b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:38:13 np0005535469 elegant_greider[313434]: 167 167
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.723997767 +0000 UTC m=+0.020937000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:38:13 np0005535469 systemd[1]: libpod-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope: Deactivated successfully.
Nov 25 11:38:13 np0005535469 conmon[313434]: conmon b1e5a8a2e1ce763634d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope/container/memory.events
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.821318396 +0000 UTC m=+0.118257609 container died b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:38:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6ecf01bbd3b7b6c3a716cdca94eb1e490d2da789fe311361e4970ffb62142bae-merged.mount: Deactivated successfully.
Nov 25 11:38:13 np0005535469 podman[313418]: 2025-11-25 16:38:13.856332507 +0000 UTC m=+0.153271720 container remove b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:38:13 np0005535469 systemd[1]: libpod-conmon-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope: Deactivated successfully.
Nov 25 11:38:14 np0005535469 podman[313459]: 2025-11-25 16:38:14.031055017 +0000 UTC m=+0.037161129 container create 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 11:38:14 np0005535469 systemd[1]: Started libpod-conmon-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope.
Nov 25 11:38:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:14 np0005535469 podman[313459]: 2025-11-25 16:38:14.014619871 +0000 UTC m=+0.020726003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:38:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:14 np0005535469 podman[313459]: 2025-11-25 16:38:14.130260269 +0000 UTC m=+0.136366401 container init 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:38:14 np0005535469 podman[313459]: 2025-11-25 16:38:14.138631806 +0000 UTC m=+0.144737938 container start 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:38:14 np0005535469 podman[313459]: 2025-11-25 16:38:14.142235653 +0000 UTC m=+0.148341785 container attach 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.961 254096 DEBUG nova.network.neutron [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.990 254096 DEBUG oslo_concurrency.lockutils [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.991 254096 DEBUG nova.compute.manager [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.991 254096 DEBUG nova.compute.manager [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] network_info to inject: |[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.994 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.994 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:38:14 np0005535469 nova_compute[254092]: 2025-11-25 16:38:14.994 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.080 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]: {
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "osd_id": 1,
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "type": "bluestore"
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:    },
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "osd_id": 2,
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "type": "bluestore"
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:    },
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "osd_id": 0,
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:        "type": "bluestore"
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]:    }
Nov 25 11:38:15 np0005535469 gracious_perlman[313475]: }
Nov 25 11:38:15 np0005535469 systemd[1]: libpod-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope: Deactivated successfully.
Nov 25 11:38:15 np0005535469 podman[313459]: 2025-11-25 16:38:15.205612513 +0000 UTC m=+1.211718625 container died 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:38:15 np0005535469 systemd[1]: libpod-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope: Consumed 1.068s CPU time.
Nov 25 11:38:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8-merged.mount: Deactivated successfully.
Nov 25 11:38:15 np0005535469 podman[313459]: 2025-11-25 16:38:15.263057931 +0000 UTC m=+1.269164043 container remove 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 25 11:38:15 np0005535469 systemd[1]: libpod-conmon-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope: Deactivated successfully.
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.310 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.313 254096 INFO nova.compute.manager [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Terminating instance#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.313 254096 DEBUG nova.compute.manager [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:38:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:38:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:38:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e226de30-7687-4daf-ab29-ced4fefb7d15 does not exist
Nov 25 11:38:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c3d74471-60c4-44e4-8669-99240d9a3dcc does not exist
Nov 25 11:38:15 np0005535469 kernel: tap1693689c-37 (unregistering): left promiscuous mode
Nov 25 11:38:15 np0005535469 NetworkManager[48891]: <info>  [1764088695.3809] device (tap1693689c-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:38:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:15Z|00473|binding|INFO|Releasing lport 1693689c-371b-40fb-8153-5313e926d910 from this chassis (sb_readonly=0)
Nov 25 11:38:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:15Z|00474|binding|INFO|Setting lport 1693689c-371b-40fb-8153-5313e926d910 down in Southbound
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:15Z|00475|binding|INFO|Removing iface tap1693689c-37 ovn-installed in OVS
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.397 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:c2:fe 10.100.0.14'], port_security=['fa:16:3e:fc:c2:fe 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd3356685-91bc-46b9-9b9f-87ffce31a4ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8b9be1565d148a3ac487eacb391dc1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f328e556-e196-4e21-8b60-04c34108b4ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8c5f31d-3e5c-4add-b1b6-dfe24828d28e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1693689c-371b-40fb-8153-5313e926d910) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.398 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1693689c-371b-40fb-8153-5313e926d910 in datapath 0c61a44f-bcff-4141-9691-0b0cd16e5793 unbound from our chassis#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.399 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c61a44f-bcff-4141-9691-0b0cd16e5793, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.401 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40a9d26f-968c-423f-ad14-67693c7f0a55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.401 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 namespace which is not needed anymore#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 11:38:15 np0005535469 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Consumed 15.622s CPU time.
Nov 25 11:38:15 np0005535469 systemd-machined[216343]: Machine qemu-59-instance-00000032 terminated.
Nov 25 11:38:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.555 254096 INFO nova.virt.libvirt.driver [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance destroyed successfully.#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.557 254096 DEBUG nova.objects.instance [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'resources' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:15 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : haproxy version is 2.8.14-c23fe91
Nov 25 11:38:15 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : path to executable is /usr/sbin/haproxy
Nov 25 11:38:15 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [WARNING]  (311619) : Exiting Master process...
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.576 254096 DEBUG nova.virt.libvirt.vif [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:37:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2097469038',display_name='tempest-AttachInterfacesUnderV243Test-server-2097469038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2097469038',id=50,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK35kG1vwXJXdI4r/EHWJZkHbZ92CrcmDm6T8HHIBEabt8dsD4hwgL2ByxTJp0aD3PDPswuWtqGhIZZ1n6EYekgLgLZqS6KsMbAxaY/ldKY87IH4bSdNTYm2tWLgSZE5MA==',key_name='tempest-keypair-935791717',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:37:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8b9be1565d148a3ac487eacb391dc1f',ramdisk_id='',reservation_id='r-nejw6g9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1206829083',owner_user_name='tempest-AttachInterfacesUnderV243Test-1206829083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:38:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec49417447ad4a98b1f890ed78fd5b41',uuid=d3356685-91bc-46b9-9b9f-87ffce31a4ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.576 254096 DEBUG nova.network.os_vif_util [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converting VIF {"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.577 254096 DEBUG nova.network.os_vif_util [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:38:15 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [ALERT]    (311619) : Current worker (311621) exited with code 143 (Terminated)
Nov 25 11:38:15 np0005535469 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [WARNING]  (311619) : All workers exited. Exiting... (0)
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.577 254096 DEBUG os_vif [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.579 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1693689c-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:15 np0005535469 systemd[1]: libpod-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440.scope: Deactivated successfully.
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.581 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 podman[313594]: 2025-11-25 16:38:15.587577815 +0000 UTC m=+0.066325581 container died f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.588 254096 INFO os_vif [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37')#033[00m
Nov 25 11:38:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8389ed7fca4d6f0aa03377187ba2932fde612ac6153d8db17d6a663aaf209d6a-merged.mount: Deactivated successfully.
Nov 25 11:38:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440-userdata-shm.mount: Deactivated successfully.
Nov 25 11:38:15 np0005535469 podman[313594]: 2025-11-25 16:38:15.633040998 +0000 UTC m=+0.111788754 container cleanup f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:38:15 np0005535469 systemd[1]: libpod-conmon-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440.scope: Deactivated successfully.
Nov 25 11:38:15 np0005535469 podman[313654]: 2025-11-25 16:38:15.702585515 +0000 UTC m=+0.047016277 container remove f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41772acc-d847-4d07-b2bb-e5ce11a69edf]: (4, ('Tue Nov 25 04:38:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 (f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440)\nf9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440\nTue Nov 25 04:38:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 (f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440)\nf9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa20833c-3a08-497c-84d4-83349b34b214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.712 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c61a44f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 kernel: tap0c61a44f-b0: left promiscuous mode
Nov 25 11:38:15 np0005535469 nova_compute[254092]: 2025-11-25 16:38:15.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.733 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f45e02e4-2b19-4081-b520-5182b0fc8bbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b76769-238e-4807-99f8-d7048edad45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b520c478-1c1e-4187-97c2-938864973bc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.768 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[099a2e90-2519-477f-b1cc-5781c0fbd8ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509968, 'reachable_time': 44554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313667, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.771 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:38:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.772 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5e64deeb-b702-44e5-8bdc-d5463a095850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:15 np0005535469 systemd[1]: run-netns-ovnmeta\x2d0c61a44f\x2dbcff\x2d4141\x2d9691\x2d0b0cd16e5793.mount: Deactivated successfully.
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.050 254096 INFO nova.virt.libvirt.driver [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deleting instance files /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab_del#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.052 254096 INFO nova.virt.libvirt.driver [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deletion of /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab_del complete#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.254 254096 INFO nova.compute.manager [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.254 254096 DEBUG oslo.service.loopingcall [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.254 254096 DEBUG nova.compute.manager [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.255 254096 DEBUG nova.network.neutron [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:38:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.517 254096 DEBUG nova.compute.manager [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-unplugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.517 254096 DEBUG oslo_concurrency.lockutils [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.517 254096 DEBUG oslo_concurrency.lockutils [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.518 254096 DEBUG oslo_concurrency.lockutils [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.518 254096 DEBUG nova.compute.manager [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] No waiting events found dispatching network-vif-unplugged-1693689c-371b-40fb-8153-5313e926d910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:38:16 np0005535469 nova_compute[254092]: 2025-11-25 16:38:16.518 254096 DEBUG nova.compute.manager [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-unplugged-1693689c-371b-40fb-8153-5313e926d910 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:38:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 168 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 4.0 KiB/s wr, 17 op/s
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.518 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.532 254096 DEBUG oslo_concurrency.lockutils [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.532 254096 DEBUG nova.network.neutron [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.560 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.578 254096 DEBUG nova.network.neutron [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.607 254096 INFO nova.compute.manager [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 1.35 seconds to deallocate network for instance.#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.649 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.650 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.735 254096 DEBUG oslo_concurrency.processutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.921 254096 INFO nova.network.neutron [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Port 1693689c-371b-40fb-8153-5313e926d910 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.922 254096 DEBUG nova.network.neutron [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:17 np0005535469 nova_compute[254092]: 2025-11-25 16:38:17.936 254096 DEBUG oslo_concurrency.lockutils [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355352229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.199 254096 DEBUG oslo_concurrency.processutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.208 254096 DEBUG nova.compute.provider_tree [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.223 254096 DEBUG nova.scheduler.client.report [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.242 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.291 254096 INFO nova.scheduler.client.report [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Deleted allocations for instance d3356685-91bc-46b9-9b9f-87ffce31a4ab#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.375 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.656 254096 DEBUG nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.656 254096 DEBUG oslo_concurrency.lockutils [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.657 254096 DEBUG oslo_concurrency.lockutils [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.657 254096 DEBUG oslo_concurrency.lockutils [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.657 254096 DEBUG nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] No waiting events found dispatching network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.658 254096 WARNING nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received unexpected event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:38:18 np0005535469 nova_compute[254092]: 2025-11-25 16:38:18.658 254096 DEBUG nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-deleted-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 168 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.7 KiB/s wr, 17 op/s
Nov 25 11:38:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:20 np0005535469 nova_compute[254092]: 2025-11-25 16:38:20.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 11:38:22 np0005535469 nova_compute[254092]: 2025-11-25 16:38:22.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:23 np0005535469 nova_compute[254092]: 2025-11-25 16:38:23.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 11:38:25 np0005535469 nova_compute[254092]: 2025-11-25 16:38:25.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:25Z|00476|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:38:25 np0005535469 nova_compute[254092]: 2025-11-25 16:38:25.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:25Z|00477|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:38:25 np0005535469 nova_compute[254092]: 2025-11-25 16:38:25.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 11:38:25 np0005535469 nova_compute[254092]: 2025-11-25 16:38:25.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:26 np0005535469 nova_compute[254092]: 2025-11-25 16:38:26.132 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:38:27 np0005535469 kernel: tapd90f4f5a-3c (unregistering): left promiscuous mode
Nov 25 11:38:27 np0005535469 NetworkManager[48891]: <info>  [1764088707.4332] device (tapd90f4f5a-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:38:27 np0005535469 nova_compute[254092]: 2025-11-25 16:38:27.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:27Z|00478|binding|INFO|Releasing lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca from this chassis (sb_readonly=0)
Nov 25 11:38:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:27Z|00479|binding|INFO|Setting lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca down in Southbound
Nov 25 11:38:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:27Z|00480|binding|INFO|Removing iface tapd90f4f5a-3c ovn-installed in OVS
Nov 25 11:38:27 np0005535469 nova_compute[254092]: 2025-11-25 16:38:27.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.455 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:76:f8 10.100.0.10'], port_security=['fa:16:3e:b0:76:f8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '497caf1f-53fe-425d-8e5c-10b2f0a2506d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.457 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.459 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.460 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93ff0d7a-079b-4c23-a663-e135e44ad8b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.461 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore#033[00m
Nov 25 11:38:27 np0005535469 nova_compute[254092]: 2025-11-25 16:38:27.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Nov 25 11:38:27 np0005535469 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 25 11:38:27 np0005535469 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000034.scope: Consumed 15.691s CPU time.
Nov 25 11:38:27 np0005535469 systemd-machined[216343]: Machine qemu-61-instance-00000034 terminated.
Nov 25 11:38:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : haproxy version is 2.8.14-c23fe91
Nov 25 11:38:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : path to executable is /usr/sbin/haproxy
Nov 25 11:38:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [WARNING]  (311890) : Exiting Master process...
Nov 25 11:38:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [WARNING]  (311890) : Exiting Master process...
Nov 25 11:38:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [ALERT]    (311890) : Current worker (311892) exited with code 143 (Terminated)
Nov 25 11:38:27 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [WARNING]  (311890) : All workers exited. Exiting... (0)
Nov 25 11:38:27 np0005535469 systemd[1]: libpod-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8.scope: Deactivated successfully.
Nov 25 11:38:27 np0005535469 podman[313716]: 2025-11-25 16:38:27.601918748 +0000 UTC m=+0.050490890 container died 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:38:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8-userdata-shm.mount: Deactivated successfully.
Nov 25 11:38:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4af3758759c7473d6d0c9e6e56bd70b29b7aef6fe03951135d48e66ecc34c2f1-merged.mount: Deactivated successfully.
Nov 25 11:38:27 np0005535469 podman[313716]: 2025-11-25 16:38:27.649381416 +0000 UTC m=+0.097953548 container cleanup 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:38:27 np0005535469 systemd[1]: libpod-conmon-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8.scope: Deactivated successfully.
Nov 25 11:38:27 np0005535469 nova_compute[254092]: 2025-11-25 16:38:27.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:27 np0005535469 podman[313747]: 2025-11-25 16:38:27.762912306 +0000 UTC m=+0.047317254 container remove 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.771 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a746292-edd1-4128-a701-bcf01cd77ed4]: (4, ('Tue Nov 25 04:38:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8)\n8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8\nTue Nov 25 04:38:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8)\n8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.775 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[09c4902c-4187-4285-a103-e21681555ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.776 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:27 np0005535469 kernel: tape469a950-70: left promiscuous mode
Nov 25 11:38:27 np0005535469 nova_compute[254092]: 2025-11-25 16:38:27.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:27 np0005535469 nova_compute[254092]: 2025-11-25 16:38:27.797 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bcca25bb-2514-4a96-889f-697cc02065ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.823 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f472ffa3-4045-4981-bb24-007eadfa610f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62058701-1601-492f-9767-04d43d32a536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9878da22-ff60-48c0-a307-b1ac11c3008d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510326, 'reachable_time': 28441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313774, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.849 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:38:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.849 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4afcc7-21c9-4ba1-b9dd-47c29578f9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:27 np0005535469 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 11:38:28 np0005535469 nova_compute[254092]: 2025-11-25 16:38:28.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:28 np0005535469 nova_compute[254092]: 2025-11-25 16:38:28.143 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance shutdown successfully after 56 seconds.#033[00m
Nov 25 11:38:28 np0005535469 nova_compute[254092]: 2025-11-25 16:38:28.148 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance destroyed successfully.#033[00m
Nov 25 11:38:28 np0005535469 nova_compute[254092]: 2025-11-25 16:38:28.148 254096 DEBUG nova.objects.instance [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:29.689 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:38:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:29.690 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:38:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 4.8 KiB/s wr, 11 op/s
Nov 25 11:38:29 np0005535469 nova_compute[254092]: 2025-11-25 16:38:29.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:29 np0005535469 nova_compute[254092]: 2025-11-25 16:38:29.764 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Beginning cold snapshot process#033[00m
Nov 25 11:38:29 np0005535469 nova_compute[254092]: 2025-11-25 16:38:29.970 254096 DEBUG nova.virt.libvirt.imagebackend [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.265 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] creating snapshot(79daa9a6eec84791953396c480b0a63f) on rbd image(497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:38:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.553 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088695.5517426, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.553 254096 INFO nova.compute.manager [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.582 254096 DEBUG nova.compute.manager [None req-1ded6a8e-6b26-4619-b1c0-7bd0efca1f85 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.821 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.823 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:30 np0005535469 nova_compute[254092]: 2025-11-25 16:38:30.914 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:38:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 25 11:38:30 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.016 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.016 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.025 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.026 254096 INFO nova.compute.claims [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.075 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] cloning vms/497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk@79daa9a6eec84791953396c480b0a63f to images/fedb4aef-bdad-4b3a-abdc-073591bfcffa clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.192 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.233 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] flattening images/fedb4aef-bdad-4b3a-abdc-073591bfcffa flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:38:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 17 KiB/s wr, 6 op/s
Nov 25 11:38:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629628229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.662 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.668 254096 DEBUG nova.compute.provider_tree [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.684 254096 DEBUG nova.scheduler.client.report [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:38:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:31.692 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.774 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.775 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.854 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.854 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.899 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:38:31 np0005535469 nova_compute[254092]: 2025-11-25 16:38:31.936 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.146 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.147 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.147 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Creating image(s)#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.174 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.282 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.305 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.310 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.363 254096 DEBUG nova.policy [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34706428d3f94a60b53f4a535d408fd1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '80a627278d934815a3ea621e9d6402d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.370 254096 DEBUG nova.compute.manager [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-unplugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.370 254096 DEBUG oslo_concurrency.lockutils [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 DEBUG oslo_concurrency.lockutils [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 DEBUG oslo_concurrency.lockutils [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 DEBUG nova.compute.manager [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] No waiting events found dispatching network-vif-unplugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 WARNING nova.compute.manager [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received unexpected event network-vif-unplugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.404 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.406 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.406 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.407 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.428 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.433 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.583 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] removing snapshot(79daa9a6eec84791953396c480b0a63f) on rbd image(497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.844 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:32 np0005535469 nova_compute[254092]: 2025-11-25 16:38:32.912 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] resizing rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.018 254096 DEBUG nova.objects.instance [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.048 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.048 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Ensure instance console log exists: /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.049 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.049 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.049 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.051 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Successfully created port: a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:38:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 25 11:38:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 25 11:38:33 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 25 11:38:33 np0005535469 nova_compute[254092]: 2025-11-25 16:38:33.225 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] creating snapshot(snap) on rbd image(fedb4aef-bdad-4b3a-abdc-073591bfcffa) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:38:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 22 KiB/s wr, 8 op/s
Nov 25 11:38:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 25 11:38:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 25 11:38:34 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.358 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Successfully updated port: a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.390 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.391 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquired lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.391 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.479 254096 DEBUG nova.compute.manager [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-changed-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.479 254096 DEBUG nova.compute.manager [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Refreshing instance network info cache due to event network-changed-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.479 254096 DEBUG oslo_concurrency.lockutils [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.482 254096 DEBUG nova.compute.manager [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.482 254096 DEBUG oslo_concurrency.lockutils [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.482 254096 DEBUG oslo_concurrency.lockutils [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.483 254096 DEBUG oslo_concurrency.lockutils [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.483 254096 DEBUG nova.compute.manager [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] No waiting events found dispatching network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.483 254096 WARNING nova.compute.manager [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received unexpected event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 11:38:34 np0005535469 podman[314105]: 2025-11-25 16:38:34.643695608 +0000 UTC m=+0.063090672 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:38:34 np0005535469 podman[314104]: 2025-11-25 16:38:34.646953826 +0000 UTC m=+0.066037382 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 11:38:34 np0005535469 podman[314106]: 2025-11-25 16:38:34.67511818 +0000 UTC m=+0.090786303 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:38:34 np0005535469 nova_compute[254092]: 2025-11-25 16:38:34.687 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:38:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 182 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.0 MiB/s wr, 151 op/s
Nov 25 11:38:35 np0005535469 nova_compute[254092]: 2025-11-25 16:38:35.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.440 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updating instance_info_cache with network_info: [{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.462 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Releasing lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.462 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance network_info: |[{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.463 254096 DEBUG oslo_concurrency.lockutils [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.463 254096 DEBUG nova.network.neutron [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Refreshing network info cache for port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.466 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start _get_guest_xml network_info=[{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.469 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Snapshot image upload complete#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.469 254096 DEBUG nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.478 254096 WARNING nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.483 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.484 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.487 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.487 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.488 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.488 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.488 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.489 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.489 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.489 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.491 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.493 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.533 254096 INFO nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Shelve offloading#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.541 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance destroyed successfully.#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.542 254096 DEBUG nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.544 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.545 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.545 254096 DEBUG nova.network.neutron [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:38:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:38:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464012297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.946 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.967 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:36 np0005535469 nova_compute[254092]: 2025-11-25 16:38:36.971 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:38:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749163616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.404 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.407 254096 DEBUG nova.virt.libvirt.vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-876908934',display_name='tempest-ImagesOneServerTestJSON-server-876908934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-876908934',id=53,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80a627278d934815a3ea621e9d6402d2',ramdisk_id='',reservation_id='r-mt56k20g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-941588767',owner_user_name='tempest-ImagesOneServer
TestJSON-941588767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:31Z,user_data=None,user_id='34706428d3f94a60b53f4a535d408fd1',uuid=52c13ebd-df79-43b9-8d5f-e4bf4a2e0738,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.407 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converting VIF {"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.409 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.410 254096 DEBUG nova.objects.instance [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.423 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <uuid>52c13ebd-df79-43b9-8d5f-e4bf4a2e0738</uuid>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <name>instance-00000035</name>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:name>tempest-ImagesOneServerTestJSON-server-876908934</nova:name>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:38:36</nova:creationTime>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:user uuid="34706428d3f94a60b53f4a535d408fd1">tempest-ImagesOneServerTestJSON-941588767-project-member</nova:user>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:project uuid="80a627278d934815a3ea621e9d6402d2">tempest-ImagesOneServerTestJSON-941588767</nova:project>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <nova:port uuid="a1b0e8cf-d5e8-4b48-b591-6e3d110aff41">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <entry name="serial">52c13ebd-df79-43b9-8d5f-e4bf4a2e0738</entry>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <entry name="uuid">52c13ebd-df79-43b9-8d5f-e4bf4a2e0738</entry>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d3:ad:75"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <target dev="tapa1b0e8cf-d5"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/console.log" append="off"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:38:37 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:38:37 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:38:37 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:38:37 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.424 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Preparing to wait for external event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.425 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.425 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.425 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.426 254096 DEBUG nova.virt.libvirt.vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-876908934',display_name='tempest-ImagesOneServerTestJSON-server-876908934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-876908934',id=53,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80a627278d934815a3ea621e9d6402d2',ramdisk_id='',reservation_id='r-mt56k20g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-941588767',owner_user_name='tempest-Image
sOneServerTestJSON-941588767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:31Z,user_data=None,user_id='34706428d3f94a60b53f4a535d408fd1',uuid=52c13ebd-df79-43b9-8d5f-e4bf4a2e0738,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.426 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converting VIF {"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.427 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.427 254096 DEBUG os_vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.428 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.429 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.432 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1b0e8cf-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.432 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1b0e8cf-d5, col_values=(('external_ids', {'iface-id': 'a1b0e8cf-d5e8-4b48-b591-6e3d110aff41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:ad:75', 'vm-uuid': '52c13ebd-df79-43b9-8d5f-e4bf4a2e0738'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:37 np0005535469 NetworkManager[48891]: <info>  [1764088717.4353] manager: (tapa1b0e8cf-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.442 254096 INFO os_vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5')#033[00m
Nov 25 11:38:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 246 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 10 MiB/s wr, 201 op/s
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.506 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.507 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.507 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No VIF found with MAC fa:16:3e:d3:ad:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.507 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Using config drive#033[00m
Nov 25 11:38:37 np0005535469 nova_compute[254092]: 2025-11-25 16:38:37.527 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.625 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Creating config drive at /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.631 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2dvl4yb1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.720 254096 DEBUG nova.network.neutron [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updated VIF entry in instance network info cache for port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.721 254096 DEBUG nova.network.neutron [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updating instance_info_cache with network_info: [{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.737 254096 DEBUG oslo_concurrency.lockutils [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.765 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2dvl4yb1" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.786 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:38 np0005535469 nova_compute[254092]: 2025-11-25 16:38:38.789 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:39 np0005535469 nova_compute[254092]: 2025-11-25 16:38:39.204 254096 DEBUG nova.network.neutron [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:39 np0005535469 nova_compute[254092]: 2025-11-25 16:38:39.218 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 246 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 161 op/s
Nov 25 11:38:39 np0005535469 nova_compute[254092]: 2025-11-25 16:38:39.957 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:39 np0005535469 nova_compute[254092]: 2025-11-25 16:38:39.958 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deleting local config drive /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config because it was imported into RBD.#033[00m
Nov 25 11:38:40 np0005535469 kernel: tapa1b0e8cf-d5: entered promiscuous mode
Nov 25 11:38:40 np0005535469 NetworkManager[48891]: <info>  [1764088720.0116] manager: (tapa1b0e8cf-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:40Z|00481|binding|INFO|Claiming lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for this chassis.
Nov 25 11:38:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:40Z|00482|binding|INFO|a1b0e8cf-d5e8-4b48-b591-6e3d110aff41: Claiming fa:16:3e:d3:ad:75 10.100.0.9
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:38:40 np0005535469 systemd-machined[216343]: New machine qemu-62-instance-00000035.
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:38:40
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes']
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:38:40 np0005535469 systemd[1]: Started Virtual Machine qemu-62-instance-00000035.
Nov 25 11:38:40 np0005535469 systemd-udevd[314301]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.098 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:ad:75 10.100.0.9'], port_security=['fa:16:3e:d3:ad:75 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52c13ebd-df79-43b9-8d5f-e4bf4a2e0738', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80a627278d934815a3ea621e9d6402d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1671917c-f980-406a-8c8d-043f07074abb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f06ea60c-86e3-46a2-b0dc-014d0b0b5949, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:38:40 np0005535469 NetworkManager[48891]: <info>  [1764088720.1003] device (tapa1b0e8cf-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:38:40 np0005535469 NetworkManager[48891]: <info>  [1764088720.1012] device (tapa1b0e8cf-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.100 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 in datapath abda97f3-dcb7-42ee-af40-cfc387fadfda bound to our chassis#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.101 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abda97f3-dcb7-42ee-af40-cfc387fadfda#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.115 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73c8d346-4d99-4ebd-b619-78d64886dfba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.116 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabda97f3-d1 in ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.118 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabda97f3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5fc00b-31be-430e-a41d-88feee7384cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbf4365-ffb0-4eb9-9dc6-133a4440f74b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.134 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7d1c41-5804-49cf-bf7a-959a8009a165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:40Z|00483|binding|INFO|Setting lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 ovn-installed in OVS
Nov 25 11:38:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:40Z|00484|binding|INFO|Setting lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 up in Southbound
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac633b7f-87d7-4ac8-a34d-069d9b7bdf7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.200 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c41ca1e6-9dea-4d67-b50e-ed77a7cea6f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 NetworkManager[48891]: <info>  [1764088720.2079] manager: (tapabda97f3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2dee14-0f73-4b7c-90cc-d26e7e48191e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.242 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7089d7ad-ae58-4bd2-95fb-9f5ea754dd53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.247 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9d3483-1dfa-4976-9758-a79ed52581c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 NetworkManager[48891]: <info>  [1764088720.2762] device (tapabda97f3-d0): carrier: link connected
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.281 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e90519fb-518d-4954-90c1-cba1f03c9459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[306a1cc0-f7c9-4745-82a0-d275db37fb6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabda97f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:a9:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517786, 'reachable_time': 26951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314340, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b18c9f-8aab-4ce3-991d-9fb4bb0af029]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:a9e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517786, 'tstamp': 517786}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314341, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.330 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a021a32b-62e3-4b16-9a99-ba2c9565016b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabda97f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:a9:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517786, 'reachable_time': 26951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314342, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.358 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37b4507c-ecc5-40ec-85c5-d5255ffac275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.413 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25d6c23a-0718-4bc0-80c4-30191b129a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.415 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabda97f3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.415 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.415 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabda97f3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:40 np0005535469 NetworkManager[48891]: <info>  [1764088720.4182] manager: (tapabda97f3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Nov 25 11:38:40 np0005535469 kernel: tapabda97f3-d0: entered promiscuous mode
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.424 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabda97f3-d0, col_values=(('external_ids', {'iface-id': '4b466e06-fb69-4706-83df-d7865671165a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:38:40Z|00485|binding|INFO|Releasing lport 4b466e06-fb69-4706-83df-d7865671165a from this chassis (sb_readonly=0)
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 nova_compute[254092]: 2025-11-25 16:38:40.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.445 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abda97f3-dcb7-42ee-af40-cfc387fadfda.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abda97f3-dcb7-42ee-af40-cfc387fadfda.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.446 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f5bc87-34ae-4330-af62-62ac409bc88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.447 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-abda97f3-dcb7-42ee-af40-cfc387fadfda
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/abda97f3-dcb7-42ee-af40-cfc387fadfda.pid.haproxy
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID abda97f3-dcb7-42ee-af40-cfc387fadfda
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:38:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.448 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'env', 'PROCESS_TAG=haproxy-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abda97f3-dcb7-42ee-af40-cfc387fadfda.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:38:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 25 11:38:40 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:38:40 np0005535469 podman[314376]: 2025-11-25 16:38:40.822579858 +0000 UTC m=+0.029631475 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:38:41 np0005535469 podman[314376]: 2025-11-25 16:38:41.419401069 +0000 UTC m=+0.626452656 container create ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 11:38:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 172 op/s
Nov 25 11:38:41 np0005535469 systemd[1]: Started libpod-conmon-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c.scope.
Nov 25 11:38:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:38:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d652c06ceeec69f5cac5c1cc50b800c7a964f2bc7a8ca4a2399870c6c25d11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:38:41 np0005535469 podman[314376]: 2025-11-25 16:38:41.578115045 +0000 UTC m=+0.785166632 container init ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:38:41 np0005535469 podman[314376]: 2025-11-25 16:38:41.588945639 +0000 UTC m=+0.795997216 container start ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 11:38:41 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : New worker (314441) forked
Nov 25 11:38:41 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : Loading success.
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.664 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088721.6643987, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.665 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Started (Lifecycle Event)#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.690 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.695 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088721.6654434, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.696 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.710 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.714 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:38:41 np0005535469 nova_compute[254092]: 2025-11-25 16:38:41.730 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:38:42 np0005535469 nova_compute[254092]: 2025-11-25 16:38:42.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:42 np0005535469 nova_compute[254092]: 2025-11-25 16:38:42.720 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088707.7184463, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:38:42 np0005535469 nova_compute[254092]: 2025-11-25 16:38:42.722 254096 INFO nova.compute.manager [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:38:42 np0005535469 nova_compute[254092]: 2025-11-25 16:38:42.790 254096 DEBUG nova.compute.manager [None req-e2e8c774-34c0-483e-bcd9-dfa22ea1d05b - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:42 np0005535469 nova_compute[254092]: 2025-11-25 16:38:42.794 254096 DEBUG nova.compute.manager [None req-e2e8c774-34c0-483e-bcd9-dfa22ea1d05b - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:38:42 np0005535469 nova_compute[254092]: 2025-11-25 16:38:42.820 254096 INFO nova.compute.manager [None req-e2e8c774-34c0-483e-bcd9-dfa22ea1d05b - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.3 MiB/s wr, 148 op/s
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.690 254096 DEBUG nova.compute.manager [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.691 254096 DEBUG oslo_concurrency.lockutils [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.692 254096 DEBUG oslo_concurrency.lockutils [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.692 254096 DEBUG oslo_concurrency.lockutils [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.692 254096 DEBUG nova.compute.manager [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Processing event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.694 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.699 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088723.698265, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.699 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.700 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.704 254096 INFO nova.virt.libvirt.driver [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance spawned successfully.#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.705 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.720 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.727 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.732 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.733 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.733 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.734 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.734 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.735 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.953 254096 INFO nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 11.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:38:43 np0005535469 nova_compute[254092]: 2025-11-25 16:38:43.954 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.247 254096 INFO nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 13.26 seconds to build instance.#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.263 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.777 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance destroyed successfully.#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.778 254096 DEBUG nova.objects.instance [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.796 254096 DEBUG nova.virt.libvirt.vif [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1046040473',display_name='tempest-DeleteServersTestJSON-server-1046040473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1046040473',id=52,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ia105ljl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member',shelved_at='2025-11-25T16:38:36.469708',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fedb4aef-bdad-4b3a-abdc-073591bfcffa'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:38:29Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=497caf1f-53fe-425d-8e5c-10b2f0a2506d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.797 254096 DEBUG nova.network.os_vif_util [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.798 254096 DEBUG nova.network.os_vif_util [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.798 254096 DEBUG os_vif [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.801 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd90f4f5a-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.808 254096 INFO os_vif [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c')
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.989 254096 DEBUG nova.compute.manager [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.990 254096 DEBUG nova.compute.manager [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing instance network info cache due to event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.990 254096 DEBUG oslo_concurrency.lockutils [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.991 254096 DEBUG oslo_concurrency.lockutils [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:38:44 np0005535469 nova_compute[254092]: 2025-11-25 16:38:44.991 254096 DEBUG nova.network.neutron [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:38:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.8 MiB/s wr, 100 op/s
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.542 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Deleting instance files /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d_del
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.547 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Deletion of /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d_del complete
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.689 254096 INFO nova.scheduler.client.report [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance 497caf1f-53fe-425d-8e5c-10b2f0a2506d
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.762 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.764 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.819 254096 DEBUG oslo_concurrency.processutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.873 254096 DEBUG nova.compute.manager [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.875 254096 DEBUG oslo_concurrency.lockutils [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.875 254096 DEBUG oslo_concurrency.lockutils [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.875 254096 DEBUG oslo_concurrency.lockutils [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.876 254096 DEBUG nova.compute.manager [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] No waiting events found dispatching network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:38:45 np0005535469 nova_compute[254092]: 2025-11-25 16:38:45.876 254096 WARNING nova.compute.manager [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received unexpected event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for instance with vm_state active and task_state None.
Nov 25 11:38:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557752528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:46 np0005535469 nova_compute[254092]: 2025-11-25 16:38:46.318 254096 DEBUG oslo_concurrency.processutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:38:46 np0005535469 nova_compute[254092]: 2025-11-25 16:38:46.325 254096 DEBUG nova.compute.provider_tree [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:38:46 np0005535469 nova_compute[254092]: 2025-11-25 16:38:46.345 254096 DEBUG nova.scheduler.client.report [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:38:46 np0005535469 nova_compute[254092]: 2025-11-25 16:38:46.376 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:38:46 np0005535469 nova_compute[254092]: 2025-11-25 16:38:46.456 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 74.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:38:47 np0005535469 nova_compute[254092]: 2025-11-25 16:38:47.453 254096 DEBUG nova.compute.manager [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:38:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 188 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 108 op/s
Nov 25 11:38:47 np0005535469 nova_compute[254092]: 2025-11-25 16:38:47.484 254096 DEBUG nova.network.neutron [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updated VIF entry in instance network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:38:47 np0005535469 nova_compute[254092]: 2025-11-25 16:38:47.485 254096 DEBUG nova.network.neutron [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": null, "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:38:47 np0005535469 nova_compute[254092]: 2025-11-25 16:38:47.509 254096 INFO nova.compute.manager [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] instance snapshotting
Nov 25 11:38:47 np0005535469 nova_compute[254092]: 2025-11-25 16:38:47.523 254096 DEBUG oslo_concurrency.lockutils [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:38:47 np0005535469 nova_compute[254092]: 2025-11-25 16:38:47.991 254096 INFO nova.virt.libvirt.driver [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Beginning live snapshot process
Nov 25 11:38:48 np0005535469 nova_compute[254092]: 2025-11-25 16:38:48.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:38:48 np0005535469 nova_compute[254092]: 2025-11-25 16:38:48.198 254096 DEBUG nova.virt.libvirt.imagebackend [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 11:38:48 np0005535469 nova_compute[254092]: 2025-11-25 16:38:48.613 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(b91028f01b13453bb03c3748cb1f0430) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 11:38:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 188 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 108 op/s
Nov 25 11:38:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 25 11:38:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 25 11:38:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 25 11:38:49 np0005535469 nova_compute[254092]: 2025-11-25 16:38:49.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:38:49 np0005535469 nova_compute[254092]: 2025-11-25 16:38:49.837 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] cloning vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk@b91028f01b13453bb03c3748cb1f0430 to images/82803f77-dd79-40bf-9575-d6c61a15ce8a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 11:38:50 np0005535469 nova_compute[254092]: 2025-11-25 16:38:50.089 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] flattening images/82803f77-dd79-40bf-9575-d6c61a15ce8a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 11:38:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:50 np0005535469 nova_compute[254092]: 2025-11-25 16:38:50.923 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:38:50 np0005535469 nova_compute[254092]: 2025-11-25 16:38:50.923 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:38:50 np0005535469 nova_compute[254092]: 2025-11-25 16:38:50.989 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.022 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] removing snapshot(b91028f01b13453bb03c3748cb1f0430) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.062 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.062 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.067 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.067 254096 INFO nova.compute.claims [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005052917643419938 of space, bias 1.0, pg target 0.15158752930259814 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241774923487661 of space, bias 1.0, pg target 0.42725324770462986 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.217 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:38:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 148 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 179 op/s
Nov 25 11:38:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320757746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.654 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.660 254096 DEBUG nova.compute.provider_tree [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:38:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.676 254096 DEBUG nova.scheduler.client.report [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:38:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 25 11:38:51 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.705 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.706 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.772 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.773 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.842 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.880 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.985 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.987 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:38:51 np0005535469 nova_compute[254092]: 2025-11-25 16:38:51.988 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Creating image(s)
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.066 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.097 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.121 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.125 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.163 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(snap) on rbd image(82803f77-dd79-40bf-9575-d6c61a15ce8a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.236 254096 DEBUG nova.policy [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cb4b65d4074373a38534856574dc8f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.239 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.240 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.241 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.241 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.262 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:52 np0005535469 nova_compute[254092]: 2025-11-25 16:38:52.265 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 25 11:38:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:53 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.472 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 148 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.535 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] resizing rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.580 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully created port: a6f06f5d-486f-4039-a0cb-30b122e69258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.728 254096 DEBUG nova.objects.instance [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.743 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.744 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Ensure instance console log exists: /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.744 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.745 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:53 np0005535469 nova_compute[254092]: 2025-11-25 16:38:53.745 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:54 np0005535469 nova_compute[254092]: 2025-11-25 16:38:54.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:54 np0005535469 nova_compute[254092]: 2025-11-25 16:38:54.981 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully updated port: a6f06f5d-486f-4039-a0cb-30b122e69258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.046 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.046 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.047 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035909139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035909139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 25 11:38:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 152 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 201 op/s
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.530 254096 DEBUG nova.compute.manager [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-changed-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.530 254096 DEBUG nova.compute.manager [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing instance network info cache due to event network-changed-a6f06f5d-486f-4039-a0cb-30b122e69258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.530 254096 DEBUG oslo_concurrency.lockutils [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.532 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:38:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.598 254096 INFO nova.virt.libvirt.driver [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Snapshot image upload complete#033[00m
Nov 25 11:38:55 np0005535469 nova_compute[254092]: 2025-11-25 16:38:55.598 254096 INFO nova.compute.manager [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 8.09 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:38:56 np0005535469 nova_compute[254092]: 2025-11-25 16:38:56.754 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:56 np0005535469 nova_compute[254092]: 2025-11-25 16:38:56.754 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:56 np0005535469 nova_compute[254092]: 2025-11-25 16:38:56.787 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.041 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.076 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.077 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.086 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.086 254096 INFO nova.compute.claims [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.111 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.112 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance network_info: |[{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.112 254096 DEBUG oslo_concurrency.lockutils [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.112 254096 DEBUG nova.network.neutron [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing network info cache for port a6f06f5d-486f-4039-a0cb-30b122e69258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.115 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start _get_guest_xml network_info=[{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.120 254096 WARNING nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.124 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.126 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.133 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.134 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.134 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.137 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.137 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.137 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.140 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 180 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.1 MiB/s wr, 132 op/s
Nov 25 11:38:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:38:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770989582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.617 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.642 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.647 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:57 np0005535469 nova_compute[254092]: 2025-11-25 16:38:57.882 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:38:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772629285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.207 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.208 254096 DEBUG nova.virt.libvirt.vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:51Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.209 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.210 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.211 254096 DEBUG nova.objects.instance [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.225 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <uuid>97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211</uuid>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <name>instance-00000036</name>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:name>tempest-AttachInterfacesV270Test-server-540246934</nova:name>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:38:57</nova:creationTime>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:user uuid="96cb4b65d4074373a38534856574dc8f">tempest-AttachInterfacesV270Test-1255379647-project-member</nova:user>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:project uuid="a92e4b86655441c59ead5a1bd83173e5">tempest-AttachInterfacesV270Test-1255379647</nova:project>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <nova:port uuid="a6f06f5d-486f-4039-a0cb-30b122e69258">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <entry name="serial">97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211</entry>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <entry name="uuid">97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211</entry>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:76:39:37"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <target dev="tapa6f06f5d-48"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/console.log" append="off"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:38:58 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:38:58 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:38:58 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:38:58 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Preparing to wait for external event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.228 254096 DEBUG nova.virt.libvirt.vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:51Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.228 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.229 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.229 254096 DEBUG os_vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.230 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.230 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.230 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.234 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6f06f5d-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.234 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6f06f5d-48, col_values=(('external_ids', {'iface-id': 'a6f06f5d-486f-4039-a0cb-30b122e69258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:39:37', 'vm-uuid': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:58 np0005535469 NetworkManager[48891]: <info>  [1764088738.2375] manager: (tapa6f06f5d-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.243 254096 INFO os_vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48')#033[00m
Nov 25 11:38:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:38:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567130199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.333 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.340 254096 DEBUG nova.compute.provider_tree [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.359 254096 DEBUG nova.scheduler.client.report [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.384 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.385 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.385 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No VIF found with MAC fa:16:3e:76:39:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.385 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Using config drive#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.408 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.414 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.415 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.474 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.475 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.495 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.524 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.630 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.632 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.633 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Creating image(s)#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.651 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.673 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.695 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.699 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.779 254096 DEBUG nova.policy [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.783 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.784 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.785 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.785 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.810 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:58 np0005535469 nova_compute[254092]: 2025-11-25 16:38:58.816 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5947529-cfda-4753-94cd-b764da9d5c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 25 11:38:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 25 11:38:59 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.282 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5947529-cfda-4753-94cd-b764da9d5c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.343 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:38:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 180 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 125 op/s
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.481 254096 DEBUG nova.objects.instance [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.494 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.494 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Ensure instance console log exists: /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.495 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.495 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.496 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.588 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Creating config drive at /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.594 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3iiwimi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.755 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3iiwimi" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.789 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.794 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.976 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:38:59 np0005535469 nova_compute[254092]: 2025-11-25 16:38:59.977 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deleting local config drive /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config because it was imported into RBD.#033[00m
Nov 25 11:39:00 np0005535469 kernel: tapa6f06f5d-48: entered promiscuous mode
Nov 25 11:39:00 np0005535469 NetworkManager[48891]: <info>  [1764088740.0262] manager: (tapa6f06f5d-48): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00486|binding|INFO|Claiming lport a6f06f5d-486f-4039-a0cb-30b122e69258 for this chassis.
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00487|binding|INFO|a6f06f5d-486f-4039-a0cb-30b122e69258: Claiming fa:16:3e:76:39:37 10.100.0.6
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.046 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.049 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 bound to our chassis#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.052 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5808bee-5100-4cdf-b578-a1bc323dafe9#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.066 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca139968-8e23-4c15-8cd3-2a5fdc5765c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.067 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5808bee-51 in ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.073 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5808bee-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.073 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[409c4890-bb7c-4cee-9c78-f95540fb364a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 systemd-machined[216343]: New machine qemu-63-instance-00000036.
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.074 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb878d25-6c38-420e-b7be-fa9d1d942691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 systemd-udevd[315149]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:39:00 np0005535469 systemd[1]: Started Virtual Machine qemu-63-instance-00000036.
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.091 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9196601d-c37b-4eec-beee-f93b128a81d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 NetworkManager[48891]: <info>  [1764088740.0962] device (tapa6f06f5d-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:39:00 np0005535469 NetworkManager[48891]: <info>  [1764088740.0975] device (tapa6f06f5d-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00488|binding|INFO|Setting lport a6f06f5d-486f-4039-a0cb-30b122e69258 ovn-installed in OVS
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00489|binding|INFO|Setting lport a6f06f5d-486f-4039-a0cb-30b122e69258 up in Southbound
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.113 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5125b11f-0a3d-4e1f-abe7-352f8e8eacb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.146 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3b09e7-bda3-4dc7-9371-fb33cf3219d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 NetworkManager[48891]: <info>  [1764088740.1535] manager: (tapd5808bee-50): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07519d69-19ba-4136-8af9-f1f0511cce95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.197 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1bf113-ead5-44d4-98b0-d3c5029c3b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.202 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[173ae467-0e75-4422-a502-0651075662bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 NetworkManager[48891]: <info>  [1764088740.2298] device (tapd5808bee-50): carrier: link connected
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.237 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a50ce31-7635-497c-9ede-22df7eee4b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.250027) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740250113, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2219, "num_deletes": 260, "total_data_size": 3336185, "memory_usage": 3378272, "flush_reason": "Manual Compaction"}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.262 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b74a44f1-ac8f-4a5a-a614-ab075d2cb05b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315181, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740274834, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3274719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30737, "largest_seqno": 32955, "table_properties": {"data_size": 3264596, "index_size": 6489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21317, "raw_average_key_size": 20, "raw_value_size": 3244167, "raw_average_value_size": 3177, "num_data_blocks": 283, "num_entries": 1021, "num_filter_entries": 1021, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088551, "oldest_key_time": 1764088551, "file_creation_time": 1764088740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 24996 microseconds, and 7341 cpu microseconds.
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.275035) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3274719 bytes OK
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.275099) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.276774) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.276792) EVENT_LOG_v1 {"time_micros": 1764088740276786, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.276819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3326752, prev total WAL file size 3326752, number of live WAL files 2.
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.278179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3197KB)], [68(6823KB)]
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740278258, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10262462, "oldest_snapshot_seqno": -1}
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad733870-06ad-475f-84f5-13d155f33b87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:9ed0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519782, 'tstamp': 519782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315182, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffcf78a-f52a-4079-ba20-29500048b89d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315183, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5816 keys, 8664787 bytes, temperature: kUnknown
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740333481, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8664787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8625423, "index_size": 23702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 146457, "raw_average_key_size": 25, "raw_value_size": 8520436, "raw_average_value_size": 1464, "num_data_blocks": 964, "num_entries": 5816, "num_filter_entries": 5816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.333755) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8664787 bytes
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.334874) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.6 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 6345, records dropped: 529 output_compression: NoCompression
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.334890) EVENT_LOG_v1 {"time_micros": 1764088740334881, "job": 38, "event": "compaction_finished", "compaction_time_micros": 55293, "compaction_time_cpu_micros": 21055, "output_level": 6, "num_output_files": 1, "total_output_size": 8664787, "num_input_records": 6345, "num_output_records": 5816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740335413, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740336399, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.278046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a97ef4-bdd0-4137-b416-c83de9a896f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.400 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[043bdb7f-8c57-4b8c-ba46-238a98fed43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5808bee-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 kernel: tapd5808bee-50: entered promiscuous mode
Nov 25 11:39:00 np0005535469 NetworkManager[48891]: <info>  [1764088740.4051] manager: (tapd5808bee-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.407 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5808bee-50, col_values=(('external_ids', {'iface-id': 'ab0217e4-1718-4cef-9483-cef43176b686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00490|binding|INFO|Releasing lport ab0217e4-1718-4cef-9483-cef43176b686 from this chassis (sb_readonly=0)
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.431 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5808bee-5100-4cdf-b578-a1bc323dafe9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5808bee-5100-4cdf-b578-a1bc323dafe9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.432 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d42f6565-bad9-4eba-aef1-7a38e500c7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.433 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/d5808bee-5100-4cdf-b578-a1bc323dafe9.pid.haproxy
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:39:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.433 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'env', 'PROCESS_TAG=haproxy-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5808bee-5100-4cdf-b578-a1bc323dafe9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 25 11:39:00 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:ad:75 10.100.0.9
Nov 25 11:39:00 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:00Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:ad:75 10.100.0.9
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.589 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Successfully created port: aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.634 254096 DEBUG nova.compute.manager [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG oslo_concurrency.lockutils [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG oslo_concurrency.lockutils [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG oslo_concurrency.lockutils [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG nova.compute.manager [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Processing event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.673 254096 DEBUG nova.compute.manager [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.720 254096 INFO nova.compute.manager [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] instance snapshotting#033[00m
Nov 25 11:39:00 np0005535469 podman[315215]: 2025-11-25 16:39:00.842529299 +0000 UTC m=+0.064816920 container create 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:39:00 np0005535469 systemd[1]: Started libpod-conmon-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded.scope.
Nov 25 11:39:00 np0005535469 podman[315215]: 2025-11-25 16:39:00.808602738 +0000 UTC m=+0.030890359 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:39:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c248a764d5721c6e561451525ee03f5d19e082815bceb4132727d099f101a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:00 np0005535469 podman[315215]: 2025-11-25 16:39:00.942012938 +0000 UTC m=+0.164300559 container init 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:39:00 np0005535469 podman[315215]: 2025-11-25 16:39:00.948662278 +0000 UTC m=+0.170949909 container start 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:39:00 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : New worker (315237) forked
Nov 25 11:39:00 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : Loading success.
Nov 25 11:39:00 np0005535469 nova_compute[254092]: 2025-11-25 16:39:00.984 254096 INFO nova.virt.libvirt.driver [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Beginning live snapshot process#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.147 254096 DEBUG nova.network.neutron [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updated VIF entry in instance network info cache for port a6f06f5d-486f-4039-a0cb-30b122e69258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.148 254096 DEBUG nova.network.neutron [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.154 254096 DEBUG nova.virt.libvirt.imagebackend [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.161 254096 DEBUG oslo_concurrency.lockutils [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.309 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(1e8da10f0cb94a5f915a173d7d11e28b) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:39:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 25 11:39:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 25 11:39:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 25 11:39:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 213 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 10 MiB/s wr, 300 op/s
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.540 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] cloning vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk@1e8da10f0cb94a5f915a173d7d11e28b to images/0216513c-fd2f-4f07-aa1e-cd470e84e4a4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.734 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] flattening images/0216513c-fd2f-4f07-aa1e-cd470e84e4a4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.798 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088741.7631524, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.798 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Started (Lifecycle Event)#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.800 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.803 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.806 254096 INFO nova.virt.libvirt.driver [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance spawned successfully.#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.807 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.830 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.836 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.839 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.839 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.840 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.840 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.841 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.841 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:01 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.876 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088741.763254, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.876 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.904 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.909 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088741.8031816, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.910 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.934 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.938 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.963 254096 INFO nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 9.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.964 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:01 np0005535469 nova_compute[254092]: 2025-11-25 16:39:01.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.077 254096 INFO nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 11.04 seconds to build instance.#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.135 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.524 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.691 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] removing snapshot(1e8da10f0cb94a5f915a173d7d11e28b) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG nova.compute.manager [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG oslo_concurrency.lockutils [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG oslo_concurrency.lockutils [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG oslo_concurrency.lockutils [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG nova.compute.manager [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.878 254096 WARNING nova.compute.manager [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.915 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Successfully updated port: aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.936 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.936 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:39:02 np0005535469 nova_compute[254092]: 2025-11-25 16:39:02.936 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.307 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:39:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 213 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 7.7 MiB/s wr, 233 op/s
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 25 11:39:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 25 11:39:03 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 25 11:39:03 np0005535469 nova_compute[254092]: 2025-11-25 16:39:03.966 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(snap) on rbd image(0216513c-fd2f-4f07-aa1e-cd470e84e4a4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:39:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532262061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.128 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.214 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.215 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.219 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.219 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.392 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updating instance_info_cache with network_info: [{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.446 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.447 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance network_info: |[{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.449 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start _get_guest_xml network_info=[{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.453 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.454 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3793MB free_disk=59.90127944946289GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.454 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.455 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.456 254096 WARNING nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.460 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.461 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.463 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.463 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.464 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.464 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.464 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.468 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.553 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.553 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e5947529-cfda-4753-94cd-b764da9d5c2c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:39:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.574 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "interface-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.575 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "interface-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.575 254096 DEBUG nova.objects.instance [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'flavor' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.612 254096 DEBUG nova.objects.instance [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.620 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:39:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 25 11:39:04 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.641 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:39:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091903382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.934 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.955 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:04 np0005535469 nova_compute[254092]: 2025-11-25 16:39:04.958 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205301590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.109 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.124 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG nova.compute.manager [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-changed-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG nova.compute.manager [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Refreshing instance network info cache due to event network-changed-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG oslo_concurrency.lockutils [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG oslo_concurrency.lockutils [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.133 254096 DEBUG nova.network.neutron [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Refreshing network info cache for port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.146 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.177 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.178 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019557981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.440 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.441 254096 DEBUG nova.virt.libvirt.vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-251232744',display_name='tempest-DeleteServersTestJSON-server-251232744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-251232744',id=55,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-xpvmx0am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-1
50185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:58Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=e5947529-cfda-4753-94cd-b764da9d5c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.442 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.442 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.444 254096 DEBUG nova.objects.instance [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.461 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <uuid>e5947529-cfda-4753-94cd-b764da9d5c2c</uuid>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <name>instance-00000037</name>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersTestJSON-server-251232744</nova:name>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:39:04</nova:creationTime>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <nova:port uuid="aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <entry name="serial">e5947529-cfda-4753-94cd-b764da9d5c2c</entry>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <entry name="uuid">e5947529-cfda-4753-94cd-b764da9d5c2c</entry>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e5947529-cfda-4753-94cd-b764da9d5c2c_disk">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:38:a9:78"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <target dev="tapaabf40d8-e3"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/console.log" append="off"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:39:05 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:39:05 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:39:05 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:39:05 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.461 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Preparing to wait for external event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.462 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.462 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.462 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.463 254096 DEBUG nova.virt.libvirt.vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-251232744',display_name='tempest-DeleteServersTestJSON-server-251232744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-251232744',id=55,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-xpvmx0am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServers
TestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:58Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=e5947529-cfda-4753-94cd-b764da9d5c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.463 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.464 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.464 254096 DEBUG os_vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.466 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.466 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaabf40d8-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaabf40d8-e3, col_values=(('external_ids', {'iface-id': 'aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:a9:78', 'vm-uuid': 'e5947529-cfda-4753-94cd-b764da9d5c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:05 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:05 np0005535469 NetworkManager[48891]: <info>  [1764088745.4731] manager: (tapaabf40d8-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.479 254096 INFO os_vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3')#033[00m
Nov 25 11:39:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 235 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.7 MiB/s wr, 235 op/s
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.565 254096 DEBUG nova.policy [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cb4b65d4074373a38534856574dc8f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.613 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.615 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.615 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:38:a9:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.615 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Using config drive#033[00m
Nov 25 11:39:05 np0005535469 podman[315538]: 2025-11-25 16:39:05.672127613 +0000 UTC m=+0.074293415 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:39:05 np0005535469 podman[315539]: 2025-11-25 16:39:05.697828552 +0000 UTC m=+0.099658146 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 11:39:05 np0005535469 podman[315540]: 2025-11-25 16:39:05.711585254 +0000 UTC m=+0.113509120 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:39:05 np0005535469 nova_compute[254092]: 2025-11-25 16:39:05.808 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.178 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.178 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.178 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.179 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.655 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Creating config drive at /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.661 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqbonlcb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.808 254096 INFO nova.virt.libvirt.driver [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Snapshot image upload complete#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.809 254096 INFO nova.compute.manager [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 6.09 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.812 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqbonlcb" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.834 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:06 np0005535469 nova_compute[254092]: 2025-11-25 16:39:06.837 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.015 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully created port: c66763cf-d7ff-412d-89d8-fb6db38952f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.058 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.059 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deleting local config drive /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config because it was imported into RBD.#033[00m
Nov 25 11:39:07 np0005535469 kernel: tapaabf40d8-e3: entered promiscuous mode
Nov 25 11:39:07 np0005535469 NetworkManager[48891]: <info>  [1764088747.1088] manager: (tapaabf40d8-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:07Z|00491|binding|INFO|Claiming lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for this chassis.
Nov 25 11:39:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:07Z|00492|binding|INFO|aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2: Claiming fa:16:3e:38:a9:78 10.100.0.6
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.168 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.169 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.170 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa#033[00m
Nov 25 11:39:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:07Z|00493|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 ovn-installed in OVS
Nov 25 11:39:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:07Z|00494|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 up in Southbound
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f293f42-bccd-4be0-8741-c67ca87a4c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.184 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.186 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.186 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e63288c1-79eb-4fbd-9266-a8675b01368e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5df82957-5678-4af6-9d75-8cf0ee792d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 systemd-udevd[315669]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.201 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f25373b1-f528-4eca-ad4e-ab80c8b904f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 systemd-machined[216343]: New machine qemu-64-instance-00000037.
Nov 25 11:39:07 np0005535469 NetworkManager[48891]: <info>  [1764088747.2113] device (tapaabf40d8-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:39:07 np0005535469 NetworkManager[48891]: <info>  [1764088747.2119] device (tapaabf40d8-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:39:07 np0005535469 systemd[1]: Started Virtual Machine qemu-64-instance-00000037.
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.229 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[578971f8-cf5c-435f-b88f-9246d9a3fb84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.261 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7ab98f-0755-4ded-a60f-16cce7651591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.269 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec7ad80-0081-4e59-a284-273c6f0fd7d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 NetworkManager[48891]: <info>  [1764088747.2708] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.306 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[46a19992-1b5f-4018-8895-236c999d8cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[87560e57-f8bb-4a72-9a38-967ec66fdf7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 NetworkManager[48891]: <info>  [1764088747.3363] device (tape469a950-70): carrier: link connected
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.342 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0413e089-02ef-496c-94bd-b4edd7ef42ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.361 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07a1f60a-c484-48e1-a133-3d2738ce9bab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520492, 'reachable_time': 43790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315701, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.378 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d84fd010-fd35-4bdd-8753-3d2891853582]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520492, 'tstamp': 520492}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315702, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.396 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1802a45-69a2-41d2-9db1-57c7c54b4886]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520492, 'reachable_time': 43790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315703, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.424 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fba259d0-0b28-4b41-868d-78382732e333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.455 254096 DEBUG nova.network.neutron [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updated VIF entry in instance network info cache for port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.455 254096 DEBUG nova.network.neutron [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updating instance_info_cache with network_info: [{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.468 254096 DEBUG oslo_concurrency.lockutils [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:39:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 292 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.7 MiB/s wr, 296 op/s
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.492 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5f1758-8f9c-4880-b2d7-b0f438c512a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:07 np0005535469 NetworkManager[48891]: <info>  [1764088747.4970] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 25 11:39:07 np0005535469 kernel: tape469a950-70: entered promiscuous mode
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.499 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:07Z|00495|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.522 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.523 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d5fcc8-e730-458d-bb0e-0d6aeb53bd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.524 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:39:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.525 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.675 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088747.6749299, e5947529-cfda-4753-94cd-b764da9d5c2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.675 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Started (Lifecycle Event)#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.891 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.899 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088747.6751065, e5947529-cfda-4753-94cd-b764da9d5c2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.900 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.927 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.931 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:07 np0005535469 nova_compute[254092]: 2025-11-25 16:39:07.960 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:39:07 np0005535469 podman[315777]: 2025-11-25 16:39:07.875708336 +0000 UTC m=+0.021751731 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:08 np0005535469 podman[315777]: 2025-11-25 16:39:08.150973594 +0000 UTC m=+0.297016969 container create c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 11:39:08 np0005535469 systemd[1]: Started libpod-conmon-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957.scope.
Nov 25 11:39:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ad08b84d0545a65c2b91231b7123f70815850aa282c0b4fc83b481c7c3e304/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:08 np0005535469 podman[315777]: 2025-11-25 16:39:08.325993512 +0000 UTC m=+0.472036907 container init c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:39:08 np0005535469 podman[315777]: 2025-11-25 16:39:08.331771309 +0000 UTC m=+0.477814684 container start c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:08 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : New worker (315798) forked
Nov 25 11:39:08 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : Loading success.
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.381 254096 DEBUG nova.compute.manager [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.381 254096 DEBUG oslo_concurrency.lockutils [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.382 254096 DEBUG oslo_concurrency.lockutils [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.382 254096 DEBUG oslo_concurrency.lockutils [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.382 254096 DEBUG nova.compute.manager [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Processing event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.383 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.388 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088748.3882308, e5947529-cfda-4753-94cd-b764da9d5c2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.388 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.391 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.395 254096 INFO nova.virt.libvirt.driver [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance spawned successfully.#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.396 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.412 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.415 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.415 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.416 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.416 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.417 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.417 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.424 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.456 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.516 254096 INFO nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 9.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.516 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.584 254096 INFO nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 11.69 seconds to build instance.#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.612 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.701 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully updated port: c66763cf-d7ff-412d-89d8-fb6db38952f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.716 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.716 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.716 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:39:08 np0005535469 nova_compute[254092]: 2025-11-25 16:39:08.935 254096 WARNING nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] d5808bee-5100-4cdf-b578-a1bc323dafe9 already exists in list: networks containing: ['d5808bee-5100-4cdf-b578-a1bc323dafe9']. ignoring it#033[00m
Nov 25 11:39:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 292 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.7 MiB/s wr, 296 op/s
Nov 25 11:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:39:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 25 11:39:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 25 11:39:10 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.841 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.872 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.875 254096 DEBUG nova.virt.libvirt.vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.875 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.876 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.876 254096 DEBUG os_vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.877 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.877 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.881 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc66763cf-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.881 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc66763cf-d7, col_values=(('external_ids', {'iface-id': 'c66763cf-d7ff-412d-89d8-fb6db38952f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:7c:94', 'vm-uuid': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 NetworkManager[48891]: <info>  [1764088750.8838] manager: (tapc66763cf-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.891 254096 INFO os_vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7')#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.892 254096 DEBUG nova.virt.libvirt.vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.892 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.893 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.896 254096 DEBUG nova.virt.libvirt.guest [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] attach device xml: <interface type="ethernet">
Nov 25 11:39:10 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:a6:7c:94"/>
Nov 25 11:39:10 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 11:39:10 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:39:10 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 11:39:10 np0005535469 nova_compute[254092]:  <target dev="tapc66763cf-d7"/>
Nov 25 11:39:10 np0005535469 nova_compute[254092]: </interface>
Nov 25 11:39:10 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 11:39:10 np0005535469 kernel: tapc66763cf-d7: entered promiscuous mode
Nov 25 11:39:10 np0005535469 NetworkManager[48891]: <info>  [1764088750.9065] manager: (tapc66763cf-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Nov 25 11:39:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:10Z|00496|binding|INFO|Claiming lport c66763cf-d7ff-412d-89d8-fb6db38952f9 for this chassis.
Nov 25 11:39:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:10Z|00497|binding|INFO|c66763cf-d7ff-412d-89d8-fb6db38952f9: Claiming fa:16:3e:a6:7c:94 10.100.0.9
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.922 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:7c:94 10.100.0.9'], port_security=['fa:16:3e:a6:7c:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c66763cf-d7ff-412d-89d8-fb6db38952f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.923 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c66763cf-d7ff-412d-89d8-fb6db38952f9 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 bound to our chassis#033[00m
Nov 25 11:39:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.925 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5808bee-5100-4cdf-b578-a1bc323dafe9#033[00m
Nov 25 11:39:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:10Z|00498|binding|INFO|Setting lport c66763cf-d7ff-412d-89d8-fb6db38952f9 ovn-installed in OVS
Nov 25 11:39:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:10Z|00499|binding|INFO|Setting lport c66763cf-d7ff-412d-89d8-fb6db38952f9 up in Southbound
Nov 25 11:39:10 np0005535469 nova_compute[254092]: 2025-11-25 16:39:10.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.949 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4933c86-ed5a-4a30-982a-fd6edb6bea3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:10 np0005535469 systemd-udevd[315814]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:39:10 np0005535469 NetworkManager[48891]: <info>  [1764088750.9695] device (tapc66763cf-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:39:10 np0005535469 NetworkManager[48891]: <info>  [1764088750.9704] device (tapc66763cf-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:39:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.992 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[03bc5ef1-259f-48c1-a85c-b3fa5537aa97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.996 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ab15791a-d884-49f6-8006-cb6f4c6f861e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.020 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.021 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.021 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No VIF found with MAC fa:16:3e:76:39:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.021 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No VIF found with MAC fa:16:3e:a6:7c:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.045 254096 DEBUG nova.virt.libvirt.guest [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:name>tempest-AttachInterfacesV270Test-server-540246934</nova:name>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 16:39:11</nova:creationTime>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:user uuid="96cb4b65d4074373a38534856574dc8f">tempest-AttachInterfacesV270Test-1255379647-project-member</nova:user>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:project uuid="a92e4b86655441c59ead5a1bd83173e5">tempest-AttachInterfacesV270Test-1255379647</nova:project>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:port uuid="a6f06f5d-486f-4039-a0cb-30b122e69258">
Nov 25 11:39:11 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    <nova:port uuid="c66763cf-d7ff-412d-89d8-fb6db38952f9">
Nov 25 11:39:11 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 11:39:11 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 11:39:11 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 11:39:11 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.049 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5ca7ef-3d2d-4f09-acaf-f6275d78dd4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.068 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4157ea-d974-4b0a-9694-ab0eec896d08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315821, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.072 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "interface-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.094 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8af4b4c-47c2-4858-9b5f-99911035e6e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519795, 'tstamp': 519795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315822, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519798, 'tstamp': 519798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315822, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5808bee-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.110 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5808bee-50, col_values=(('external_ids', {'iface-id': 'ab0217e4-1718-4cef-9483-cef43176b686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.110 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 252 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 6.8 MiB/s wr, 435 op/s
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.875 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.876 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.876 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.876 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 WARNING nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-changed-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing instance network info cache due to event network-changed-c66763cf-d7ff-412d-89d8-fb6db38952f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.878 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:39:11 np0005535469 nova_compute[254092]: 2025-11-25 16:39:11.878 254096 DEBUG nova.network.neutron [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing network info cache for port c66763cf-d7ff-412d-89d8-fb6db38952f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.047 254096 DEBUG oslo_concurrency.lockutils [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.048 254096 DEBUG oslo_concurrency.lockutils [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.048 254096 DEBUG nova.compute.manager [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.051 254096 DEBUG nova.compute.manager [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.051 254096 DEBUG nova.objects.instance [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'flavor' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.073 254096 DEBUG nova.virt.libvirt.driver [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.692 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.693 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.693 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.693 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.694 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.695 254096 INFO nova.compute.manager [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Terminating instance#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.696 254096 DEBUG nova.compute.manager [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:39:12 np0005535469 kernel: tapa1b0e8cf-d5 (unregistering): left promiscuous mode
Nov 25 11:39:12 np0005535469 NetworkManager[48891]: <info>  [1764088752.7741] device (tapa1b0e8cf-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:39:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:12Z|00500|binding|INFO|Releasing lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 from this chassis (sb_readonly=0)
Nov 25 11:39:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:12Z|00501|binding|INFO|Setting lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 down in Southbound
Nov 25 11:39:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:12Z|00502|binding|INFO|Removing iface tapa1b0e8cf-d5 ovn-installed in OVS
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.839 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:ad:75 10.100.0.9'], port_security=['fa:16:3e:d3:ad:75 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52c13ebd-df79-43b9-8d5f-e4bf4a2e0738', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80a627278d934815a3ea621e9d6402d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1671917c-f980-406a-8c8d-043f07074abb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f06ea60c-86e3-46a2-b0dc-014d0b0b5949, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.840 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 in datapath abda97f3-dcb7-42ee-af40-cfc387fadfda unbound from our chassis#033[00m
Nov 25 11:39:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.841 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abda97f3-dcb7-42ee-af40-cfc387fadfda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca78ea7f-4888-4848-9d73-9708d1087678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.843 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda namespace which is not needed anymore#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:12 np0005535469 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 25 11:39:12 np0005535469 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Consumed 14.148s CPU time.
Nov 25 11:39:12 np0005535469 systemd-machined[216343]: Machine qemu-62-instance-00000035 terminated.
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.941 254096 INFO nova.virt.libvirt.driver [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance destroyed successfully.#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.941 254096 DEBUG nova.objects.instance [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lazy-loading 'resources' on Instance uuid 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.961 254096 DEBUG nova.virt.libvirt.vif [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-876908934',display_name='tempest-ImagesOneServerTestJSON-server-876908934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-876908934',id=53,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80a627278d934815a3ea621e9d6402d2',ramdisk_id='',reservation_id='r-mt56k20g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-941588767',owner_user_name='tempest-ImagesOneServerTestJSON-941588767-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:06Z,user_data=None,user_id='34706428d3f94a60b53f4a535d408fd1',uuid=52c13ebd-df79-43b9-8d5f-e4bf4a2e0738,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.963 254096 DEBUG nova.network.os_vif_util [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converting VIF {"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.966 254096 DEBUG nova.network.os_vif_util [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.967 254096 DEBUG os_vif [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.971 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1b0e8cf-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:12 np0005535469 nova_compute[254092]: 2025-11-25 16:39:12.979 254096 INFO os_vif [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5')#033[00m
Nov 25 11:39:13 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : haproxy version is 2.8.14-c23fe91
Nov 25 11:39:13 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : path to executable is /usr/sbin/haproxy
Nov 25 11:39:13 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [WARNING]  (314439) : Exiting Master process...
Nov 25 11:39:13 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [ALERT]    (314439) : Current worker (314441) exited with code 143 (Terminated)
Nov 25 11:39:13 np0005535469 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [WARNING]  (314439) : All workers exited. Exiting... (0)
Nov 25 11:39:13 np0005535469 systemd[1]: libpod-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c.scope: Deactivated successfully.
Nov 25 11:39:13 np0005535469 podman[315852]: 2025-11-25 16:39:13.014937531 +0000 UTC m=+0.053015199 container died ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c-userdata-shm.mount: Deactivated successfully.
Nov 25 11:39:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-87d652c06ceeec69f5cac5c1cc50b800c7a964f2bc7a8ca4a2399870c6c25d11-merged.mount: Deactivated successfully.
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:13 np0005535469 podman[315852]: 2025-11-25 16:39:13.072872802 +0000 UTC m=+0.110950450 container cleanup ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:39:13 np0005535469 systemd[1]: libpod-conmon-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c.scope: Deactivated successfully.
Nov 25 11:39:13 np0005535469 podman[315898]: 2025-11-25 16:39:13.161011094 +0000 UTC m=+0.058455797 container remove ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.173 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc6027a-4549-4c28-af08-2fbe4d7701f1]: (4, ('Tue Nov 25 04:39:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda (ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c)\nad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c\nTue Nov 25 04:39:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda (ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c)\nad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af556460-f02e-4046-a81b-0ceb010ef38b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.179 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabda97f3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:13 np0005535469 kernel: tapabda97f3-d0: left promiscuous mode
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.211 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf16140f-9fee-4a5a-9c91-ed2e1d7cc9b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.221 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df8013d2-7308-4b02-86b5-faebb5250687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.225 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af1622db-23b7-41bc-ba07-a140f6f0375d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5b1f1e-dcce-4b5e-b1c9-61fdc1f8eefa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517778, 'reachable_time': 34569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315912, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.250 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.250 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4849b1ba-069e-4335-a131-0f04573fa07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:13 np0005535469 systemd[1]: run-netns-ovnmeta\x2dabda97f3\x2ddcb7\x2d42ee\x2daf40\x2dcfc387fadfda.mount: Deactivated successfully.
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.423 254096 INFO nova.virt.libvirt.driver [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deleting instance files /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_del#033[00m
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.425 254096 INFO nova.virt.libvirt.driver [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deletion of /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_del complete#033[00m
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.475 254096 INFO nova.compute.manager [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.476 254096 DEBUG oslo.service.loopingcall [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.476 254096 DEBUG nova.compute.manager [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:39:13 np0005535469 nova_compute[254092]: 2025-11-25 16:39:13.476 254096 DEBUG nova.network.neutron [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:39:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 252 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 4.0 MiB/s wr, 253 op/s
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.616 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.054 254096 DEBUG nova.compute.manager [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-unplugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.055 254096 DEBUG oslo_concurrency.lockutils [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.055 254096 DEBUG oslo_concurrency.lockutils [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.055 254096 DEBUG oslo_concurrency.lockutils [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.056 254096 DEBUG nova.compute.manager [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] No waiting events found dispatching network-vif-unplugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.056 254096 DEBUG nova.compute.manager [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-unplugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.219 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.221 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.269 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.269 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 WARNING nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 WARNING nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.340 254096 DEBUG nova.network.neutron [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.371 254096 INFO nova.compute.manager [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 0.89 seconds to deallocate network for instance.#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.441 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.442 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.442 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.442 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.443 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.444 254096 INFO nova.compute.manager [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Terminating instance#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.446 254096 DEBUG nova.compute.manager [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.449 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.449 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.559 254096 DEBUG oslo_concurrency.processutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.618 254096 DEBUG nova.network.neutron [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updated VIF entry in instance network info cache for port c66763cf-d7ff-412d-89d8-fb6db38952f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.621 254096 DEBUG nova.network.neutron [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:14 np0005535469 kernel: tapa6f06f5d-48 (unregistering): left promiscuous mode
Nov 25 11:39:14 np0005535469 NetworkManager[48891]: <info>  [1764088754.6549] device (tapa6f06f5d-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00503|binding|INFO|Releasing lport a6f06f5d-486f-4039-a0cb-30b122e69258 from this chassis (sb_readonly=0)
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00504|binding|INFO|Setting lport a6f06f5d-486f-4039-a0cb-30b122e69258 down in Southbound
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00505|binding|INFO|Removing iface tapa6f06f5d-48 ovn-installed in OVS
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.668 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.677 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.679 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.680 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5808bee-5100-4cdf-b578-a1bc323dafe9#033[00m
Nov 25 11:39:14 np0005535469 kernel: tapc66763cf-d7 (unregistering): left promiscuous mode
Nov 25 11:39:14 np0005535469 NetworkManager[48891]: <info>  [1764088754.6889] device (tapc66763cf-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00506|binding|INFO|Releasing lport c66763cf-d7ff-412d-89d8-fb6db38952f9 from this chassis (sb_readonly=0)
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00507|binding|INFO|Setting lport c66763cf-d7ff-412d-89d8-fb6db38952f9 down in Southbound
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00508|binding|INFO|Removing iface tapc66763cf-d7 ovn-installed in OVS
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.719 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:7c:94 10.100.0.9'], port_security=['fa:16:3e:a6:7c:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c66763cf-d7ff-412d-89d8-fb6db38952f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.719 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c86b3baf-2a65-4517-bf90-202234b17ab1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000036.scope: Deactivated successfully.
Nov 25 11:39:14 np0005535469 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000036.scope: Consumed 13.563s CPU time.
Nov 25 11:39:14 np0005535469 systemd-machined[216343]: Machine qemu-63-instance-00000036 terminated.
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.763 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3568e660-5a36-417f-b6ea-2e4e5fdb1c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.767 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[71d7ba19-1be9-4ae6-a377-6b53ed365017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.800 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4b814250-30c9-4eca-b52b-395ae8ce416e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.821 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2989b42f-dbb1-4006-8b7f-fd07675bb5b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315949, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8049e4f-4dac-49f6-ac3f-1de260d47e56]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519795, 'tstamp': 519795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315950, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519798, 'tstamp': 519798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315950, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.847 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.862 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5808bee-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5808bee-50, col_values=(('external_ids', {'iface-id': 'ab0217e4-1718-4cef-9483-cef43176b686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.864 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.865 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c66763cf-d7ff-412d-89d8-fb6db38952f9 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.866 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5808bee-5100-4cdf-b578-a1bc323dafe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.867 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[023a8ac1-b0ca-40b8-aebf-19cebb710526]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.867 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 namespace which is not needed anymore#033[00m
Nov 25 11:39:14 np0005535469 kernel: tapa6f06f5d-48: entered promiscuous mode
Nov 25 11:39:14 np0005535469 NetworkManager[48891]: <info>  [1764088754.8717] manager: (tapa6f06f5d-48): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Nov 25 11:39:14 np0005535469 kernel: tapa6f06f5d-48 (unregistering): left promiscuous mode
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00509|binding|INFO|Claiming lport a6f06f5d-486f-4039-a0cb-30b122e69258 for this chassis.
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00510|binding|INFO|a6f06f5d-486f-4039-a0cb-30b122e69258: Claiming fa:16:3e:76:39:37 10.100.0.6
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.891 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:14 np0005535469 NetworkManager[48891]: <info>  [1764088754.8931] manager: (tapc66763cf-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Nov 25 11:39:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:14Z|00511|binding|INFO|Releasing lport a6f06f5d-486f-4039-a0cb-30b122e69258 from this chassis (sb_readonly=0)
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.927 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.933 254096 INFO nova.virt.libvirt.driver [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance destroyed successfully.#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.934 254096 DEBUG nova.objects.instance [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'resources' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.947 254096 DEBUG nova.virt.libvirt.vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proj
ect_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.947 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.948 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.949 254096 DEBUG os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.951 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6f06f5d-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.960 254096 INFO os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48')#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.961 254096 DEBUG nova.virt.libvirt.vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proj
ect_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.961 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.962 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.962 254096 DEBUG os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.965 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc66763cf-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:14 np0005535469 nova_compute[254092]: 2025-11-25 16:39:14.970 254096 INFO os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7')#033[00m
Nov 25 11:39:15 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : haproxy version is 2.8.14-c23fe91
Nov 25 11:39:15 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : path to executable is /usr/sbin/haproxy
Nov 25 11:39:15 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [WARNING]  (315235) : Exiting Master process...
Nov 25 11:39:15 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [ALERT]    (315235) : Current worker (315237) exited with code 143 (Terminated)
Nov 25 11:39:15 np0005535469 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [WARNING]  (315235) : All workers exited. Exiting... (0)
Nov 25 11:39:15 np0005535469 systemd[1]: libpod-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded.scope: Deactivated successfully.
Nov 25 11:39:15 np0005535469 podman[315990]: 2025-11-25 16:39:15.049292922 +0000 UTC m=+0.056797423 container died 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:39:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602312147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded-userdata-shm.mount: Deactivated successfully.
Nov 25 11:39:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-14c248a764d5721c6e561451525ee03f5d19e082815bceb4132727d099f101a8-merged.mount: Deactivated successfully.
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.163 254096 DEBUG oslo_concurrency.processutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.169 254096 DEBUG nova.compute.provider_tree [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:39:15 np0005535469 podman[315990]: 2025-11-25 16:39:15.179561366 +0000 UTC m=+0.187065867 container cleanup 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:39:15 np0005535469 systemd[1]: libpod-conmon-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded.scope: Deactivated successfully.
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.197 254096 DEBUG nova.scheduler.client.report [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.222 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.223 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.260 254096 INFO nova.scheduler.client.report [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Deleted allocations for instance 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738#033[00m
Nov 25 11:39:15 np0005535469 podman[316039]: 2025-11-25 16:39:15.2674255 +0000 UTC m=+0.057524312 container remove 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.278 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8916aa-b1a8-46c0-88ef-2550af144840]: (4, ('Tue Nov 25 04:39:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 (8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded)\n8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded\nTue Nov 25 04:39:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 (8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded)\n8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[154dff44-5c27-4474-9dfe-ea6f090c8585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.282 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:15 np0005535469 kernel: tapd5808bee-50: left promiscuous mode
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a08bb265-eea7-4bcd-bcf8-b75420ec7ffd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.328 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.329 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd5f481-31dd-4dd2-8c6f-4932846c7f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.332 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c00d867-5d4a-4de9-8ed7-2050ffb0817c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.351 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b1810142-368e-4cb4-9e47-fe1ec04b0259]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519773, 'reachable_time': 20090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316055, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 systemd[1]: run-netns-ovnmeta\x2dd5808bee\x2d5100\x2d4cdf\x2db578\x2da1bc323dafe9.mount: Deactivated successfully.
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.355 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.355 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ea428feb-8e0b-41e8-8da0-c436e65e7869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.358 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.359 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5808bee-5100-4cdf-b578-a1bc323dafe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.361 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8eae78a6-cb5b-451d-b3ed-ea3d90f544cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.362 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.363 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5808bee-5100-4cdf-b578-a1bc323dafe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.364 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71ecdab1-813e-434c-bb42-6a37dfde8808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 164 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.686 254096 INFO nova.virt.libvirt.driver [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deleting instance files /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_del#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.688 254096 INFO nova.virt.libvirt.driver [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deletion of /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_del complete#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.741 254096 INFO nova.compute.manager [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.742 254096 DEBUG oslo.service.loopingcall [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.742 254096 DEBUG nova.compute.manager [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:39:15 np0005535469 nova_compute[254092]: 2025-11-25 16:39:15.743 254096 DEBUG nova.network.neutron [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.229 254096 DEBUG nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.229 254096 DEBUG oslo_concurrency.lockutils [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG oslo_concurrency.lockutils [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG oslo_concurrency.lockutils [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] No waiting events found dispatching network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 WARNING nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received unexpected event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-deleted-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.356 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-unplugged-a6f06f5d-486f-4039-a0cb-30b122e69258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-a6f06f5d-486f-4039-a0cb-30b122e69258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 WARNING nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-unplugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:16 np0005535469 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 WARNING nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:39:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 106d5375-87cc-4d10-bce1-452950ee8def does not exist
Nov 25 11:39:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c6e90fd5-c60e-4eae-a94e-4d82cfb727aa does not exist
Nov 25 11:39:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ff225544-2378-4eb1-80ec-4623e5a3736d does not exist
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:39:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.11338773 +0000 UTC m=+0.054280314 container create c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:39:17 np0005535469 systemd[1]: Started libpod-conmon-c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9.scope.
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.083219312 +0000 UTC m=+0.024111916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:39:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.226874109 +0000 UTC m=+0.167766723 container init c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.238027761 +0000 UTC m=+0.178920355 container start c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.242884923 +0000 UTC m=+0.183777507 container attach c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:39:17 np0005535469 elated_liskov[316344]: 167 167
Nov 25 11:39:17 np0005535469 systemd[1]: libpod-c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9.scope: Deactivated successfully.
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.244780904 +0000 UTC m=+0.185673498 container died c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fba6000f2dee9160c206b843f23a29c761c41ddd68712fa2809909027d1a069d-merged.mount: Deactivated successfully.
Nov 25 11:39:17 np0005535469 podman[316328]: 2025-11-25 16:39:17.28479378 +0000 UTC m=+0.225686364 container remove c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:39:17 np0005535469 systemd[1]: libpod-conmon-c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9.scope: Deactivated successfully.
Nov 25 11:39:17 np0005535469 nova_compute[254092]: 2025-11-25 16:39:17.352 254096 DEBUG nova.network.neutron [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:39:17 np0005535469 nova_compute[254092]: 2025-11-25 16:39:17.373 254096 INFO nova.compute.manager [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 1.63 seconds to deallocate network for instance.
Nov 25 11:39:17 np0005535469 nova_compute[254092]: 2025-11-25 16:39:17.421 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:39:17 np0005535469 nova_compute[254092]: 2025-11-25 16:39:17.422 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:39:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 129 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Nov 25 11:39:17 np0005535469 nova_compute[254092]: 2025-11-25 16:39:17.493 254096 DEBUG oslo_concurrency.processutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:39:17 np0005535469 podman[316368]: 2025-11-25 16:39:17.495048294 +0000 UTC m=+0.057213693 container create 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:17 np0005535469 systemd[1]: Started libpod-conmon-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope.
Nov 25 11:39:17 np0005535469 podman[316368]: 2025-11-25 16:39:17.471527166 +0000 UTC m=+0.033692665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:39:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:17 np0005535469 podman[316368]: 2025-11-25 16:39:17.615009979 +0000 UTC m=+0.177175398 container init 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:39:17 np0005535469 podman[316368]: 2025-11-25 16:39:17.62646865 +0000 UTC m=+0.188634049 container start 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:39:17 np0005535469 podman[316368]: 2025-11-25 16:39:17.631932618 +0000 UTC m=+0.194098057 container attach 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:39:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902064392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.035 254096 DEBUG oslo_concurrency.processutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.043 254096 DEBUG nova.compute.provider_tree [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.067 254096 DEBUG nova.scheduler.client.report [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.093 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.126 254096 INFO nova.scheduler.client.report [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Deleted allocations for instance 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.198 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.445 254096 DEBUG nova.compute.manager [req-691e2723-5830-47bd-a30c-aba43b4bcf6a req-bfef7b54-3606-465b-b623-c83929669519 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-deleted-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:39:18 np0005535469 nova_compute[254092]: 2025-11-25 16:39:18.447 254096 DEBUG nova.compute.manager [req-691e2723-5830-47bd-a30c-aba43b4bcf6a req-bfef7b54-3606-465b-b623-c83929669519 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-deleted-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:39:18 np0005535469 hardcore_jackson[316386]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:39:18 np0005535469 hardcore_jackson[316386]: --> relative data size: 1.0
Nov 25 11:39:18 np0005535469 hardcore_jackson[316386]: --> All data devices are unavailable
Nov 25 11:39:18 np0005535469 systemd[1]: libpod-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope: Deactivated successfully.
Nov 25 11:39:18 np0005535469 systemd[1]: libpod-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope: Consumed 1.108s CPU time.
Nov 25 11:39:18 np0005535469 conmon[316386]: conmon 768e7ef8ca16a19c49f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope/container/memory.events
Nov 25 11:39:18 np0005535469 podman[316368]: 2025-11-25 16:39:18.798361672 +0000 UTC m=+1.360527081 container died 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 11:39:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905-merged.mount: Deactivated successfully.
Nov 25 11:39:18 np0005535469 podman[316368]: 2025-11-25 16:39:18.856609963 +0000 UTC m=+1.418775362 container remove 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:39:18 np0005535469 systemd[1]: libpod-conmon-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope: Deactivated successfully.
Nov 25 11:39:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 129 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.614899584 +0000 UTC m=+0.069301410 container create 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.577420738 +0000 UTC m=+0.031822634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:39:19 np0005535469 systemd[1]: Started libpod-conmon-24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a.scope.
Nov 25 11:39:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.734451728 +0000 UTC m=+0.188853594 container init 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.744120871 +0000 UTC m=+0.198522687 container start 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:39:19 np0005535469 eloquent_sanderson[316602]: 167 167
Nov 25 11:39:19 np0005535469 systemd[1]: libpod-24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a.scope: Deactivated successfully.
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.75112596 +0000 UTC m=+0.205527806 container attach 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.751984383 +0000 UTC m=+0.206386219 container died 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 11:39:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-610f4ac3aeb6bfcbdda3a911d2b1660e488f72926cca6c9ff92b125b80f60428-merged.mount: Deactivated successfully.
Nov 25 11:39:19 np0005535469 podman[316585]: 2025-11-25 16:39:19.821177471 +0000 UTC m=+0.275579287 container remove 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:39:19 np0005535469 systemd[1]: libpod-conmon-24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a.scope: Deactivated successfully.
Nov 25 11:39:19 np0005535469 nova_compute[254092]: 2025-11-25 16:39:19.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:39:20 np0005535469 podman[316627]: 2025-11-25 16:39:20.025847463 +0000 UTC m=+0.038589638 container create dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:39:20 np0005535469 systemd[1]: Started libpod-conmon-dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3.scope.
Nov 25 11:39:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:20 np0005535469 podman[316627]: 2025-11-25 16:39:20.010142238 +0000 UTC m=+0.022884443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:39:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:20 np0005535469 podman[316627]: 2025-11-25 16:39:20.126617867 +0000 UTC m=+0.139360062 container init dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:20 np0005535469 podman[316627]: 2025-11-25 16:39:20.13999371 +0000 UTC m=+0.152735885 container start dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:39:20 np0005535469 podman[316627]: 2025-11-25 16:39:20.146473816 +0000 UTC m=+0.159216021 container attach dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:39:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 25 11:39:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 25 11:39:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 25 11:39:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:20Z|00512|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:39:20 np0005535469 nova_compute[254092]: 2025-11-25 16:39:20.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]: {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:    "0": [
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:        {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "devices": [
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "/dev/loop3"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            ],
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_name": "ceph_lv0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_size": "21470642176",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "name": "ceph_lv0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "tags": {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cluster_name": "ceph",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.crush_device_class": "",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.encrypted": "0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osd_id": "0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.type": "block",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.vdo": "0"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            },
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "type": "block",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "vg_name": "ceph_vg0"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:        }
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:    ],
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:    "1": [
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:        {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "devices": [
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "/dev/loop4"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            ],
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_name": "ceph_lv1",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_size": "21470642176",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "name": "ceph_lv1",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "tags": {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cluster_name": "ceph",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.crush_device_class": "",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.encrypted": "0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osd_id": "1",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.type": "block",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.vdo": "0"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            },
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "type": "block",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "vg_name": "ceph_vg1"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:        }
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:    ],
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:    "2": [
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:        {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "devices": [
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "/dev/loop5"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            ],
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_name": "ceph_lv2",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_size": "21470642176",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "name": "ceph_lv2",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "tags": {
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.cluster_name": "ceph",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.crush_device_class": "",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.encrypted": "0",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osd_id": "2",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.type": "block",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:                "ceph.vdo": "0"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            },
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "type": "block",
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:            "vg_name": "ceph_vg2"
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:        }
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]:    ]
Nov 25 11:39:20 np0005535469 hungry_brahmagupta[316643]: }
Nov 25 11:39:20 np0005535469 systemd[1]: libpod-dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3.scope: Deactivated successfully.
Nov 25 11:39:20 np0005535469 podman[316627]: 2025-11-25 16:39:20.963666856 +0000 UTC m=+0.976409041 container died dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:39:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa-merged.mount: Deactivated successfully.
Nov 25 11:39:21 np0005535469 podman[316627]: 2025-11-25 16:39:21.041773435 +0000 UTC m=+1.054515610 container remove dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:39:21 np0005535469 systemd[1]: libpod-conmon-dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3.scope: Deactivated successfully.
Nov 25 11:39:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:21Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:a9:78 10.100.0.6
Nov 25 11:39:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:21Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:a9:78 10.100.0.6
Nov 25 11:39:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 92 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.782422138 +0000 UTC m=+0.045387372 container create ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:39:21 np0005535469 systemd[1]: Started libpod-conmon-ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9.scope.
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.762013015 +0000 UTC m=+0.024978279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:39:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.885392012 +0000 UTC m=+0.148357276 container init ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.895242839 +0000 UTC m=+0.158208093 container start ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.899242838 +0000 UTC m=+0.162208132 container attach ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:39:21 np0005535469 amazing_knuth[316823]: 167 167
Nov 25 11:39:21 np0005535469 systemd[1]: libpod-ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9.scope: Deactivated successfully.
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.902956068 +0000 UTC m=+0.165921322 container died ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 11:39:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-21c7f1e67c36deabdfc962b171b6c88d470e58b2f42d8630cfc8d2678175b227-merged.mount: Deactivated successfully.
Nov 25 11:39:21 np0005535469 podman[316807]: 2025-11-25 16:39:21.943445877 +0000 UTC m=+0.206411111 container remove ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:39:21 np0005535469 systemd[1]: libpod-conmon-ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9.scope: Deactivated successfully.
Nov 25 11:39:22 np0005535469 nova_compute[254092]: 2025-11-25 16:39:22.136 254096 DEBUG nova.virt.libvirt.driver [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:39:22 np0005535469 podman[316846]: 2025-11-25 16:39:22.145660473 +0000 UTC m=+0.056728980 container create 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:39:22 np0005535469 systemd[1]: Started libpod-conmon-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope.
Nov 25 11:39:22 np0005535469 podman[316846]: 2025-11-25 16:39:22.125110286 +0000 UTC m=+0.036178823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:39:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:22 np0005535469 podman[316846]: 2025-11-25 16:39:22.251579327 +0000 UTC m=+0.162647864 container init 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 11:39:22 np0005535469 podman[316846]: 2025-11-25 16:39:22.261735422 +0000 UTC m=+0.172803929 container start 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 11:39:22 np0005535469 podman[316846]: 2025-11-25 16:39:22.265032281 +0000 UTC m=+0.176100818 container attach 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:39:23 np0005535469 nova_compute[254092]: 2025-11-25 16:39:23.067 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:23 np0005535469 sweet_spence[316863]: {
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "osd_id": 1,
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "type": "bluestore"
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:    },
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "osd_id": 2,
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "type": "bluestore"
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:    },
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "osd_id": 0,
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:        "type": "bluestore"
Nov 25 11:39:23 np0005535469 sweet_spence[316863]:    }
Nov 25 11:39:23 np0005535469 sweet_spence[316863]: }
Nov 25 11:39:23 np0005535469 systemd[1]: libpod-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope: Deactivated successfully.
Nov 25 11:39:23 np0005535469 systemd[1]: libpod-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope: Consumed 1.069s CPU time.
Nov 25 11:39:23 np0005535469 podman[316846]: 2025-11-25 16:39:23.330888318 +0000 UTC m=+1.241956825 container died 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:39:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9-merged.mount: Deactivated successfully.
Nov 25 11:39:23 np0005535469 podman[316846]: 2025-11-25 16:39:23.391765589 +0000 UTC m=+1.302834096 container remove 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:39:23 np0005535469 systemd[1]: libpod-conmon-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope: Deactivated successfully.
Nov 25 11:39:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:39:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:39:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:39:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:39:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c3133126-8cc0-4927-891c-69fbc3fc1703 does not exist
Nov 25 11:39:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 65c0cfed-f922-4d8f-bf2c-f2aad2053c6b does not exist
Nov 25 11:39:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 92 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 25 11:39:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:39:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:39:24 np0005535469 kernel: tapaabf40d8-e3 (unregistering): left promiscuous mode
Nov 25 11:39:24 np0005535469 NetworkManager[48891]: <info>  [1764088764.6682] device (tapaabf40d8-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00513|binding|INFO|Releasing lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 from this chassis (sb_readonly=0)
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00514|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 down in Southbound
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00515|binding|INFO|Removing iface tapaabf40d8-e3 ovn-installed in OVS
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.689 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.691 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.693 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.695 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[415e4580-621e-4de3-a080-bf47c24bdb62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.696 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 25 11:39:24 np0005535469 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Consumed 13.834s CPU time.
Nov 25 11:39:24 np0005535469 systemd-machined[216343]: Machine qemu-64-instance-00000037 terminated.
Nov 25 11:39:24 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : haproxy version is 2.8.14-c23fe91
Nov 25 11:39:24 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : path to executable is /usr/sbin/haproxy
Nov 25 11:39:24 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [WARNING]  (315796) : Exiting Master process...
Nov 25 11:39:24 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [ALERT]    (315796) : Current worker (315798) exited with code 143 (Terminated)
Nov 25 11:39:24 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [WARNING]  (315796) : All workers exited. Exiting... (0)
Nov 25 11:39:24 np0005535469 systemd[1]: libpod-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957.scope: Deactivated successfully.
Nov 25 11:39:24 np0005535469 podman[316985]: 2025-11-25 16:39:24.819350669 +0000 UTC m=+0.040340465 container died c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:39:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e3ad08b84d0545a65c2b91231b7123f70815850aa282c0b4fc83b481c7c3e304-merged.mount: Deactivated successfully.
Nov 25 11:39:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957-userdata-shm.mount: Deactivated successfully.
Nov 25 11:39:24 np0005535469 podman[316985]: 2025-11-25 16:39:24.855164341 +0000 UTC m=+0.076154137 container cleanup c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:39:24 np0005535469 systemd[1]: libpod-conmon-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957.scope: Deactivated successfully.
Nov 25 11:39:24 np0005535469 kernel: tapaabf40d8-e3: entered promiscuous mode
Nov 25 11:39:24 np0005535469 NetworkManager[48891]: <info>  [1764088764.9035] manager: (tapaabf40d8-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 25 11:39:24 np0005535469 systemd-udevd[316967]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00516|binding|INFO|Claiming lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for this chassis.
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00517|binding|INFO|aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2: Claiming fa:16:3e:38:a9:78 10.100.0.6
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.914 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:24 np0005535469 kernel: tapaabf40d8-e3 (unregistering): left promiscuous mode
Nov 25 11:39:24 np0005535469 podman[317016]: 2025-11-25 16:39:24.930365221 +0000 UTC m=+0.052772323 container remove c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: hostname: compute-0
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.936 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f899d12b-2135-42b6-931c-fb39c36e688c]: (4, ('Tue Nov 25 04:39:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957)\nc066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957\nTue Nov 25 04:39:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957)\nc066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00518|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 ovn-installed in OVS
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00519|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 up in Southbound
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00520|binding|INFO|Releasing lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 from this chassis (sb_readonly=1)
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00521|if_status|INFO|Dropped 2 log messages in last 161 seconds (most recently, 161 seconds ago) due to excessive rate
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00522|if_status|INFO|Not setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 down as sb is readonly
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00523|binding|INFO|Removing iface tapaabf40d8-e3 ovn-installed in OVS
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.939 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbcb089-8588-4b90-aec0-24c620cc67c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.940 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00524|binding|INFO|Releasing lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 from this chassis (sb_readonly=0)
Nov 25 11:39:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:24Z|00525|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 down in Southbound
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.964 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 kernel: tape469a950-70: left promiscuous mode
Nov 25 11:39:24 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3876ca06-4cd5-432f-9c6e-1833690a3c33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.991 254096 DEBUG nova.compute.manager [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.991 254096 DEBUG oslo_concurrency.lockutils [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 DEBUG oslo_concurrency.lockutils [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 DEBUG oslo_concurrency.lockutils [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 DEBUG nova.compute.manager [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:24 np0005535469 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 WARNING nova.compute.manager [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[120325f8-24f7-4e46-a4f1-00ddd1056596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6f5cc1-b04f-4a49-86e3-f23679f5d1e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70ac3ab7-879c-4c88-bc99-a58708661a5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520484, 'reachable_time': 42622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317057, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.015 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.015 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[76823b99-6212-4bc1-bfeb-b3b88de02506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.016 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:39:25 np0005535469 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.016 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.017 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8d89d4-8116-43c1-8271-ba15da8d5493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.018 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.019 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6c825f09-af7f-4d9a-9fe8-c4bae1a885b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:25 np0005535469 nova_compute[254092]: 2025-11-25 16:39:25.152 254096 INFO nova.virt.libvirt.driver [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:39:25 np0005535469 nova_compute[254092]: 2025-11-25 16:39:25.157 254096 INFO nova.virt.libvirt.driver [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance destroyed successfully.#033[00m
Nov 25 11:39:25 np0005535469 nova_compute[254092]: 2025-11-25 16:39:25.157 254096 DEBUG nova.objects.instance [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:25 np0005535469 nova_compute[254092]: 2025-11-25 16:39:25.168 254096 DEBUG nova.compute.manager [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:25 np0005535469 nova_compute[254092]: 2025-11-25 16:39:25.208 254096 DEBUG oslo_concurrency.lockutils [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 119 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 431 KiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.139 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.141 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.141 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.141 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.143 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.143 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.143 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.144 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.144 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.144 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.147 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.147 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.147 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:39:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 121 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.940 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088752.939042, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.941 254096 INFO nova.compute.manager [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:39:27 np0005535469 nova_compute[254092]: 2025-11-25 16:39:27.965 254096 DEBUG nova.compute.manager [None req-e0cc94df-4d5a-4a64-974f-285d6bc301e3 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.931 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.933 254096 INFO nova.compute.manager [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Terminating instance#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.934 254096 DEBUG nova.compute.manager [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.940 254096 INFO nova.virt.libvirt.driver [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance destroyed successfully.#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.941 254096 DEBUG nova.objects.instance [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.953 254096 DEBUG nova.virt.libvirt.vif [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-251232744',display_name='tempest-DeleteServersTestJSON-server-251232744',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-251232744',id=55,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-xpvmx0am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:25Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=e5947529-cfda-4753-94cd-b764da9d5c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.954 254096 DEBUG nova.network.os_vif_util [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.955 254096 DEBUG nova.network.os_vif_util [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.955 254096 DEBUG os_vif [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.958 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaabf40d8-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:39:28 np0005535469 nova_compute[254092]: 2025-11-25 16:39:28.968 254096 INFO os_vif [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3')#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.464 254096 INFO nova.virt.libvirt.driver [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deleting instance files /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c_del#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.465 254096 INFO nova.virt.libvirt.driver [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deletion of /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c_del complete#033[00m
Nov 25 11:39:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 121 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.713 254096 INFO nova.compute.manager [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.713 254096 DEBUG oslo.service.loopingcall [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.714 254096 DEBUG nova.compute.manager [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.714 254096 DEBUG nova.network.neutron [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.914 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088754.912875, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.914 254096 INFO nova.compute.manager [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:39:29 np0005535469 nova_compute[254092]: 2025-11-25 16:39:29.932 254096 DEBUG nova.compute.manager [None req-97464e98-c13b-43fd-b2a0-1a2c15ea3a58 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:30 np0005535469 nova_compute[254092]: 2025-11-25 16:39:30.605 254096 DEBUG nova.network.neutron [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:30 np0005535469 nova_compute[254092]: 2025-11-25 16:39:30.626 254096 INFO nova.compute.manager [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 25 11:39:30 np0005535469 nova_compute[254092]: 2025-11-25 16:39:30.670 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:30 np0005535469 nova_compute[254092]: 2025-11-25 16:39:30.671 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:30 np0005535469 nova_compute[254092]: 2025-11-25 16:39:30.746 254096 DEBUG nova.compute.manager [req-10c8ec24-eb52-4e9f-948c-c848a54efff1 req-7ef5c5f8-a626-4268-a9cc-900fec7977c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-deleted-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:30 np0005535469 nova_compute[254092]: 2025-11-25 16:39:30.747 254096 DEBUG oslo_concurrency.processutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660174848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:31 np0005535469 nova_compute[254092]: 2025-11-25 16:39:31.207 254096 DEBUG oslo_concurrency.processutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:31 np0005535469 nova_compute[254092]: 2025-11-25 16:39:31.215 254096 DEBUG nova.compute.provider_tree [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:39:31 np0005535469 nova_compute[254092]: 2025-11-25 16:39:31.231 254096 DEBUG nova.scheduler.client.report [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:39:31 np0005535469 nova_compute[254092]: 2025-11-25 16:39:31.257 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:31 np0005535469 nova_compute[254092]: 2025-11-25 16:39:31.308 254096 INFO nova.scheduler.client.report [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance e5947529-cfda-4753-94cd-b764da9d5c2c#033[00m
Nov 25 11:39:31 np0005535469 nova_compute[254092]: 2025-11-25 16:39:31.374 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Nov 25 11:39:33 np0005535469 nova_compute[254092]: 2025-11-25 16:39:33.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 11:39:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 25 11:39:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 25 11:39:33 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 25 11:39:33 np0005535469 nova_compute[254092]: 2025-11-25 16:39:33.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 6.0 KiB/s wr, 55 op/s
Nov 25 11:39:36 np0005535469 podman[317100]: 2025-11-25 16:39:36.626975656 +0000 UTC m=+0.048894027 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 11:39:36 np0005535469 podman[317099]: 2025-11-25 16:39:36.634359416 +0000 UTC m=+0.056305568 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:39:36 np0005535469 podman[317101]: 2025-11-25 16:39:36.659354925 +0000 UTC m=+0.076478717 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:39:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.335 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.336 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.395 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.618 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.618 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.623 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.624 254096 INFO nova.compute.claims [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.792 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:38 np0005535469 nova_compute[254092]: 2025-11-25 16:39:38.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 25 11:39:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895456730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.268 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.275 254096 DEBUG nova.compute.provider_tree [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.314 254096 DEBUG nova.scheduler.client.report [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.370 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.371 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.451 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.452 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.485 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:39:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 28 op/s
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.515 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.655 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.657 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.658 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Creating image(s)#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.684 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.711 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.740 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.745 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.858 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.859 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.859 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.860 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.900 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.904 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.944 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088764.9432306, e5947529-cfda-4753-94cd-b764da9d5c2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.944 254096 INFO nova.compute.manager [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:39:39 np0005535469 nova_compute[254092]: 2025-11-25 16:39:39.978 254096 DEBUG nova.compute.manager [None req-d2e9de6f-9c83-434a-8e6b-4129674188f3 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:39:40
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups']
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.256 254096 DEBUG nova.policy [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.461 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.513 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.615 254096 DEBUG nova.objects.instance [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.697 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.698 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Ensure instance console log exists: /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.699 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.699 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:40 np0005535469 nova_compute[254092]: 2025-11-25 16:39:40.699 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:41 np0005535469 nova_compute[254092]: 2025-11-25 16:39:41.378 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Successfully created port: 9d4276f1-91e9-418f-9a7b-844c83aea8f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:39:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 58 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 949 KiB/s wr, 51 op/s
Nov 25 11:39:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.657 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Successfully updated port: 9d4276f1-91e9-418f-9a7b-844c83aea8f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.713 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.713 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.714 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:39:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 25 11:39:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.776 254096 DEBUG nova.compute.manager [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-changed-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.776 254096 DEBUG nova.compute.manager [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Refreshing instance network info cache due to event network-changed-9d4276f1-91e9-418f-9a7b-844c83aea8f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.777 254096 DEBUG oslo_concurrency.lockutils [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:39:42 np0005535469 nova_compute[254092]: 2025-11-25 16:39:42.913 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 58 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 947 KiB/s wr, 23 op/s
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.923 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updating instance_info_cache with network_info: [{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.968 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.969 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance network_info: |[{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.969 254096 DEBUG oslo_concurrency.lockutils [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.969 254096 DEBUG nova.network.neutron [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Refreshing network info cache for port 9d4276f1-91e9-418f-9a7b-844c83aea8f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:39:43 np0005535469 nova_compute[254092]: 2025-11-25 16:39:43.973 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start _get_guest_xml network_info=[{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.020 254096 WARNING nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.027 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.028 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.031 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.031 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.032 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.032 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.033 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.033 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.034 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.034 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.035 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.035 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.035 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.036 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.036 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.036 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.040 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120606357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.485 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.516 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:44 np0005535469 nova_compute[254092]: 2025-11-25 16:39:44.522 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:39:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604200154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.002 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.004 254096 DEBUG nova.virt.libvirt.vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:39:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2082208700',display_name='tempest-DeleteServersTestJSON-server-2082208700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2082208700',id=56,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-w2u0qqei',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSO
N-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:39:39Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.004 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.005 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.007 254096 DEBUG nova.objects.instance [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.028 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <uuid>bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68</uuid>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <name>instance-00000038</name>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:name>tempest-DeleteServersTestJSON-server-2082208700</nova:name>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:39:44</nova:creationTime>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <nova:port uuid="9d4276f1-91e9-418f-9a7b-844c83aea8f4">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <entry name="serial">bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68</entry>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <entry name="uuid">bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68</entry>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:8d:00:0e"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <target dev="tap9d4276f1-91"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/console.log" append="off"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:39:45 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:39:45 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:39:45 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:39:45 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.029 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Preparing to wait for external event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.029 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.030 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.030 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.031 254096 DEBUG nova.virt.libvirt.vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:39:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2082208700',display_name='tempest-DeleteServersTestJSON-server-2082208700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2082208700',id=56,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-w2u0qqei',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServ
ersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:39:39Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.031 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.032 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.033 254096 DEBUG os_vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.034 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.035 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.039 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.039 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d4276f1-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.040 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d4276f1-91, col_values=(('external_ids', {'iface-id': '9d4276f1-91e9-418f-9a7b-844c83aea8f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:00:0e', 'vm-uuid': 'bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:45 np0005535469 NetworkManager[48891]: <info>  [1764088785.0432] manager: (tap9d4276f1-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.049 254096 INFO os_vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91')#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.114 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.114 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.114 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:8d:00:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.115 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Using config drive#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.137 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.4 MiB/s wr, 141 op/s
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.824 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Creating config drive at /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.829 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx0i954y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:45 np0005535469 nova_compute[254092]: 2025-11-25 16:39:45.978 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx0i954y" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.020 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.024 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.183 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.184 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deleting local config drive /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config because it was imported into RBD.#033[00m
Nov 25 11:39:46 np0005535469 kernel: tap9d4276f1-91: entered promiscuous mode
Nov 25 11:39:46 np0005535469 NetworkManager[48891]: <info>  [1764088786.2521] manager: (tap9d4276f1-91): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Nov 25 11:39:46 np0005535469 systemd-udevd[317481]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:39:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:46Z|00526|binding|INFO|Claiming lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 for this chassis.
Nov 25 11:39:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:46Z|00527|binding|INFO|9d4276f1-91e9-418f-9a7b-844c83aea8f4: Claiming fa:16:3e:8d:00:0e 10.100.0.4
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:46 np0005535469 NetworkManager[48891]: <info>  [1764088786.3015] device (tap9d4276f1-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:39:46 np0005535469 NetworkManager[48891]: <info>  [1764088786.3022] device (tap9d4276f1-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.303 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:00:0e 10.100.0.4'], port_security=['fa:16:3e:8d:00:0e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9d4276f1-91e9-418f-9a7b-844c83aea8f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:46Z|00528|binding|INFO|Setting lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 ovn-installed in OVS
Nov 25 11:39:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:46Z|00529|binding|INFO|Setting lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 up in Southbound
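The ovn-controller lines above (claim lport, set `ovn-installed`, mark the port up in the Southbound DB) use OVS's pipe-delimited log layout: timestamp, sequence number, module, level, message. A small splitter, assuming that layout from the lines in this log rather than any formal OVN spec:

```python
# Split an ovn-controller/ovs-vswitchd style log line into its five fields.
# Field order is inferred from the lines above (ts|seq|module|level|message).
def parse_ovn_line(line: str) -> dict:
    ts, seq, module, level, message = line.split("|", 4)
    return {"ts": ts, "seq": int(seq), "module": module,
            "level": level, "message": message}

rec = parse_ovn_line(
    "2025-11-25T16:39:46Z|00526|binding|INFO|"
    "Claiming lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 for this chassis.")
print(rec["module"], rec["seq"], rec["message"])
```

Filtering on `module == "binding"` isolates the port-claim lifecycle when tracing why a VIF never went active.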
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.305 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9d4276f1-91e9-418f-9a7b-844c83aea8f4 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.307 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.319 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7355b6e-6981-4fb4-af78-9095889b17bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.320 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.322 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.322 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d242e9-ff99-4353-874e-5316f7f47271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 systemd-machined[216343]: New machine qemu-65-instance-00000038.
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3def81ed-0b6e-4b5e-979d-86d8e3bf163d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.333 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d1406a0d-7bd1-40f3-8ffb-f001aac4a27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 systemd[1]: Started Virtual Machine qemu-65-instance-00000038.
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d7739188-6e20-4dd2-b7f7-b4dcfae29ab0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.397 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d8141ed5-ddf0-4ea2-9ef2-666bfa4130fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 NetworkManager[48891]: <info>  [1764088786.4043] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/225)
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.403 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03063a96-2088-4495-947e-a2321c3dbc67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.440 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[70ba8400-8abd-4933-843f-eaa56822c491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.444 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[50751263-c972-4d6c-b5ad-d6b6fe4d7ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 NetworkManager[48891]: <info>  [1764088786.4737] device (tape469a950-70): carrier: link connected
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.481 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[44c8fd3d-93b8-4347-9088-d454b7681c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.507 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[622edd37-16d6-4a98-876f-6982b3f60a3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524406, 'reachable_time': 17584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317517, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a414599d-15b8-491f-8bc3-70e1efdb4b3b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524406, 'tstamp': 524406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317518, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdf39a0-5ef0-4160-93ad-0f4460d8aa59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524406, 'reachable_time': 17584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317526, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c04d4ea3-c375-4dd7-af4f-6463dc0216d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.668 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c16c94fb-5904-479f-8d35-baaf95d02f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.670 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.670 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.671 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:46 np0005535469 NetworkManager[48891]: <info>  [1764088786.6742] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Nov 25 11:39:46 np0005535469 kernel: tape469a950-70: entered promiscuous mode
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.677 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:46Z|00530|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.695 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6b1f1e-b9b8-438b-a9b8-e79801346912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.698 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
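The haproxy config dumped above is rendered per network: the bind address is the metadata IP, the backend is a unix socket, and the `X-OVN-Network-ID` header carries the datapath identity. A sketch of that per-network templating, with the template text mirroring the logged config but the helper and parameter names being illustrative, not neutron's actual driver code:

```python
from string import Template

# Illustrative per-network fragment; the real config (logged above) also has
# global/defaults sections that are identical for every network.
CFG = Template("""\
listen listener
    bind $bind_ip:$port
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

rendered = CFG.substitute(
    bind_ip="169.254.169.254",
    port=80,
    socket_path="/var/lib/neutron/metadata_proxy",
    network_id="e469a950-7044-48bc-a261-bc9effe2c2aa",
)
print(rendered)
```

Only the network ID varies between namespaces, which is why one metadata agent can run an identical proxy per `ovnmeta-*` namespace.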
Nov 25 11:39:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.699 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.741 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088786.7408016, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.743 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Started (Lifecycle Event)#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.766 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.772 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088786.7411115, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.772 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.802 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.806 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:46 np0005535469 nova_compute[254092]: 2025-11-25 16:39:46.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
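The sync line above compares "DB power_state: 0" with "VM power_state: 3" and skips reconciliation because the instance is still spawning. Those integers are Nova's power-state constants; the mapping below is recalled from `nova/compute/power_state.py` and should be treated as a reference sketch rather than the canonical table:

```python
# Nova power-state integers as seen in sync_power_state log lines.
# Values recalled from nova/compute/power_state.py; verify against your
# deployed Nova release before relying on them.
POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

db_state, vm_state = 0, 3  # the pair logged above
print(POWER_STATE[db_state], "->", POWER_STATE[vm_state])
```

Here the mismatch (NOSTATE in the DB, PAUSED on the hypervisor) is expected: libvirt starts the domain paused until spawn completes, so Nova defers the sync.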
Nov 25 11:39:47 np0005535469 nova_compute[254092]: 2025-11-25 16:39:47.043 254096 DEBUG nova.network.neutron [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updated VIF entry in instance network info cache for port 9d4276f1-91e9-418f-9a7b-844c83aea8f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:39:47 np0005535469 nova_compute[254092]: 2025-11-25 16:39:47.043 254096 DEBUG nova.network.neutron [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updating instance_info_cache with network_info: [{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:39:47 np0005535469 nova_compute[254092]: 2025-11-25 16:39:47.059 254096 DEBUG oslo_concurrency.lockutils [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
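The `Updating instance_info_cache with network_info:` line above embeds the instance's VIF list as JSON inside the log message. When debugging cache drift it helps to pull that payload out and inspect it structurally; the fragment below is a trimmed-down copy of the logged entry (fields elided for brevity), not nova's own cache handling:

```python
import json

# Reduced copy of the network_info payload from the log line above;
# the real entry carries many more fields (details, profile, meta, ...).
vif_json = '''[{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4",
  "address": "fa:16:3e:8d:00:0e",
  "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.4"}]}]},
  "type": "ovs", "devname": "tap9d4276f1-91", "active": false}]'''

vifs = json.loads(vif_json)
vif = vifs[0]
# Cross-check the cache against what OVN claimed earlier in the log:
# same MAC/IP pair the chassis bound for lport 9d4276f1-91e9-....
print(vif["devname"], vif["address"],
      vif["network"]["subnets"][0]["ips"][0]["address"])
```

Note `"active": false` at this point: the VIF is cached before the port goes fully up, which matches the chassis still being mid-claim in the OVN lines above.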
Nov 25 11:39:47 np0005535469 podman[317591]: 2025-11-25 16:39:47.138256353 +0000 UTC m=+0.056402131 container create 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:39:47 np0005535469 systemd[1]: Started libpod-conmon-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55.scope.
Nov 25 11:39:47 np0005535469 podman[317591]: 2025-11-25 16:39:47.109893724 +0000 UTC m=+0.028039512 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:39:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:39:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a6a91dce05b8ef142c987667939fc82ec5e5aaaa51b344a4808d5d7ff32d46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:39:47 np0005535469 podman[317591]: 2025-11-25 16:39:47.239508731 +0000 UTC m=+0.157654499 container init 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:39:47 np0005535469 podman[317591]: 2025-11-25 16:39:47.246838569 +0000 UTC m=+0.164984327 container start 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:47 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : New worker (317613) forked
Nov 25 11:39:47 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : Loading success.
Nov 25 11:39:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.793 254096 DEBUG nova.compute.manager [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.794 254096 DEBUG oslo_concurrency.lockutils [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.794 254096 DEBUG oslo_concurrency.lockutils [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.795 254096 DEBUG oslo_concurrency.lockutils [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.795 254096 DEBUG nova.compute.manager [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Processing event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.797 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.802 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088788.8021367, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.803 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.806 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.811 254096 INFO nova.virt.libvirt.driver [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance spawned successfully.#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.812 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.845 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.855 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.856 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.857 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.858 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.858 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.859 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.867 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:48 np0005535469 nova_compute[254092]: 2025-11-25 16:39:48.913 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:39:49 np0005535469 nova_compute[254092]: 2025-11-25 16:39:49.258 254096 INFO nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 9.60 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:39:49 np0005535469 nova_compute[254092]: 2025-11-25 16:39:49.259 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:49 np0005535469 nova_compute[254092]: 2025-11-25 16:39:49.366 254096 INFO nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 10.78 seconds to build instance.#033[00m
Nov 25 11:39:49 np0005535469 nova_compute[254092]: 2025-11-25 16:39:49.388 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 1.7 MiB/s wr, 89 op/s
Nov 25 11:39:50 np0005535469 nova_compute[254092]: 2025-11-25 16:39:50.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 25 11:39:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 25 11:39:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034834989744090644 of space, bias 1.0, pg target 0.10450496923227193 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:39:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 212 op/s
Nov 25 11:39:51 np0005535469 nova_compute[254092]: 2025-11-25 16:39:51.946 254096 DEBUG nova.compute.manager [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:51 np0005535469 nova_compute[254092]: 2025-11-25 16:39:51.946 254096 DEBUG oslo_concurrency.lockutils [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:51 np0005535469 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 DEBUG oslo_concurrency.lockutils [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:51 np0005535469 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 DEBUG oslo_concurrency.lockutils [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:51 np0005535469 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 DEBUG nova.compute.manager [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] No waiting events found dispatching network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:51 np0005535469 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 WARNING nova.compute.manager [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received unexpected event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:39:52 np0005535469 nova_compute[254092]: 2025-11-25 16:39:52.618 254096 DEBUG nova.objects.instance [None req-bf505fd5-32a3-49fb-b25c-5a85f27cdb1d 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:52 np0005535469 nova_compute[254092]: 2025-11-25 16:39:52.644 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088792.6446714, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:39:52 np0005535469 nova_compute[254092]: 2025-11-25 16:39:52.645 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:39:52 np0005535469 nova_compute[254092]: 2025-11-25 16:39:52.663 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:52 np0005535469 nova_compute[254092]: 2025-11-25 16:39:52.666 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:39:52 np0005535469 nova_compute[254092]: 2025-11-25 16:39:52.687 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 kernel: tap9d4276f1-91 (unregistering): left promiscuous mode
Nov 25 11:39:53 np0005535469 NetworkManager[48891]: <info>  [1764088793.1915] device (tap9d4276f1-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:39:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:53Z|00531|binding|INFO|Releasing lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 from this chassis (sb_readonly=0)
Nov 25 11:39:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:53Z|00532|binding|INFO|Setting lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 down in Southbound
Nov 25 11:39:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:39:53Z|00533|binding|INFO|Removing iface tap9d4276f1-91 ovn-installed in OVS
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.216 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:00:0e 10.100.0.4'], port_security=['fa:16:3e:8d:00:0e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9d4276f1-91e9-418f-9a7b-844c83aea8f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.219 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9d4276f1-91e9-418f-9a7b-844c83aea8f4 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.220 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.222 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8614d029-0156-4486-a7d0-0ba85942f748]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.225 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 25 11:39:53 np0005535469 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000038.scope: Consumed 4.423s CPU time.
Nov 25 11:39:53 np0005535469 systemd-machined[216343]: Machine qemu-65-instance-00000038 terminated.
Nov 25 11:39:53 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : haproxy version is 2.8.14-c23fe91
Nov 25 11:39:53 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : path to executable is /usr/sbin/haproxy
Nov 25 11:39:53 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [WARNING]  (317611) : Exiting Master process...
Nov 25 11:39:53 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [ALERT]    (317611) : Current worker (317613) exited with code 143 (Terminated)
Nov 25 11:39:53 np0005535469 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [WARNING]  (317611) : All workers exited. Exiting... (0)
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 systemd[1]: libpod-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55.scope: Deactivated successfully.
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 podman[317649]: 2025-11-25 16:39:53.374507458 +0000 UTC m=+0.055564478 container died 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.381 254096 DEBUG nova.compute.manager [None req-bf505fd5-32a3-49fb-b25c-5a85f27cdb1d 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:39:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55-userdata-shm.mount: Deactivated successfully.
Nov 25 11:39:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-54a6a91dce05b8ef142c987667939fc82ec5e5aaaa51b344a4808d5d7ff32d46-merged.mount: Deactivated successfully.
Nov 25 11:39:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 806 KiB/s wr, 152 op/s
Nov 25 11:39:53 np0005535469 podman[317649]: 2025-11-25 16:39:53.592021628 +0000 UTC m=+0.273078638 container cleanup 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:39:53 np0005535469 systemd[1]: libpod-conmon-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55.scope: Deactivated successfully.
Nov 25 11:39:53 np0005535469 podman[317685]: 2025-11-25 16:39:53.70521123 +0000 UTC m=+0.086343634 container remove 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.714 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c078ee4f-94a2-454a-bdc1-f012f40511f0]: (4, ('Tue Nov 25 04:39:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55)\n2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55\nTue Nov 25 04:39:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55)\n2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.714 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "440746e9-455f-4a2f-8412-a24d1c93cb21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.715 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.716 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4720ae-b149-4a3b-a796-644a0442c5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.717 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 kernel: tape469a950-70: left promiscuous mode
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.730 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e232207e-e970-4dd9-aa79-33065cff92ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.753 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55f38787-f45d-4221-a05e-ce9d53d77b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.754 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[461b5c9b-6080-432b-a932-01ab94969f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4092d35a-bce7-4791-80ca-4e128c6ffe8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524398, 'reachable_time': 20227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317703, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.774 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:39:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.774 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[17e345ca-9b03-464e-aae9-87ec0b3fe7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:39:53 np0005535469 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.816 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.816 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.825 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.826 254096 INFO nova.compute.claims [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.900 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.918 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.919 254096 DEBUG nova.compute.provider_tree [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.936 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 11:39:53 np0005535469 nova_compute[254092]: 2025-11-25 16:39:53.974 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.041 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.087 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-unplugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.087 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] No waiting events found dispatching network-vif-unplugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 WARNING nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received unexpected event network-vif-unplugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] No waiting events found dispatching network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.090 254096 WARNING nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received unexpected event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:39:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641268344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.502 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.508 254096 DEBUG nova.compute.provider_tree [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.521 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.546 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.547 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.606 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.606 254096 DEBUG nova.network.neutron [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.627 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.644 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.756 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.758 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.758 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Creating image(s)#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.783 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.810 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.829 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.832 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "15cb5c745a0e602074dbadf61d84b40262c4f70d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:54 np0005535469 nova_compute[254092]: 2025-11-25 16:39:54.833 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "15cb5c745a0e602074dbadf61d84b40262c4f70d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.010 254096 DEBUG nova.network.neutron [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.011 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.122 254096 DEBUG nova.virt.libvirt.imagebackend [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.181 254096 DEBUG nova.virt.libvirt.imagebackend [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.182 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] cloning images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8@snap to None/440746e9-455f-4a2f-8412-a24d1c93cb21_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1385576439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1385576439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.285 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "15cb5c745a0e602074dbadf61d84b40262c4f70d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.430 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] resizing rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.498 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.498 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.499 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.499 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.499 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.501 254096 INFO nova.compute.manager [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Terminating instance#033[00m
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.503 254096 DEBUG nova.compute.manager [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:39:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 120 op/s
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 25 11:39:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.513 254096 DEBUG nova.objects.instance [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lazy-loading 'migration_context' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.522 254096 INFO nova.virt.libvirt.driver [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance destroyed successfully.#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.523 254096 DEBUG nova.objects.instance [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Ensure instance console log exists: /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.528 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.529 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='6912c9181ae0be6aa2d2706c24ed15a5',container_format='bare',created_at=2025-11-25T16:39:49Z,direct_url=<?>,disk_format='raw',id=be0cdf5e-9d4a-430a-ba46-c7875458b1f8,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1147967949',owner='805768b696874b00aa9b3bac89550ed7',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T16:39:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': 'be0cdf5e-9d4a-430a-ba46-c7875458b1f8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.533 254096 WARNING nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.535 254096 DEBUG nova.virt.libvirt.vif [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:39:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2082208700',display_name='tempest-DeleteServersTestJSON-server-2082208700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2082208700',id=56,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-w2u0qqei',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:53Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.536 254096 DEBUG nova.network.os_vif_util [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.536 254096 DEBUG nova.network.os_vif_util [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.537 254096 DEBUG os_vif [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.540 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d4276f1-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.544 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.545 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.594 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.594 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.595 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.595 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='6912c9181ae0be6aa2d2706c24ed15a5',container_format='bare',created_at=2025-11-25T16:39:49Z,direct_url=<?>,disk_format='raw',id=be0cdf5e-9d4a-430a-ba46-c7875458b1f8,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1147967949',owner='805768b696874b00aa9b3bac89550ed7',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T16:39:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.598 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.598 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.601 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:39:55 np0005535469 nova_compute[254092]: 2025-11-25 16:39:55.640 254096 INFO os_vif [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91')#033[00m
Nov 25 11:39:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:39:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835229537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.095 254096 INFO nova.virt.libvirt.driver [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deleting instance files /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_del#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.096 254096 INFO nova.virt.libvirt.driver [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deletion of /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_del complete#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.104 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.121 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.124 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.336 254096 INFO nova.compute.manager [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.337 254096 DEBUG oslo.service.loopingcall [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.337 254096 DEBUG nova.compute.manager [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.337 254096 DEBUG nova.network.neutron [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:39:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:39:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638881181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.571 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.572 254096 DEBUG nova.objects.instance [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.590 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <uuid>440746e9-455f-4a2f-8412-a24d1c93cb21</uuid>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <name>instance-00000039</name>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:name>instance-depend-image</nova:name>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:39:55</nova:creationTime>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:user uuid="810b628f1c824b55930a996d843cc85f">tempest-ImageDependencyTests-2018589381-project-member</nova:user>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <nova:project uuid="805768b696874b00aa9b3bac89550ed7">tempest-ImageDependencyTests-2018589381</nova:project>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="be0cdf5e-9d4a-430a-ba46-c7875458b1f8"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <entry name="serial">440746e9-455f-4a2f-8412-a24d1c93cb21</entry>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <entry name="uuid">440746e9-455f-4a2f-8412-a24d1c93cb21</entry>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/440746e9-455f-4a2f-8412-a24d1c93cb21_disk">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/console.log" append="off"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:39:56 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:39:56 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:39:56 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:39:56 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.640 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.640 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.641 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Using config drive
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.657 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.855 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Creating config drive at /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config
Nov 25 11:39:56 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.860 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3fd0jc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:56.999 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3fd0jc6" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.026 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.031 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.176 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.177 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deleting local config drive /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config because it was imported into RBD.
Nov 25 11:39:57 np0005535469 systemd-machined[216343]: New machine qemu-66-instance-00000039.
Nov 25 11:39:57 np0005535469 systemd[1]: Started Virtual Machine qemu-66-instance-00000039.
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.323 254096 DEBUG nova.network.neutron [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.341 254096 INFO nova.compute.manager [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 1.00 seconds to deallocate network for instance.
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.390 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.391 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.452 254096 DEBUG oslo_concurrency.processutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:39:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 71 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 176 op/s
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.755 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.756 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088797.7564464, 440746e9-455f-4a2f-8412-a24d1c93cb21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.757 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] VM Resumed (Lifecycle Event)
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.766 254096 INFO nova.virt.libvirt.driver [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance spawned successfully.
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.766 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.793 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.801 254096 DEBUG nova.compute.manager [req-f0cc0f44-1586-482e-98c9-998fec689d39 req-1a91ea79-8fc9-4e87-8203-b342a60c6bec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-deleted-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.811 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.811 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.812 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.812 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.812 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.813 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.851 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.851 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088797.7607162, 440746e9-455f-4a2f-8412-a24d1c93cb21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.852 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] VM Started (Lifecycle Event)
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.874 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.877 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.897 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:39:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:39:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683262055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.993 254096 INFO nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 3.24 seconds to spawn the instance on the hypervisor.
Nov 25 11:39:57 np0005535469 nova_compute[254092]: 2025-11-25 16:39:57.994 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.004 254096 DEBUG oslo_concurrency.processutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.011 254096 DEBUG nova.compute.provider_tree [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.026 254096 DEBUG nova.scheduler.client.report [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.093 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.137 254096 INFO nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 4.36 seconds to build instance.
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.160 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.161 254096 INFO nova.scheduler.client.report [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68
Nov 25 11:39:58 np0005535469 nova_compute[254092]: 2025-11-25 16:39:58.231 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:39:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 71 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 KiB/s wr, 96 op/s
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:00 np0005535469 nova_compute[254092]: 2025-11-25 16:40:00.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.601837) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800601908, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 933, "num_deletes": 261, "total_data_size": 1132135, "memory_usage": 1154544, "flush_reason": "Manual Compaction"}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800660974, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1118801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32956, "largest_seqno": 33888, "table_properties": {"data_size": 1114047, "index_size": 2342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10666, "raw_average_key_size": 20, "raw_value_size": 1104317, "raw_average_value_size": 2075, "num_data_blocks": 103, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088740, "oldest_key_time": 1764088740, "file_creation_time": 1764088800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 59173 microseconds, and 4809 cpu microseconds.
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.661018) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1118801 bytes OK
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.661036) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.673391) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.673491) EVENT_LOG_v1 {"time_micros": 1764088800673480, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.673529) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1127515, prev total WAL file size 1127515, number of live WAL files 2.
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.674402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1092KB)], [71(8461KB)]
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800674481, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9783588, "oldest_snapshot_seqno": -1}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5812 keys, 9666328 bytes, temperature: kUnknown
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800743141, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9666328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9625087, "index_size": 25579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147435, "raw_average_key_size": 25, "raw_value_size": 9518266, "raw_average_value_size": 1637, "num_data_blocks": 1042, "num_entries": 5812, "num_filter_entries": 5812, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.743408) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9666328 bytes
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.748713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.3 rd, 140.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.3 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(17.4) write-amplify(8.6) OK, records in: 6348, records dropped: 536 output_compression: NoCompression
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.748749) EVENT_LOG_v1 {"time_micros": 1764088800748736, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68735, "compaction_time_cpu_micros": 24773, "output_level": 6, "num_output_files": 1, "total_output_size": 9666328, "num_input_records": 6348, "num_output_records": 5812, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800749192, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800751130, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.674229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:40:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:40:00 np0005535469 nova_compute[254092]: 2025-11-25 16:40:00.922 254096 DEBUG nova.compute.manager [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:00 np0005535469 nova_compute[254092]: 2025-11-25 16:40:00.977 254096 INFO nova.compute.manager [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] instance snapshotting#033[00m
Nov 25 11:40:01 np0005535469 nova_compute[254092]: 2025-11-25 16:40:01.372 254096 INFO nova.virt.libvirt.driver [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Beginning live snapshot process#033[00m
Nov 25 11:40:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 18 KiB/s wr, 108 op/s
Nov 25 11:40:01 np0005535469 nova_compute[254092]: 2025-11-25 16:40:01.528 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] creating snapshot(f083c1174065407790877006dc67cc02) on rbd image(440746e9-455f-4a2f-8412-a24d1c93cb21_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:40:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 25 11:40:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 25 11:40:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 25 11:40:01 np0005535469 nova_compute[254092]: 2025-11-25 16:40:01.941 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] cloning vms/440746e9-455f-4a2f-8412-a24d1c93cb21_disk@f083c1174065407790877006dc67cc02 to images/3281111a-0357-46ec-9d85-c1808ddbe20b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:40:02 np0005535469 nova_compute[254092]: 2025-11-25 16:40:02.078 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] flattening images/3281111a-0357-46ec-9d85-c1808ddbe20b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:40:02 np0005535469 nova_compute[254092]: 2025-11-25 16:40:02.254 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] removing snapshot(f083c1174065407790877006dc67cc02) on rbd image(440746e9-455f-4a2f-8412-a24d1c93cb21_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:40:02 np0005535469 nova_compute[254092]: 2025-11-25 16:40:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 25 11:40:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 25 11:40:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 25 11:40:02 np0005535469 nova_compute[254092]: 2025-11-25 16:40:02.937 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] creating snapshot(snap) on rbd image(3281111a-0357-46ec-9d85-c1808ddbe20b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.424 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.424 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.443 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:40:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 22 KiB/s wr, 110 op/s
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.511 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.511 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.519 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.520 254096 INFO nova.compute.claims [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:40:03 np0005535469 nova_compute[254092]: 2025-11-25 16:40:03.641 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 25 11:40:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 25 11:40:03 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 25 11:40:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417381252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.093 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.098 254096 DEBUG nova.compute.provider_tree [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.114 254096 DEBUG nova.scheduler.client.report [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.185 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.185 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.246 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.247 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.282 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.301 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.426 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.428 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.428 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating image(s)#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.448 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.466 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.484 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.487 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.516 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.555 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.556 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.557 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.557 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.577 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.581 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.813 254096 DEBUG nova.policy [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c1fd56de7cd4f5c9b1d85ffe8545c90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:40:04 np0005535469 nova_compute[254092]: 2025-11-25 16:40:04.985 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.042 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.139 254096 DEBUG nova.objects.instance [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.161 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.162 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Ensure instance console log exists: /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.162 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.163 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.163 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 63 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 1.5 MiB/s wr, 207 op/s
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.529 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:05 np0005535469 nova_compute[254092]: 2025-11-25 16:40:05.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873390419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.053 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.139 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Successfully created port: d79fd017-c7a6-4bfe-8c90-b3295f62f83c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.152 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.153 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.315 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4057MB free_disk=59.98812484741211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.316 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.317 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.340 254096 INFO nova.virt.libvirt.driver [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Snapshot image upload complete#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.341 254096 INFO nova.compute.manager [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 5.36 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.411 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 440746e9-455f-4a2f-8412-a24d1c93cb21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance d1ceaafd-59a6-45b1-833d-eb2a76e789be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.469 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273605996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.963 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.969 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:06 np0005535469 nova_compute[254092]: 2025-11-25 16:40:06.982 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:07 np0005535469 nova_compute[254092]: 2025-11-25 16:40:07.097 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:40:07 np0005535469 nova_compute[254092]: 2025-11-25 16:40:07.098 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 78 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 2.6 MiB/s wr, 172 op/s
Nov 25 11:40:07 np0005535469 podman[318514]: 2025-11-25 16:40:07.668382132 +0000 UTC m=+0.075276293 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 11:40:07 np0005535469 podman[318515]: 2025-11-25 16:40:07.669582084 +0000 UTC m=+0.070410431 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 11:40:07 np0005535469 podman[318516]: 2025-11-25 16:40:07.737357434 +0000 UTC m=+0.142148548 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.098 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.098 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.384 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088793.381286, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.384 254096 INFO nova.compute.manager [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.419 254096 DEBUG nova.compute.manager [None req-9105e228-2be1-4c81-9949-e028ea2f97e6 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:08 np0005535469 nova_compute[254092]: 2025-11-25 16:40:08.986 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Successfully updated port: d79fd017-c7a6-4bfe-8c90-b3295f62f83c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.006 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.007 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquired lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.008 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.085 254096 DEBUG nova.compute.manager [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-changed-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.085 254096 DEBUG nova.compute.manager [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Refreshing instance network info cache due to event network-changed-d79fd017-c7a6-4bfe-8c90-b3295f62f83c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.086 254096 DEBUG oslo_concurrency.lockutils [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:09 np0005535469 nova_compute[254092]: 2025-11-25 16:40:09.243 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 78 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 135 op/s
Nov 25 11:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.284 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updating instance_info_cache with network_info: [{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.359 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Releasing lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.359 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance network_info: |[{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.359 254096 DEBUG oslo_concurrency.lockutils [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.360 254096 DEBUG nova.network.neutron [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Refreshing network info cache for port d79fd017-c7a6-4bfe-8c90-b3295f62f83c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.362 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start _get_guest_xml network_info=[{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.367 254096 WARNING nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.377 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.378 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.382 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.382 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.383 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.383 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.383 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.386 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.389 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 25 11:40:10 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 25 11:40:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392095743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.878 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.943 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:10 np0005535469 nova_compute[254092]: 2025-11-25 16:40:10.949 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022952161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.438 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.441 254096 DEBUG nova.virt.libvirt.vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:04Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.442 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.443 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.445 254096 DEBUG nova.objects.instance [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.462 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <uuid>d1ceaafd-59a6-45b1-833d-eb2a76e789be</uuid>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <name>instance-0000003a</name>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1732543352</nova:name>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:40:10</nova:creationTime>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <nova:port uuid="d79fd017-c7a6-4bfe-8c90-b3295f62f83c">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <entry name="serial">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <entry name="uuid">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e2:7b:b0"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <target dev="tapd79fd017-c7"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log" append="off"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:40:11 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:40:11 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:40:11 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:40:11 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.464 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Preparing to wait for external event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.465 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.465 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.465 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.466 254096 DEBUG nova.virt.libvirt.vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:04Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.467 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.468 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.468 254096 DEBUG os_vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.471 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.474 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd79fd017-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.474 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd79fd017-c7, col_values=(('external_ids', {'iface-id': 'd79fd017-c7a6-4bfe-8c90-b3295f62f83c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:7b:b0', 'vm-uuid': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:11 np0005535469 NetworkManager[48891]: <info>  [1764088811.4775] manager: (tapd79fd017-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.489 254096 INFO os_vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')#033[00m
Nov 25 11:40:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.7 MiB/s wr, 167 op/s
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.544 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.545 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.545 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:e2:7b:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.546 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Using config drive#033[00m
Nov 25 11:40:11 np0005535469 nova_compute[254092]: 2025-11-25 16:40:11.577 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 25 11:40:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 25 11:40:11 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.132 254096 DEBUG nova.network.neutron [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updated VIF entry in instance network info cache for port d79fd017-c7a6-4bfe-8c90-b3295f62f83c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.133 254096 DEBUG nova.network.neutron [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updating instance_info_cache with network_info: [{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.150 254096 DEBUG oslo_concurrency.lockutils [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.193 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating config drive at /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.198 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpin2mst4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.336 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpin2mst4b" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.366 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.370 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.539 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.541 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting local config drive /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config because it was imported into RBD.#033[00m
Nov 25 11:40:12 np0005535469 kernel: tapd79fd017-c7: entered promiscuous mode
Nov 25 11:40:12 np0005535469 NetworkManager[48891]: <info>  [1764088812.6039] manager: (tapd79fd017-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:12Z|00534|binding|INFO|Claiming lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c for this chassis.
Nov 25 11:40:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:12Z|00535|binding|INFO|d79fd017-c7a6-4bfe-8c90-b3295f62f83c: Claiming fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.610 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "440746e9-455f-4a2f-8412-a24d1c93cb21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "440746e9-455f-4a2f-8412-a24d1c93cb21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.612 254096 INFO nova.compute.manager [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Terminating instance#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.613 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.613 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquired lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.614 254096 DEBUG nova.network.neutron [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.614 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.616 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.617 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a15cac4c-bcb2-45c0-b1eb-6d492721c4b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.632 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:40:12 np0005535469 systemd-udevd[318711]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.634 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a10320d0-6d7b-4d92-bf36-8a7db67215ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.636 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[097e22e8-12b3-482f-9aba-422fa2a99171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 systemd-machined[216343]: New machine qemu-67-instance-0000003a.
Nov 25 11:40:12 np0005535469 NetworkManager[48891]: <info>  [1764088812.6462] device (tapd79fd017-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:40:12 np0005535469 NetworkManager[48891]: <info>  [1764088812.6475] device (tapd79fd017-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.649 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d41406-549b-42e7-8a45-91684eb14d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 systemd[1]: Started Virtual Machine qemu-67-instance-0000003a.
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.679 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01b35123-e976-4be1-add9-8ab3f6c6606b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:12Z|00536|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c ovn-installed in OVS
Nov 25 11:40:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:12Z|00537|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c up in Southbound
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.712 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a281eb3c-e9ac-4eb5-98f3-7321ac6637fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.715 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.717 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06b9dda6-7912-4c0a-9108-c98596b3dbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 systemd-udevd[318715]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:12 np0005535469 NetworkManager[48891]: <info>  [1764088812.7186] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/229)
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.755 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5187c3bc-dcb4-4289-a77d-81edb8d17e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.758 254096 DEBUG nova.network.neutron [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.764 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e12a319b-4100-415c-ae01-5c4341aa07b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 NetworkManager[48891]: <info>  [1764088812.7843] device (tap62c0a8be-b0): carrier: link connected
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.788 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2acfc7-b6f7-46cc-b770-08ce9acea0dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf0a6bb-ffa7-4a89-94db-3c504b6c7e3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527037, 'reachable_time': 17142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318744, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.831 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94714e43-40d2-49c4-9d50-a97d5c4f9c34]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527037, 'tstamp': 527037}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318745, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b63be33-afac-4f97-b688-26de5bbfd7f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527037, 'reachable_time': 17142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318746, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.890 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[313764e7-d3ea-4a00-b817-065d9be495ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ccfcd0-1965-4565-bbbb-14decd97a95b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.960 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:12 np0005535469 NetworkManager[48891]: <info>  [1764088812.9632] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 25 11:40:12 np0005535469 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:12Z|00538|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 11:40:12 np0005535469 nova_compute[254092]: 2025-11-25 16:40:12.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.991 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.992 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9c6a74-5b3f-4a33-bf85-2c7e2f4e4447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.993 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:40:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.996 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.093 254096 DEBUG nova.network.neutron [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.112 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Releasing lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.113 254096 DEBUG nova.compute.manager [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.114 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.115 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.115 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.132 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088813.132157, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.133 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Started (Lifecycle Event)#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.154 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.160 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088813.1331186, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.161 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.180 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.185 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.206 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:13 np0005535469 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 25 11:40:13 np0005535469 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Consumed 1.028s CPU time.
Nov 25 11:40:13 np0005535469 systemd-machined[216343]: Machine qemu-66-instance-00000039 terminated.
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.289 254096 DEBUG nova.compute.manager [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.290 254096 DEBUG oslo_concurrency.lockutils [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.290 254096 DEBUG oslo_concurrency.lockutils [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.291 254096 DEBUG oslo_concurrency.lockutils [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.291 254096 DEBUG nova.compute.manager [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Processing event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.292 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.297 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088813.2968247, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.298 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.300 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.305 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance spawned successfully.#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.305 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.308 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.321 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.335 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.340 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.340 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.341 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.341 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.341 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.342 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.351 254096 INFO nova.virt.libvirt.driver [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance destroyed successfully.#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.352 254096 DEBUG nova.objects.instance [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lazy-loading 'resources' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.374 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.420 254096 INFO nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 8.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.421 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:13 np0005535469 podman[318821]: 2025-11-25 16:40:13.433354401 +0000 UTC m=+0.066451974 container create 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 11:40:13 np0005535469 systemd[1]: Started libpod-conmon-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope.
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.484 254096 INFO nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 9.99 seconds to build instance.#033[00m
Nov 25 11:40:13 np0005535469 podman[318821]: 2025-11-25 16:40:13.399886313 +0000 UTC m=+0.032983906 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.504 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 1.5 MiB/s wr, 94 op/s
Nov 25 11:40:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6577cfdadc0151800bbbfb7a549b8162c86092c3bd8a05ddb66da72e8582cad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:13 np0005535469 podman[318821]: 2025-11-25 16:40:13.54202407 +0000 UTC m=+0.175121673 container init 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:40:13 np0005535469 podman[318821]: 2025-11-25 16:40:13.548234018 +0000 UTC m=+0.181331591 container start 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.570 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:13 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : New worker (318860) forked
Nov 25 11:40:13 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : Loading success.
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.596 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:13 np0005535469 nova_compute[254092]: 2025-11-25 16:40:13.596 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:40:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:13.617 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:13.617 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 25 11:40:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 25 11:40:13 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.302 254096 INFO nova.virt.libvirt.driver [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deleting instance files /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21_del#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.303 254096 INFO nova.virt.libvirt.driver [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deletion of /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21_del complete#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.361 254096 INFO nova.compute.manager [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.362 254096 DEBUG oslo.service.loopingcall [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.362 254096 DEBUG nova.compute.manager [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.362 254096 DEBUG nova.network.neutron [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.644 254096 DEBUG nova.network.neutron [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.656 254096 DEBUG nova.network.neutron [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.669 254096 INFO nova.compute.manager [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 0.31 seconds to deallocate network for instance.#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.723 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.723 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:14.848 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:14.850 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:40:14 np0005535469 nova_compute[254092]: 2025-11-25 16:40:14.850 254096 DEBUG oslo_concurrency.processutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176849215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.316 254096 DEBUG oslo_concurrency.processutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.322 254096 DEBUG nova.compute.provider_tree [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.341 254096 DEBUG nova.scheduler.client.report [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.359 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.381 254096 INFO nova.scheduler.client.report [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Deleted allocations for instance 440746e9-455f-4a2f-8412-a24d1c93cb21#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG nova.compute.manager [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG oslo_concurrency.lockutils [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG oslo_concurrency.lockutils [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG oslo_concurrency.lockutils [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG nova.compute.manager [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.386 254096 WARNING nova.compute.manager [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state None.#033[00m
Nov 25 11:40:15 np0005535469 nova_compute[254092]: 2025-11-25 16:40:15.446 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Nov 25 11:40:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:16 np0005535469 nova_compute[254092]: 2025-11-25 16:40:16.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.324 254096 INFO nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Rebuilding instance#033[00m
Nov 25 11:40:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 251 op/s
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.565 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.577 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.749 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_requests' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.760 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.777 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.787 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.797 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:40:17 np0005535469 nova_compute[254092]: 2025-11-25 16:40:17.800 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:40:18 np0005535469 nova_compute[254092]: 2025-11-25 16:40:18.157 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 186 op/s
Nov 25 11:40:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 25 11:40:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 25 11:40:20 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 25 11:40:21 np0005535469 nova_compute[254092]: 2025-11-25 16:40:21.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 186 op/s
Nov 25 11:40:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:21.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:23 np0005535469 nova_compute[254092]: 2025-11-25 16:40:23.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 155 op/s
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:40:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ad4a56ac-c18f-41c6-9c4f-644d39d15804 does not exist
Nov 25 11:40:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d3b2f856-fb28-40ba-a7fa-0a176a19862e does not exist
Nov 25 11:40:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 40e2874e-d650-4655-a137-1acfc4fc3894 does not exist
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:40:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:40:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:40:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.258468061 +0000 UTC m=+0.113169911 container create 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.169957749 +0000 UTC m=+0.024659629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:40:25 np0005535469 systemd[1]: Started libpod-conmon-5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983.scope.
Nov 25 11:40:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.48589098 +0000 UTC m=+0.340592860 container init 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.496038856 +0000 UTC m=+0.350740706 container start 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:40:25 np0005535469 tender_williamson[319179]: 167 167
Nov 25 11:40:25 np0005535469 systemd[1]: libpod-5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983.scope: Deactivated successfully.
Nov 25 11:40:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 1.7 KiB/s wr, 62 op/s
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.639370854 +0000 UTC m=+0.494072764 container attach 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.641622786 +0000 UTC m=+0.496324636 container died 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:40:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7e10a12645e05a225d914b79c0acc5cf62a63c48ce116915f0a650b04d783ef0-merged.mount: Deactivated successfully.
Nov 25 11:40:25 np0005535469 podman[319163]: 2025-11-25 16:40:25.838152377 +0000 UTC m=+0.692854227 container remove 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:40:25 np0005535469 systemd[1]: libpod-conmon-5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983.scope: Deactivated successfully.
Nov 25 11:40:26 np0005535469 podman[319203]: 2025-11-25 16:40:26.068991009 +0000 UTC m=+0.089295293 container create ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:40:26 np0005535469 podman[319203]: 2025-11-25 16:40:26.002598889 +0000 UTC m=+0.022903203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:40:26 np0005535469 systemd[1]: Started libpod-conmon-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope.
Nov 25 11:40:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:26 np0005535469 podman[319203]: 2025-11-25 16:40:26.175362415 +0000 UTC m=+0.195666729 container init ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:40:26 np0005535469 podman[319203]: 2025-11-25 16:40:26.18729342 +0000 UTC m=+0.207597704 container start ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:40:26 np0005535469 podman[319203]: 2025-11-25 16:40:26.196854649 +0000 UTC m=+0.217159023 container attach ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:40:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:26Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 11:40:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:26Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 11:40:26 np0005535469 nova_compute[254092]: 2025-11-25 16:40:26.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:27 np0005535469 naughty_gould[319220]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:40:27 np0005535469 naughty_gould[319220]: --> relative data size: 1.0
Nov 25 11:40:27 np0005535469 naughty_gould[319220]: --> All data devices are unavailable
Nov 25 11:40:27 np0005535469 systemd[1]: libpod-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope: Deactivated successfully.
Nov 25 11:40:27 np0005535469 systemd[1]: libpod-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope: Consumed 1.013s CPU time.
Nov 25 11:40:27 np0005535469 podman[319203]: 2025-11-25 16:40:27.265286734 +0000 UTC m=+1.285591018 container died ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:40:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 89 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 606 KiB/s wr, 13 op/s
Nov 25 11:40:27 np0005535469 nova_compute[254092]: 2025-11-25 16:40:27.848 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:40:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed-merged.mount: Deactivated successfully.
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.377 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088813.3462272, 440746e9-455f-4a2f-8412-a24d1c93cb21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.377 254096 INFO nova.compute.manager [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.395 254096 DEBUG nova.compute.manager [None req-1dce0595-399d-4a18-9630-cf72d57deefc - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:28 np0005535469 podman[319203]: 2025-11-25 16:40:28.507896907 +0000 UTC m=+2.528201191 container remove ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:40:28 np0005535469 systemd[1]: libpod-conmon-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope: Deactivated successfully.
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.534 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.535 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.561 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.676 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.678 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.685 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.686 254096 INFO nova.compute.claims [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:40:28 np0005535469 nova_compute[254092]: 2025-11-25 16:40:28.830 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790846405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.287 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.295 254096 DEBUG nova.compute.provider_tree [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.310 254096 DEBUG nova.scheduler.client.report [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.348 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.349 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:40:29 np0005535469 podman[319422]: 2025-11-25 16:40:29.348767759 +0000 UTC m=+0.062371913 container create 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:40:29 np0005535469 systemd[1]: Started libpod-conmon-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope.
Nov 25 11:40:29 np0005535469 podman[319422]: 2025-11-25 16:40:29.312409742 +0000 UTC m=+0.026013916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.423 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.424 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:40:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.477 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:40:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 89 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 606 KiB/s wr, 13 op/s
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.568 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:40:29 np0005535469 podman[319422]: 2025-11-25 16:40:29.666798376 +0000 UTC m=+0.380402550 container init 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:40:29 np0005535469 podman[319422]: 2025-11-25 16:40:29.674056572 +0000 UTC m=+0.387660726 container start 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:40:29 np0005535469 keen_snyder[319438]: 167 167
Nov 25 11:40:29 np0005535469 systemd[1]: libpod-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope: Deactivated successfully.
Nov 25 11:40:29 np0005535469 conmon[319438]: conmon 70f22833e47b1d5ee682 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope/container/memory.events
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.697 254096 DEBUG nova.policy [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '509d158fe3f34e219f96739bb51bd6d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ea58a9bae9c474bb9f0b9c821689054', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:40:29 np0005535469 podman[319422]: 2025-11-25 16:40:29.774277731 +0000 UTC m=+0.487881905 container attach 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:40:29 np0005535469 podman[319422]: 2025-11-25 16:40:29.77533882 +0000 UTC m=+0.488942994 container died 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.808 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.810 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.810 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Creating image(s)
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.833 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.855 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.880 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.886 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:40:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-089a85581a67d88e7e7cfb015518a4206f4c5c430a3744ff4c3b8f4c600dddc5-merged.mount: Deactivated successfully.
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.965 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.967 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.968 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.968 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.990 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:40:29 np0005535469 nova_compute[254092]: 2025-11-25 16:40:29.994 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 32b30534-761a-439a-85e5-4e2fe8f507df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:40:30 np0005535469 podman[319422]: 2025-11-25 16:40:30.190699979 +0000 UTC m=+0.904304123 container remove 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 11:40:30 np0005535469 systemd[1]: libpod-conmon-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope: Deactivated successfully.
Nov 25 11:40:30 np0005535469 podman[319553]: 2025-11-25 16:40:30.347700158 +0000 UTC m=+0.023897500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:40:30 np0005535469 podman[319553]: 2025-11-25 16:40:30.531250847 +0000 UTC m=+0.207448169 container create e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:40:30 np0005535469 systemd[1]: Started libpod-conmon-e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85.scope.
Nov 25 11:40:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:30 np0005535469 podman[319553]: 2025-11-25 16:40:30.791023845 +0000 UTC m=+0.467221177 container init e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:40:30 np0005535469 podman[319553]: 2025-11-25 16:40:30.797999733 +0000 UTC m=+0.474197055 container start e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:40:31 np0005535469 podman[319553]: 2025-11-25 16:40:31.027631823 +0000 UTC m=+0.703829175 container attach e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.422 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 32b30534-761a-439a-85e5-4e2fe8f507df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.505 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] resizing rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:40:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 121 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]: {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:    "0": [
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:        {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "devices": [
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "/dev/loop3"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            ],
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_name": "ceph_lv0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_size": "21470642176",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "name": "ceph_lv0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "tags": {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cluster_name": "ceph",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.crush_device_class": "",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.encrypted": "0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osd_id": "0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.type": "block",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.vdo": "0"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            },
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "type": "block",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "vg_name": "ceph_vg0"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:        }
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:    ],
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:    "1": [
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:        {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "devices": [
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "/dev/loop4"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            ],
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_name": "ceph_lv1",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_size": "21470642176",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "name": "ceph_lv1",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "tags": {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cluster_name": "ceph",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.crush_device_class": "",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.encrypted": "0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osd_id": "1",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.type": "block",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.vdo": "0"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            },
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "type": "block",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "vg_name": "ceph_vg1"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:        }
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:    ],
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:    "2": [
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:        {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "devices": [
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "/dev/loop5"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            ],
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_name": "ceph_lv2",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_size": "21470642176",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "name": "ceph_lv2",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "tags": {
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.cluster_name": "ceph",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.crush_device_class": "",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.encrypted": "0",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osd_id": "2",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.type": "block",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:                "ceph.vdo": "0"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            },
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "type": "block",
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:            "vg_name": "ceph_vg2"
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:        }
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]:    ]
Nov 25 11:40:31 np0005535469 pedantic_ardinghelli[319570]: }
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.636 254096 DEBUG nova.objects.instance [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lazy-loading 'migration_context' on Instance uuid 32b30534-761a-439a-85e5-4e2fe8f507df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.650 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.650 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Ensure instance console log exists: /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.651 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.651 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:40:31 np0005535469 nova_compute[254092]: 2025-11-25 16:40:31.652 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:40:31 np0005535469 systemd[1]: libpod-e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85.scope: Deactivated successfully.
Nov 25 11:40:31 np0005535469 podman[319553]: 2025-11-25 16:40:31.663180826 +0000 UTC m=+1.339378148 container died e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:40:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf-merged.mount: Deactivated successfully.
Nov 25 11:40:31 np0005535469 podman[319553]: 2025-11-25 16:40:31.730855021 +0000 UTC m=+1.407052353 container remove e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:40:31 np0005535469 systemd[1]: libpod-conmon-e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85.scope: Deactivated successfully.
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.325381991 +0000 UTC m=+0.040841989 container create f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:40:32 np0005535469 systemd[1]: Started libpod-conmon-f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e.scope.
Nov 25 11:40:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.403906712 +0000 UTC m=+0.119366750 container init f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.309409998 +0000 UTC m=+0.024870026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.410916031 +0000 UTC m=+0.126376039 container start f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.414320583 +0000 UTC m=+0.129780621 container attach f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:40:32 np0005535469 sharp_mestorf[319821]: 167 167
Nov 25 11:40:32 np0005535469 systemd[1]: libpod-f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e.scope: Deactivated successfully.
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.415609399 +0000 UTC m=+0.131069407 container died f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:40:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fdfea0d00eb1c161c578aaa9156760f774feba7f3b10345be3524b5c3813c858-merged.mount: Deactivated successfully.
Nov 25 11:40:32 np0005535469 podman[319805]: 2025-11-25 16:40:32.48532109 +0000 UTC m=+0.200781098 container remove f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:40:32 np0005535469 systemd[1]: libpod-conmon-f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e.scope: Deactivated successfully.
Nov 25 11:40:32 np0005535469 podman[319845]: 2025-11-25 16:40:32.659421583 +0000 UTC m=+0.052677340 container create be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:40:32 np0005535469 systemd[1]: Started libpod-conmon-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope.
Nov 25 11:40:32 np0005535469 podman[319845]: 2025-11-25 16:40:32.628107784 +0000 UTC m=+0.021363561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:40:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:32 np0005535469 podman[319845]: 2025-11-25 16:40:32.747976476 +0000 UTC m=+0.141232253 container init be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 11:40:32 np0005535469 podman[319845]: 2025-11-25 16:40:32.754110542 +0000 UTC m=+0.147366299 container start be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:40:32 np0005535469 podman[319845]: 2025-11-25 16:40:32.767092204 +0000 UTC m=+0.160347961 container attach be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 kernel: tapd79fd017-c7 (unregistering): left promiscuous mode
Nov 25 11:40:33 np0005535469 NetworkManager[48891]: <info>  [1764088833.2078] device (tapd79fd017-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:40:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:33Z|00539|binding|INFO|Releasing lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c from this chassis (sb_readonly=0)
Nov 25 11:40:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:33Z|00540|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c down in Southbound
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:33Z|00541|binding|INFO|Removing iface tapd79fd017-c7 ovn-installed in OVS
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.219 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance shutdown successfully after 15 seconds.#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.226 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.227 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31e8acf2-46d5-4b25-9be7-80de8b284029]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.231 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 25 11:40:33 np0005535469 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Consumed 13.295s CPU time.
Nov 25 11:40:33 np0005535469 systemd-machined[216343]: Machine qemu-67-instance-0000003a terminated.
Nov 25 11:40:33 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : haproxy version is 2.8.14-c23fe91
Nov 25 11:40:33 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : path to executable is /usr/sbin/haproxy
Nov 25 11:40:33 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [WARNING]  (318858) : Exiting Master process...
Nov 25 11:40:33 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [ALERT]    (318858) : Current worker (318860) exited with code 143 (Terminated)
Nov 25 11:40:33 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [WARNING]  (318858) : All workers exited. Exiting... (0)
Nov 25 11:40:33 np0005535469 systemd[1]: libpod-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope: Deactivated successfully.
Nov 25 11:40:33 np0005535469 conmon[318854]: conmon 03cc187dd0506b8a2415 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope/container/memory.events
Nov 25 11:40:33 np0005535469 podman[319891]: 2025-11-25 16:40:33.376354273 +0000 UTC m=+0.043640295 container died 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:40:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e6577cfdadc0151800bbbfb7a549b8162c86092c3bd8a05ddb66da72e8582cad-merged.mount: Deactivated successfully.
Nov 25 11:40:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6-userdata-shm.mount: Deactivated successfully.
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 podman[319891]: 2025-11-25 16:40:33.451074991 +0000 UTC m=+0.118361013 container cleanup 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 systemd[1]: libpod-conmon-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope: Deactivated successfully.
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.467 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance destroyed successfully.#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.474 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance destroyed successfully.#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.475 254096 DEBUG nova.virt.libvirt.vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project
-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:16Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.476 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.477 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.478 254096 DEBUG os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.482 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd79fd017-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.488 254096 INFO os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')#033[00m
Nov 25 11:40:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 121 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 11:40:33 np0005535469 podman[319929]: 2025-11-25 16:40:33.713391387 +0000 UTC m=+0.235148371 container remove 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.720 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bbf6dc-2ba0-4274-b59b-cba74359e7f0]: (4, ('Tue Nov 25 04:40:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6)\n03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6\nTue Nov 25 04:40:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6)\n03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.723 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb79b536-143d-4f24-9d68-0779b6422300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 11:40:33 np0005535469 nova_compute[254092]: 2025-11-25 16:40:33.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef4755e-9cfc-4f3d-bca9-e90daf577db1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 elated_hellman[319861]: {
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "osd_id": 1,
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "type": "bluestore"
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:    },
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "osd_id": 2,
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "type": "bluestore"
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:    },
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "osd_id": 0,
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:        "type": "bluestore"
Nov 25 11:40:33 np0005535469 elated_hellman[319861]:    }
Nov 25 11:40:33 np0005535469 elated_hellman[319861]: }
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d16d3ece-a0b8-4a90-b656-dea68c0c53fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c246023-bf7f-42c3-9af0-52411961fb84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 systemd[1]: libpod-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope: Deactivated successfully.
Nov 25 11:40:33 np0005535469 systemd[1]: libpod-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope: Consumed 1.003s CPU time.
Nov 25 11:40:33 np0005535469 podman[319845]: 2025-11-25 16:40:33.77948177 +0000 UTC m=+1.172737527 container died be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93ad2421-1aa7-4ff4-b001-b9344e94bb41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527029, 'reachable_time': 21510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319992, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.792 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:40:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.792 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0f90c2-9d63-4355-9f6b-461d0144ea6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:33 np0005535469 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 11:40:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331-merged.mount: Deactivated successfully.
Nov 25 11:40:33 np0005535469 podman[319845]: 2025-11-25 16:40:33.967445069 +0000 UTC m=+1.360700826 container remove be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:40:33 np0005535469 systemd[1]: libpod-conmon-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope: Deactivated successfully.
Nov 25 11:40:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:40:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:40:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:40:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:40:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7fa11e78-f5be-4192-8f3a-937177935258 does not exist
Nov 25 11:40:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b04c556a-1bfb-42bb-980e-4344e2143822 does not exist
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.111 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Successfully created port: 9691504b-429d-44e8-bdf5-7f223c5b0527 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.290 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting instance files /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.291 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deletion of /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del complete#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.430 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.431 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating image(s)#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.453 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.484 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.511 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.516 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.558 254096 DEBUG nova.compute.manager [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.559 254096 DEBUG oslo_concurrency.lockutils [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.560 254096 DEBUG oslo_concurrency.lockutils [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.560 254096 DEBUG oslo_concurrency.lockutils [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.561 254096 DEBUG nova.compute.manager [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.561 254096 WARNING nova.compute.manager [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.570 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.570 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.588 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.610 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.611 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.612 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.612 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.634 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.638 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.707 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.708 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.716 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.716 254096 INFO nova.compute.claims [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.862 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:34 np0005535469 nova_compute[254092]: 2025-11-25 16:40:34.980 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.046 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.126 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.127 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Ensure instance console log exists: /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.127 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.127 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.128 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.130 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start _get_guest_xml network_info=[{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.134 254096 WARNING nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.138 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.139 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.141 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.142 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.142 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.142 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.143 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.143 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.144 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.159 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956529383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.320 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.326 254096 DEBUG nova.compute.provider_tree [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.342 254096 DEBUG nova.scheduler.client.report [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.368 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.368 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.415 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.416 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.432 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.459 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:40:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 122 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 4.4 MiB/s wr, 129 op/s
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.546 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.549 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.550 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Creating image(s)#033[00m
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232557520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.580 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.602 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.625 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.628 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.659 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.679 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.685 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.718 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.719 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.720 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.720 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.738 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.741 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:35 np0005535469 nova_compute[254092]: 2025-11-25 16:40:35.854 254096 DEBUG nova.policy [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a304e40f25749e49b171b1db4828ff1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a6a1f7b5bb9482d85239a3b39051837', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.068 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965256065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.119 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] resizing rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.143 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.145 254096 DEBUG nova.virt.libvirt.vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:34Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.145 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.146 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.148 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <uuid>d1ceaafd-59a6-45b1-833d-eb2a76e789be</uuid>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <name>instance-0000003a</name>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1732543352</nova:name>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:40:35</nova:creationTime>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <nova:port uuid="d79fd017-c7a6-4bfe-8c90-b3295f62f83c">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <entry name="serial">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <entry name="uuid">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e2:7b:b0"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <target dev="tapd79fd017-c7"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log" append="off"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:40:36 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:40:36 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:40:36 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:40:36 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.150 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Preparing to wait for external event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.150 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.151 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.151 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.151 254096 DEBUG nova.virt.libvirt.vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:34Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.152 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.152 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.153 254096 DEBUG os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.154 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.154 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.157 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd79fd017-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.157 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd79fd017-c7, col_values=(('external_ids', {'iface-id': 'd79fd017-c7a6-4bfe-8c90-b3295f62f83c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:7b:b0', 'vm-uuid': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:36 np0005535469 NetworkManager[48891]: <info>  [1764088836.1595] manager: (tapd79fd017-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.165 254096 INFO os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.202 254096 DEBUG nova.objects.instance [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lazy-loading 'migration_context' on Instance uuid bd217c57-20a2-41c4-a969-7a4d94f0c7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.219 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.220 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Ensure instance console log exists: /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.220 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.220 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.221 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.225 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.226 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.226 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:e2:7b:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.226 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Using config drive#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.244 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.259 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.313 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'keypairs' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.616 254096 DEBUG nova.compute.manager [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.616 254096 DEBUG oslo_concurrency.lockutils [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.617 254096 DEBUG oslo_concurrency.lockutils [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.617 254096 DEBUG oslo_concurrency.lockutils [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.617 254096 DEBUG nova.compute.manager [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Processing event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.661 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Successfully updated port: 9691504b-429d-44e8-bdf5-7f223c5b0527 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.684 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.685 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquired lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.685 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.793 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating config drive at /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.797 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bm0ixpo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.889 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.934 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bm0ixpo" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.958 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:36 np0005535469 nova_compute[254092]: 2025-11-25 16:40:36.962 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.117 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.118 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting local config drive /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config because it was imported into RBD.#033[00m
Nov 25 11:40:37 np0005535469 kernel: tapd79fd017-c7: entered promiscuous mode
Nov 25 11:40:37 np0005535469 NetworkManager[48891]: <info>  [1764088837.1671] manager: (tapd79fd017-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Nov 25 11:40:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:37Z|00542|binding|INFO|Claiming lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c for this chassis.
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:37Z|00543|binding|INFO|d79fd017-c7a6-4bfe-8c90-b3295f62f83c: Claiming fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.177 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.178 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.179 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9#033[00m
Nov 25 11:40:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:37Z|00544|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c ovn-installed in OVS
Nov 25 11:40:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:37Z|00545|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c up in Southbound
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[baf01e6c-8a47-4618-98a2-04a2b9bdc433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.192 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.193 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf5d70f-bdbc-445a-b67f-dbdd1878bede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.194 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[83ac86fc-f067-4dcf-8879-d57e18fecf56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 systemd-udevd[320544]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:37 np0005535469 systemd-machined[216343]: New machine qemu-68-instance-0000003a.
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.204 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b4233e-56b8-4751-813c-20d2454d5f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 NetworkManager[48891]: <info>  [1764088837.2059] device (tapd79fd017-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:40:37 np0005535469 NetworkManager[48891]: <info>  [1764088837.2069] device (tapd79fd017-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:40:37 np0005535469 systemd[1]: Started Virtual Machine qemu-68-instance-0000003a.
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.227 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45c1d692-4fd3-4e6c-8c76-232bf9439763]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.255 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fbdbe9-af2e-4009-b950-50f8d8212d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d065bc4-79a2-4361-be0f-987329d96152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 NetworkManager[48891]: <info>  [1764088837.2627] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Nov 25 11:40:37 np0005535469 systemd-udevd[320548]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.270 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Successfully created port: 341145f6-8319-4c2d-aa9d-9d7475a5e7eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.290 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6e99ce-c68d-47b7-b080-23d069a15728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.296 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba71563a-3630-477b-abdb-2a84f67f60d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 NetworkManager[48891]: <info>  [1764088837.3146] device (tap62c0a8be-b0): carrier: link connected
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.323 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a1923b-fa87-4348-80ca-2ba9503e42af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.340 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b521e5f9-59e1-4321-abe1-81a6f664c4e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529490, 'reachable_time': 26368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320577, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[823875e6-92f9-415e-828a-eed3f01e0b35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529490, 'tstamp': 529490}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320578, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.370 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ada9c8b-06ad-4680-8ab8-cf9ce2eaf54b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529490, 'reachable_time': 26368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320579, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.404 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe2070-2258-41b0-b0c6-7f33beda7056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.466 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7210499c-75d1-4d72-8656-dcf029908d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.468 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:37 np0005535469 NetworkManager[48891]: <info>  [1764088837.4714] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 25 11:40:37 np0005535469 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.475 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:37Z|00546|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.477 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.478 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58b38619-62e2-4bee-a8b2-1e5e26854d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.479 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:40:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.480 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 134 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.617 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for d1ceaafd-59a6-45b1-833d-eb2a76e789be due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.618 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088837.6171453, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.618 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Started (Lifecycle Event)#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.621 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.624 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.627 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance spawned successfully.#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.628 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.640 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.646 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.650 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.651 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.651 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.651 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.652 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.652 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.679 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.679 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088837.62036, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.679 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.691 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.694 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088837.6240833, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.694 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.715 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.717 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.745 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.807 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.892 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.892 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:37 np0005535469 nova_compute[254092]: 2025-11-25 16:40:37.892 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:40:37 np0005535469 podman[320653]: 2025-11-25 16:40:37.938408459 +0000 UTC m=+0.114978100 container create a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:40:37 np0005535469 podman[320653]: 2025-11-25 16:40:37.843941167 +0000 UTC m=+0.020510798 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:40:37 np0005535469 systemd[1]: Started libpod-conmon-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20.scope.
Nov 25 11:40:38 np0005535469 nova_compute[254092]: 2025-11-25 16:40:38.008 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c090a3c2d63b20744daa816dd216a1d72f1b7b16b76ad011f3f5fdfd906ac44b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:38 np0005535469 podman[320653]: 2025-11-25 16:40:38.036618373 +0000 UTC m=+0.213188034 container init a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 11:40:38 np0005535469 podman[320653]: 2025-11-25 16:40:38.042674428 +0000 UTC m=+0.219244069 container start a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 11:40:38 np0005535469 podman[320666]: 2025-11-25 16:40:38.04825359 +0000 UTC m=+0.072136669 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:40:38 np0005535469 podman[320670]: 2025-11-25 16:40:38.057988273 +0000 UTC m=+0.078128260 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:40:38 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : New worker (320735) forked
Nov 25 11:40:38 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : Loading success.
Nov 25 11:40:38 np0005535469 podman[320671]: 2025-11-25 16:40:38.071041568 +0000 UTC m=+0.087054373 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:40:38 np0005535469 nova_compute[254092]: 2025-11-25 16:40:38.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:38 np0005535469 nova_compute[254092]: 2025-11-25 16:40:38.775 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-changed-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:38 np0005535469 nova_compute[254092]: 2025-11-25 16:40:38.775 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Refreshing instance network info cache due to event network-changed-9691504b-429d-44e8-bdf5-7f223c5b0527. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:40:38 np0005535469 nova_compute[254092]: 2025-11-25 16:40:38.775 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.047 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updating instance_info_cache with network_info: [{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.075 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Releasing lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.076 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance network_info: |[{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.077 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.077 254096 DEBUG nova.network.neutron [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Refreshing network info cache for port 9691504b-429d-44e8-bdf5-7f223c5b0527 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.080 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start _get_guest_xml network_info=[{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.084 254096 WARNING nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.090 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.090 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.093 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.094 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.094 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.095 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.095 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.095 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.099 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 134 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 5.2 MiB/s wr, 128 op/s
Nov 25 11:40:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3099929923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.555 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.577 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:39 np0005535469 nova_compute[254092]: 2025-11-25 16:40:39.581 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352668002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.051 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.054 254096 DEBUG nova.virt.libvirt.vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-849851407',display_name='tempest-InstanceActionsNegativeTestJSON-server-849851407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-849851407',id=59,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ea58a9bae9c474bb9f0b9c821689054',ramdisk_id='',reservation_id='r-umqrt77x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1833085706',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1833085706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:29Z,user_data=None,user_id='509d158fe3f34e219f96739bb51bd6d9',uuid=32b30534-761a-439a-85e5-4e2fe8f507df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.055 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converting VIF {"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.056 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.057 254096 DEBUG nova.objects.instance [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32b30534-761a-439a-85e5-4e2fe8f507df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.077 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <uuid>32b30534-761a-439a-85e5-4e2fe8f507df</uuid>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <name>instance-0000003b</name>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-849851407</nova:name>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:40:39</nova:creationTime>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:user uuid="509d158fe3f34e219f96739bb51bd6d9">tempest-InstanceActionsNegativeTestJSON-1833085706-project-member</nova:user>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:project uuid="1ea58a9bae9c474bb9f0b9c821689054">tempest-InstanceActionsNegativeTestJSON-1833085706</nova:project>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <nova:port uuid="9691504b-429d-44e8-bdf5-7f223c5b0527">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <entry name="serial">32b30534-761a-439a-85e5-4e2fe8f507df</entry>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <entry name="uuid">32b30534-761a-439a-85e5-4e2fe8f507df</entry>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/32b30534-761a-439a-85e5-4e2fe8f507df_disk">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/32b30534-761a-439a-85e5-4e2fe8f507df_disk.config">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f4:ec:f2"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <target dev="tap9691504b-42"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/console.log" append="off"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:40:40 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:40:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:40:40 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:40:40 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.085 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Preparing to wait for external event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.087 254096 DEBUG nova.virt.libvirt.vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-849851407',display_name='tempest-InstanceActionsNegativeTestJSON-server-849851407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-849851407',id=59,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ea58a9bae9c474bb9f0b9c821689054',ramdisk_id='',reservation_id='r-umqrt77x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-183308570
6',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1833085706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:29Z,user_data=None,user_id='509d158fe3f34e219f96739bb51bd6d9',uuid=32b30534-761a-439a-85e5-4e2fe8f507df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:40:40
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.087 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converting VIF {"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr']
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.089 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.090 254096 DEBUG os_vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.091 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.092 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.095 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9691504b-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.096 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9691504b-42, col_values=(('external_ids', {'iface-id': '9691504b-429d-44e8-bdf5-7f223c5b0527', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:ec:f2', 'vm-uuid': '32b30534-761a-439a-85e5-4e2fe8f507df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:40 np0005535469 NetworkManager[48891]: <info>  [1764088840.0984] manager: (tap9691504b-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.104 254096 INFO os_vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42')#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.148 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.149 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.150 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] No VIF found with MAC fa:16:3e:f4:ec:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.151 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Using config drive#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.174 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.685 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Creating config drive at /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.691 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe472clc4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.727 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Successfully updated port: 341145f6-8319-4c2d-aa9d-9d7475a5e7eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.740 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.741 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquired lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.741 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.828 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe472clc4" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.851 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.855 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.915 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.917 254096 WARNING nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state None.#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.917 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-changed-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.917 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Refreshing instance network info cache due to event network-changed-341145f6-8319-4c2d-aa9d-9d7475a5e7eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.918 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:40 np0005535469 nova_compute[254092]: 2025-11-25 16:40:40.955 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.026 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.027 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deleting local config drive /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config because it was imported into RBD.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.068 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.070 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.070 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.070 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.071 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.072 254096 INFO nova.compute.manager [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Terminating instance#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.073 254096 DEBUG nova.compute.manager [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:40:41 np0005535469 kernel: tap9691504b-42: entered promiscuous mode
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.0813] manager: (tap9691504b-42): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00547|binding|INFO|Claiming lport 9691504b-429d-44e8-bdf5-7f223c5b0527 for this chassis.
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00548|binding|INFO|9691504b-429d-44e8-bdf5-7f223c5b0527: Claiming fa:16:3e:f4:ec:f2 10.100.0.11
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.107 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:ec:f2 10.100.0.11'], port_security=['fa:16:3e:f4:ec:f2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32b30534-761a-439a-85e5-4e2fe8f507df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73476371-a7cf-4563-aeaf-a32b30db040e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ea58a9bae9c474bb9f0b9c821689054', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a64acc35-5bf2-40e8-88f9-5321cc37448d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94966a1-d8f7-4eaa-bd8c-6d046b12f822, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9691504b-429d-44e8-bdf5-7f223c5b0527) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.108 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9691504b-429d-44e8-bdf5-7f223c5b0527 in datapath 73476371-a7cf-4563-aeaf-a32b30db040e bound to our chassis#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.109 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 73476371-a7cf-4563-aeaf-a32b30db040e#033[00m
Nov 25 11:40:41 np0005535469 systemd-udevd[320880]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:41 np0005535469 systemd-machined[216343]: New machine qemu-69-instance-0000003b.
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3aa91dd-a66e-4e50-ac3a-3d0b85708f68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.122 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap73476371-a1 in ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.123 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap73476371-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.123 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee24e5ca-9422-4df2-8a26-811f46b7b1a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.123 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b71819a3-5b1c-4c30-bc71-23a5a5e25846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.1342] device (tap9691504b-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.1353] device (tap9691504b-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:40:41 np0005535469 systemd[1]: Started Virtual Machine qemu-69-instance-0000003b.
Nov 25 11:40:41 np0005535469 kernel: tapd79fd017-c7 (unregistering): left promiscuous mode
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.140 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[91a7c9be-147d-4f82-a4a0-d06bc2b7dbca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.1468] device (tapd79fd017-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.169 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[691e9791-311b-44dd-b9d5-58f224e72566]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00549|binding|INFO|Releasing lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c from this chassis (sb_readonly=0)
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00550|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c down in Southbound
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00551|binding|INFO|Removing iface tapd79fd017-c7 ovn-installed in OVS
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.186 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:41 np0005535469 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 25 11:40:41 np0005535469 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003a.scope: Consumed 3.881s CPU time.
Nov 25 11:40:41 np0005535469 systemd-machined[216343]: Machine qemu-68-instance-0000003a terminated.
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.197 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[14ed5e64-bd0e-4278-99ed-70fad21cecb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00552|binding|INFO|Setting lport 9691504b-429d-44e8-bdf5-7f223c5b0527 up in Southbound
Nov 25 11:40:41 np0005535469 systemd-udevd[320884]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00553|binding|INFO|Setting lport 9691504b-429d-44e8-bdf5-7f223c5b0527 ovn-installed in OVS
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.2053] manager: (tap73476371-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60caa7ca-ba3d-4e2e-beb3-f4ff43ba283f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.241 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[641883ae-e6c4-4651-a53e-f8ca1184a8fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.244 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fad5fae0-4430-4c81-86e0-960f117d8e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.254 254096 DEBUG nova.network.neutron [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updated VIF entry in instance network info cache for port 9691504b-429d-44e8-bdf5-7f223c5b0527. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.255 254096 DEBUG nova.network.neutron [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updating instance_info_cache with network_info: [{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.2679] device (tap73476371-a0): carrier: link connected
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.269 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.269 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.271 254096 WARNING nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state None.#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.275 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[480415de-363b-45fa-90be-046e1e69df86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.2951] manager: (tapd79fd017-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1e1e74-ae4a-4b91-bdb2-a8db95e5247b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73476371-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:5f:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529885, 'reachable_time': 20577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320920, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.310 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance destroyed successfully.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.311 254096 DEBUG nova.objects.instance [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.315 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3a4902-486a-4f85-884a-2a2f4f0e2fa1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:5f9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529885, 'tstamp': 529885}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320925, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.328 254096 DEBUG nova.virt.libvirt.vif [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:40:37Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.329 254096 DEBUG nova.network.os_vif_util [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.330 254096 DEBUG nova.network.os_vif_util [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.331 254096 DEBUG os_vif [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.334 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd79fd017-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e16f390-59b6-49ee-9e63-4d88201996c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73476371-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:5f:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529885, 'reachable_time': 20577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320931, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.340 254096 INFO os_vif [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.364 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60b5b1cc-8514-4ddd-a40d-18f865adb9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.435 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e102ad28-6e3d-47f3-ae72-96bf34c29c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.436 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73476371-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.436 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.436 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73476371-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 NetworkManager[48891]: <info>  [1764088841.4389] manager: (tap73476371-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 25 11:40:41 np0005535469 kernel: tap73476371-a0: entered promiscuous mode
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.442 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap73476371-a0, col_values=(('external_ids', {'iface-id': '4f2051e7-8850-4104-b92b-9573772689cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:41Z|00554|binding|INFO|Releasing lport 4f2051e7-8850-4104-b92b-9573772689cb from this chassis (sb_readonly=0)
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.467 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/73476371-a7cf-4563-aeaf-a32b30db040e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/73476371-a7cf-4563-aeaf-a32b30db040e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.468 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e906d93a-d114-4836-b6cf-44496339724a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.469 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-73476371-a7cf-4563-aeaf-a32b30db040e
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/73476371-a7cf-4563-aeaf-a32b30db040e.pid.haproxy
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 73476371-a7cf-4563-aeaf-a32b30db040e
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:40:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.469 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'env', 'PROCESS_TAG=haproxy-73476371-a7cf-4563-aeaf-a32b30db040e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/73476371-a7cf-4563-aeaf-a32b30db040e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:40:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 180 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.0 MiB/s wr, 237 op/s
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.543 254096 DEBUG nova.compute.manager [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.544 254096 DEBUG oslo_concurrency.lockutils [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.544 254096 DEBUG oslo_concurrency.lockutils [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.545 254096 DEBUG oslo_concurrency.lockutils [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.545 254096 DEBUG nova.compute.manager [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Processing event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.565 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088841.5648766, 32b30534-761a-439a-85e5-4e2fe8f507df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.566 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Started (Lifecycle Event)#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.568 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.571 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.574 254096 INFO nova.virt.libvirt.driver [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance spawned successfully.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.574 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.588 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.594 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.598 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.599 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.599 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.600 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.600 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.601 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.626 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.626 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088841.5649772, 32b30534-761a-439a-85e5-4e2fe8f507df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.627 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.648 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.650 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088841.5701854, 32b30534-761a-439a-85e5-4e2fe8f507df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.650 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.658 254096 INFO nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 11.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.658 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.681 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.684 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.711 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.721 254096 INFO nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 13.07 seconds to build instance.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.736 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:41 np0005535469 podman[321023]: 2025-11-25 16:40:41.85631541 +0000 UTC m=+0.050809959 container create 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.885 254096 INFO nova.virt.libvirt.driver [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting instance files /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.887 254096 INFO nova.virt.libvirt.driver [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deletion of /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del complete#033[00m
Nov 25 11:40:41 np0005535469 systemd[1]: Started libpod-conmon-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27.scope.
Nov 25 11:40:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:41 np0005535469 podman[321023]: 2025-11-25 16:40:41.833108721 +0000 UTC m=+0.027603300 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:40:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438c9251a188e84b6838f46344ecd1bfc883a1c4248292af4a79de88d209154/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.939 254096 INFO nova.compute.manager [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.940 254096 DEBUG oslo.service.loopingcall [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.940 254096 DEBUG nova.compute.manager [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:40:41 np0005535469 nova_compute[254092]: 2025-11-25 16:40:41.941 254096 DEBUG nova.network.neutron [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:40:41 np0005535469 podman[321023]: 2025-11-25 16:40:41.944819011 +0000 UTC m=+0.139313590 container init 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:40:41 np0005535469 podman[321023]: 2025-11-25 16:40:41.952324825 +0000 UTC m=+0.146819384 container start 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 11:40:41 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : New worker (321045) forked
Nov 25 11:40:41 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : Loading success.
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.050 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.052 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.053 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c55713bf-b565-4b3a-a5f2-42d2d9a9c741]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.054 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore#033[00m
Nov 25 11:40:42 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : haproxy version is 2.8.14-c23fe91
Nov 25 11:40:42 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : path to executable is /usr/sbin/haproxy
Nov 25 11:40:42 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [WARNING]  (320727) : Exiting Master process...
Nov 25 11:40:42 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [ALERT]    (320727) : Current worker (320735) exited with code 143 (Terminated)
Nov 25 11:40:42 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [WARNING]  (320727) : All workers exited. Exiting... (0)
Nov 25 11:40:42 np0005535469 systemd[1]: libpod-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20.scope: Deactivated successfully.
Nov 25 11:40:42 np0005535469 podman[321072]: 2025-11-25 16:40:42.228022185 +0000 UTC m=+0.083269000 container died a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:40:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20-userdata-shm.mount: Deactivated successfully.
Nov 25 11:40:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c090a3c2d63b20744daa816dd216a1d72f1b7b16b76ad011f3f5fdfd906ac44b-merged.mount: Deactivated successfully.
Nov 25 11:40:42 np0005535469 podman[321072]: 2025-11-25 16:40:42.581139165 +0000 UTC m=+0.436385960 container cleanup a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:40:42 np0005535469 systemd[1]: libpod-conmon-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20.scope: Deactivated successfully.
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.638 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updating instance_info_cache with network_info: [{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.703 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Releasing lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.704 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance network_info: |[{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.704 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.705 254096 DEBUG nova.network.neutron [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Refreshing network info cache for port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.708 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start _get_guest_xml network_info=[{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.713 254096 WARNING nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.719 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.720 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.729 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.730 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.731 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.731 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.732 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.733 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.733 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.734 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:40:42 np0005535469 podman[321098]: 2025-11-25 16:40:42.734366531 +0000 UTC m=+0.132151396 container remove a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.735 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.735 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.736 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.736 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.736 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.737 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.740 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[82137892-5adf-44eb-b9b1-9c982bfeea41]: (4, ('Tue Nov 25 04:40:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20)\na8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20\nTue Nov 25 04:40:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20)\na8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b467f97-fc4b-4d67-9368-45f070caf66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.743 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:42 np0005535469 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4279717f-c7ac-4d66-92f5-f282a2c66eb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 nova_compute[254092]: 2025-11-25 16:40:42.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.777 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b65acb-f292-47af-824e-d81259973119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.778 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[518d9455-27d3-4d78-a16b-146f55ceb935]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e988a641-f4e3-46d5-a861-8c03d907fbef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529484, 'reachable_time': 21588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321115, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:42 np0005535469 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.802 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:40:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.802 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[cede02c2-d52d-422a-8efe-394ea929e7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.125 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.126 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.126 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.127 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.127 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.128 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.128 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.128 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.129 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.129 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.129 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.130 254096 WARNING nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356384302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.215 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.236 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.241 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.272 254096 DEBUG nova.network.neutron [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.289 254096 INFO nova.compute.manager [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 1.35 seconds to deallocate network for instance.#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.344 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.345 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.444 254096 DEBUG oslo_concurrency.processutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.488 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.488 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.517 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:40:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 180 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.604 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.624 254096 DEBUG nova.compute.manager [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.625 254096 DEBUG oslo_concurrency.lockutils [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.626 254096 DEBUG oslo_concurrency.lockutils [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.627 254096 DEBUG oslo_concurrency.lockutils [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.628 254096 DEBUG nova.compute.manager [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] No waiting events found dispatching network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.629 254096 WARNING nova.compute.manager [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received unexpected event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:40:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193363460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.690 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.694 254096 DEBUG nova.virt.libvirt.vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-2012655337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-2012655337',id=60,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a6a1f7b5bb9482d85239a3b39051837',ramdisk_id='',reservation_id='r-7kghzncd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-694375426',owner_user_name='tempest-InstanceActionsV221TestJSON-694375426-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:35Z,user_data=None,user_id='8a304e40f25749e49b171b1db4828ff1',uuid=bd217c57-20a2-41c4-a969-7a4d94f0c7ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.695 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converting VIF {"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.698 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.702 254096 DEBUG nova.objects.instance [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd217c57-20a2-41c4-a969-7a4d94f0c7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.728 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <uuid>bd217c57-20a2-41c4-a969-7a4d94f0c7ce</uuid>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <name>instance-0000003c</name>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-2012655337</nova:name>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:40:42</nova:creationTime>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:user uuid="8a304e40f25749e49b171b1db4828ff1">tempest-InstanceActionsV221TestJSON-694375426-project-member</nova:user>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:project uuid="8a6a1f7b5bb9482d85239a3b39051837">tempest-InstanceActionsV221TestJSON-694375426</nova:project>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <nova:port uuid="341145f6-8319-4c2d-aa9d-9d7475a5e7eb">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <entry name="serial">bd217c57-20a2-41c4-a969-7a4d94f0c7ce</entry>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <entry name="uuid">bd217c57-20a2-41c4-a969-7a4d94f0c7ce</entry>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:cc:6c:f3"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <target dev="tap341145f6-83"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/console.log" append="off"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:40:43 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:40:43 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:40:43 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:40:43 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.731 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Preparing to wait for external event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.731 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.732 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.732 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.735 254096 DEBUG nova.virt.libvirt.vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-2012655337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-2012655337',id=60,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a6a1f7b5bb9482d85239a3b39051837',ramdisk_id='',reservation_id='r-7kghzncd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-694375426',owner_user_name='tempest-InstanceActionsV221TestJSON-694375426-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:35Z,user_data=None,user_id='8a304e40f25749e49b171b1db4828ff1',uuid=bd217c57-20a2-41c4-a969-7a4d94f0c7ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.736 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converting VIF {"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.738 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.739 254096 DEBUG os_vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.742 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.744 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.750 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap341145f6-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.751 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap341145f6-83, col_values=(('external_ids', {'iface-id': '341145f6-8319-4c2d-aa9d-9d7475a5e7eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:6c:f3', 'vm-uuid': 'bd217c57-20a2-41c4-a969-7a4d94f0c7ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:43 np0005535469 NetworkManager[48891]: <info>  [1764088843.7558] manager: (tap341145f6-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.764 254096 INFO os_vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83')#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.817 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.818 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.819 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] No VIF found with MAC fa:16:3e:cc:6c:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.820 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Using config drive#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.857 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026401179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.896 254096 DEBUG oslo_concurrency.processutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.902 254096 DEBUG nova.compute.provider_tree [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.915 254096 DEBUG nova.scheduler.client.report [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.935 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.937 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.942 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.942 254096 INFO nova.compute.claims [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:40:43 np0005535469 nova_compute[254092]: 2025-11-25 16:40:43.979 254096 INFO nova.scheduler.client.report [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Deleted allocations for instance d1ceaafd-59a6-45b1-833d-eb2a76e789be#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.037 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.083 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.146 254096 DEBUG nova.network.neutron [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updated VIF entry in instance network info cache for port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.146 254096 DEBUG nova.network.neutron [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updating instance_info_cache with network_info: [{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.159 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.268 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Creating config drive at /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.274 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4tpxgaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.419 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4tpxgaj" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.439 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.442 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253537573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.547 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.556 254096 DEBUG nova.compute.provider_tree [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.572 254096 DEBUG nova.scheduler.client.report [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.584 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.585 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deleting local config drive /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config because it was imported into RBD.#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.611 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.612 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.614 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.614 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.614 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.615 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.615 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.616 254096 INFO nova.compute.manager [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Terminating instance#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.617 254096 DEBUG nova.compute.manager [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:40:44 np0005535469 kernel: tap341145f6-83: entered promiscuous mode
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.6297] manager: (tap341145f6-83): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00555|binding|INFO|Claiming lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb for this chassis.
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00556|binding|INFO|341145f6-8319-4c2d-aa9d-9d7475a5e7eb: Claiming fa:16:3e:cc:6c:f3 10.100.0.10
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.651 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6c:f3 10.100.0.10'], port_security=['fa:16:3e:cc:6c:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bd217c57-20a2-41c4-a969-7a4d94f0c7ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd021dd-a32c-4996-b105-78f4369f31fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a6a1f7b5bb9482d85239a3b39051837', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06e61d98-1ba5-4078-aa67-4e1101d6ea8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0771bd5-9169-46dc-85a6-0c2b379eeaa2, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=341145f6-8319-4c2d-aa9d-9d7475a5e7eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.652 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb in datapath 4fd021dd-a32c-4996-b105-78f4369f31fc bound to our chassis#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.653 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd021dd-a32c-4996-b105-78f4369f31fc#033[00m
Nov 25 11:40:44 np0005535469 systemd-udevd[321295]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:44 np0005535469 systemd-machined[216343]: New machine qemu-70-instance-0000003c.
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.664 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1d5240-00d4-48b3-9578-3adfe742bbe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.665 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fd021dd-a1 in ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.667 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.668 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.669 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fd021dd-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.669 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39973123-1d8d-47d6-89a7-33da9b55ef25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.670 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2e5b41-151b-429b-9619-e05a15ad3fde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 kernel: tap9691504b-42 (unregistering): left promiscuous mode
Nov 25 11:40:44 np0005535469 systemd[1]: Started Virtual Machine qemu-70-instance-0000003c.
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.6778] device (tap9691504b-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.6786] device (tap341145f6-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.6791] device (tap341145f6-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.681 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6feb9c-0b69-4059-ab51-70a1b3707514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.687 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6319db-d7fe-486d-9063-d5f4cec6c87e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.713 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00557|binding|INFO|Releasing lport 9691504b-429d-44e8-bdf5-7f223c5b0527 from this chassis (sb_readonly=0)
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00558|binding|INFO|Setting lport 9691504b-429d-44e8-bdf5-7f223c5b0527 down in Southbound
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00559|binding|INFO|Removing iface tap9691504b-42 ovn-installed in OVS
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.736 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:ec:f2 10.100.0.11'], port_security=['fa:16:3e:f4:ec:f2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32b30534-761a-439a-85e5-4e2fe8f507df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73476371-a7cf-4563-aeaf-a32b30db040e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ea58a9bae9c474bb9f0b9c821689054', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a64acc35-5bf2-40e8-88f9-5321cc37448d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94966a1-d8f7-4eaa-bd8c-6d046b12f822, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9691504b-429d-44e8-bdf5-7f223c5b0527) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.739 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfcac93-d3cc-4625-a586-2b64ffe34320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00560|binding|INFO|Setting lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb up in Southbound
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00561|binding|INFO|Setting lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb ovn-installed in OVS
Nov 25 11:40:44 np0005535469 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 25 11:40:44 np0005535469 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003b.scope: Consumed 3.442s CPU time.
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[749a39c2-c375-456e-85ea-50bd4f76d47c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.7487] manager: (tap4fd021dd-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Nov 25 11:40:44 np0005535469 systemd-udevd[321298]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:44 np0005535469 systemd-machined[216343]: Machine qemu-69-instance-0000003b terminated.
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.778 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9b4402-3f0a-4fd6-ba1b-ca6fa26aa2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.781 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[71833f51-72fd-4913-b541-580357030091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.8020] device (tap4fd021dd-a0): carrier: link connected
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.808 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e390fc4d-67e1-47dc-864a-100d7eadd9cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.821 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.823 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.823 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating image(s)#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb8b512-559f-4d59-a3bf-af3b3a1e885a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd021dd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:61:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530239, 'reachable_time': 31722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321333, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.8350] manager: (tap9691504b-42): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.841 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b3d64f-1b3e-4c17-b167-25ff994e5708]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:6112'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530239, 'tstamp': 530239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321337, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.850 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.861 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be1a7744-fc8e-4eb1-b82c-65b810782765]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd021dd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:61:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530239, 'reachable_time': 31722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321359, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.881 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.889 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8593a7-0591-4d86-a55b-c42fcb72e9f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.907 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.911 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.945 254096 DEBUG nova.policy [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c1fd56de7cd4f5c9b1d85ffe8545c90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48b5e09a-5251-48c8-8955-c4fc5a6fa77b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.947 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd021dd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.948 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.948 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd021dd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:44 np0005535469 NetworkManager[48891]: <info>  [1764088844.9508] manager: (tap4fd021dd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 kernel: tap4fd021dd-a0: entered promiscuous mode
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.958 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd021dd-a0, col_values=(('external_ids', {'iface-id': 'a9d87038-3e5b-43cb-8024-6dad7e852af0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:44Z|00562|binding|INFO|Releasing lport a9d87038-3e5b-43cb-8024-6dad7e852af0 from this chassis (sb_readonly=0)
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.961 254096 INFO nova.virt.libvirt.driver [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance destroyed successfully.#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.961 254096 DEBUG nova.objects.instance [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lazy-loading 'resources' on Instance uuid 32b30534-761a-439a-85e5-4e2fe8f507df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.982 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fd021dd-a32c-4996-b105-78f4369f31fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fd021dd-a32c-4996-b105-78f4369f31fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[143a875e-1f55-45cf-8c53-17cfb22b59e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.983 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-4fd021dd-a32c-4996-b105-78f4369f31fc
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/4fd021dd-a32c-4996-b105-78f4369f31fc.pid.haproxy
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 4fd021dd-a32c-4996-b105-78f4369f31fc
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:40:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.985 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'env', 'PROCESS_TAG=haproxy-4fd021dd-a32c-4996-b105-78f4369f31fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4fd021dd-a32c-4996-b105-78f4369f31fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.985 254096 DEBUG nova.virt.libvirt.vif [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-849851407',display_name='tempest-InstanceActionsNegativeTestJSON-server-849851407',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-849851407',id=59,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ea58a9bae9c474bb9f0b9c821689054',ramdisk_id='',reservation_id='r-umqrt77x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1833085706',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1833085706-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:40:41Z,user_data=None,user_id='509d158fe3f34e219f96739bb51bd6d9',uuid=32b30534-761a-439a-85e5-4e2fe8f507df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.985 254096 DEBUG nova.network.os_vif_util [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converting VIF {"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.986 254096 DEBUG nova.network.os_vif_util [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.987 254096 DEBUG os_vif [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.989 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9691504b-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:44 np0005535469 nova_compute[254092]: 2025-11-25 16:40:44.993 254096 INFO os_vif [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42')#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.012 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.012 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.037 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.041 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.074 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088845.0274746, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.075 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Started (Lifecycle Event)#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.100 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.104 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088845.0275977, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.105 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.129 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.133 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.162 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.228 254096 DEBUG nova.compute.manager [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-deleted-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.229 254096 DEBUG nova.compute.manager [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.229 254096 DEBUG oslo_concurrency.lockutils [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.229 254096 DEBUG oslo_concurrency.lockutils [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.230 254096 DEBUG oslo_concurrency.lockutils [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.230 254096 DEBUG nova.compute.manager [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Processing event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.231 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.245 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.246 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088845.245339, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.247 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.252 254096 INFO nova.virt.libvirt.driver [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance spawned successfully.#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.252 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.272 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.272 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.273 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.273 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.273 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.274 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.277 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.281 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.299 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.348 254096 INFO nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 9.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.349 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:45 np0005535469 podman[321533]: 2025-11-25 16:40:45.405033536 +0000 UTC m=+0.071007429 container create 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.413 254096 INFO nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 10.74 seconds to build instance.#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.434 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:45 np0005535469 systemd[1]: Started libpod-conmon-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62.scope.
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.448 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:45 np0005535469 podman[321533]: 2025-11-25 16:40:45.357877766 +0000 UTC m=+0.023851689 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:40:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a24bbb3b2d46df9d259f5d274fdace0aea75909df6f8ee405b5020d775c516/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:45 np0005535469 podman[321533]: 2025-11-25 16:40:45.498949103 +0000 UTC m=+0.164923026 container init 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:40:45 np0005535469 podman[321533]: 2025-11-25 16:40:45.50658254 +0000 UTC m=+0.172556433 container start 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:40:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 156 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.536 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : New worker (321591) forked
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : Loading success.
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.574 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9691504b-429d-44e8-bdf5-7f223c5b0527 in datapath 73476371-a7cf-4563-aeaf-a32b30db040e unbound from our chassis#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.577 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 73476371-a7cf-4563-aeaf-a32b30db040e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.578 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aeec7b8e-e870-4acf-9ffe-387e54275a21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e namespace which is not needed anymore#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.644 254096 DEBUG nova.objects.instance [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.666 254096 INFO nova.virt.libvirt.driver [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deleting instance files /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df_del#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.667 254096 INFO nova.virt.libvirt.driver [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deletion of /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df_del complete#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.671 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.671 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Ensure instance console log exists: /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.672 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.673 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.674 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : haproxy version is 2.8.14-c23fe91
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : path to executable is /usr/sbin/haproxy
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [WARNING]  (321043) : Exiting Master process...
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [ALERT]    (321043) : Current worker (321045) exited with code 143 (Terminated)
Nov 25 11:40:45 np0005535469 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [WARNING]  (321043) : All workers exited. Exiting... (0)
Nov 25 11:40:45 np0005535469 systemd[1]: libpod-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27.scope: Deactivated successfully.
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.733 254096 INFO nova.compute.manager [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.733 254096 DEBUG oslo.service.loopingcall [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.734 254096 DEBUG nova.compute.manager [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.734 254096 DEBUG nova.network.neutron [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.738 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-unplugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.738 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] No waiting events found dispatching network-vif-unplugged-9691504b-429d-44e8-bdf5-7f223c5b0527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-unplugged-9691504b-429d-44e8-bdf5-7f223c5b0527 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] No waiting events found dispatching network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 WARNING nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received unexpected event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:40:45 np0005535469 podman[321653]: 2025-11-25 16:40:45.742708966 +0000 UTC m=+0.048912848 container died 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:40:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27-userdata-shm.mount: Deactivated successfully.
Nov 25 11:40:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a438c9251a188e84b6838f46344ecd1bfc883a1c4248292af4a79de88d209154-merged.mount: Deactivated successfully.
Nov 25 11:40:45 np0005535469 podman[321653]: 2025-11-25 16:40:45.78671126 +0000 UTC m=+0.092915122 container cleanup 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:40:45 np0005535469 systemd[1]: libpod-conmon-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27.scope: Deactivated successfully.
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.807 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Successfully created port: 60897ca4-9177-413c-b0f0-808dbc7d34dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:40:45 np0005535469 podman[321681]: 2025-11-25 16:40:45.857668025 +0000 UTC m=+0.047090689 container remove 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.868 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f51c9b3-5178-41d5-8334-b7443cae5acf]: (4, ('Tue Nov 25 04:40:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e (5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27)\n5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27\nTue Nov 25 04:40:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e (5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27)\n5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.870 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[718b0d64-8572-43cd-82e4-7e7ff9e44be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.871 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73476371-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:45 np0005535469 kernel: tap73476371-a0: left promiscuous mode
Nov 25 11:40:45 np0005535469 nova_compute[254092]: 2025-11-25 16:40:45.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.892 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[837ce464-687c-4e32-99ab-033198b23cca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.906 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cee1b144-e3fd-42fc-b260-c0054e7637ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.907 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f74c302b-821e-4191-8bd0-a1442f0ad3a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.924 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a28bf8-ca95-458b-beea-263529f6ce3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529878, 'reachable_time': 25319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321697, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:45 np0005535469 systemd[1]: run-netns-ovnmeta\x2d73476371\x2da7cf\x2d4563\x2daeaf\x2da32b30db040e.mount: Deactivated successfully.
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.929 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:40:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.929 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b734c298-66e7-484b-b7a7-106f4ba25244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.617 254096 DEBUG nova.network.neutron [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.630 254096 INFO nova.compute.manager [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 0.90 seconds to deallocate network for instance.#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.671 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.671 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.686 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Successfully updated port: 60897ca4-9177-413c-b0f0-808dbc7d34dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.696 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.696 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquired lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.696 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.748 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.750 254096 INFO nova.compute.manager [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Terminating instance#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.751 254096 DEBUG nova.compute.manager [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.757 254096 DEBUG oslo_concurrency.processutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:46 np0005535469 kernel: tap341145f6-83 (unregistering): left promiscuous mode
Nov 25 11:40:46 np0005535469 NetworkManager[48891]: <info>  [1764088846.7947] device (tap341145f6-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:40:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:46Z|00563|binding|INFO|Releasing lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb from this chassis (sb_readonly=0)
Nov 25 11:40:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:46Z|00564|binding|INFO|Setting lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb down in Southbound
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:46Z|00565|binding|INFO|Removing iface tap341145f6-83 ovn-installed in OVS
Nov 25 11:40:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.811 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6c:f3 10.100.0.10'], port_security=['fa:16:3e:cc:6c:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bd217c57-20a2-41c4-a969-7a4d94f0c7ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd021dd-a32c-4996-b105-78f4369f31fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a6a1f7b5bb9482d85239a3b39051837', 'neutron:revision_number': '4', 'neutron:security_group_ids': '06e61d98-1ba5-4078-aa67-4e1101d6ea8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0771bd5-9169-46dc-85a6-0c2b379eeaa2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=341145f6-8319-4c2d-aa9d-9d7475a5e7eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.812 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb in datapath 4fd021dd-a32c-4996-b105-78f4369f31fc unbound from our chassis#033[00m
Nov 25 11:40:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.813 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4fd021dd-a32c-4996-b105-78f4369f31fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:40:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15283d85-3895-4375-89a0-630a68d31ec6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.814 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc namespace which is not needed anymore#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:46 np0005535469 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 25 11:40:46 np0005535469 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003c.scope: Consumed 1.886s CPU time.
Nov 25 11:40:46 np0005535469 systemd-machined[216343]: Machine qemu-70-instance-0000003c terminated.
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.898 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:40:46 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : haproxy version is 2.8.14-c23fe91
Nov 25 11:40:46 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : path to executable is /usr/sbin/haproxy
Nov 25 11:40:46 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [WARNING]  (321586) : Exiting Master process...
Nov 25 11:40:46 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [WARNING]  (321586) : Exiting Master process...
Nov 25 11:40:46 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [ALERT]    (321586) : Current worker (321591) exited with code 143 (Terminated)
Nov 25 11:40:46 np0005535469 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [WARNING]  (321586) : All workers exited. Exiting... (0)
Nov 25 11:40:46 np0005535469 systemd[1]: libpod-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62.scope: Deactivated successfully.
Nov 25 11:40:46 np0005535469 podman[321726]: 2025-11-25 16:40:46.941098878 +0000 UTC m=+0.041423185 container died 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:40:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62-userdata-shm.mount: Deactivated successfully.
Nov 25 11:40:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e9a24bbb3b2d46df9d259f5d274fdace0aea75909df6f8ee405b5020d775c516-merged.mount: Deactivated successfully.
Nov 25 11:40:46 np0005535469 NetworkManager[48891]: <info>  [1764088846.9703] manager: (tap341145f6-83): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:46 np0005535469 podman[321726]: 2025-11-25 16:40:46.980771594 +0000 UTC m=+0.081095901 container cleanup 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.994 254096 INFO nova.virt.libvirt.driver [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance destroyed successfully.#033[00m
Nov 25 11:40:46 np0005535469 nova_compute[254092]: 2025-11-25 16:40:46.995 254096 DEBUG nova.objects.instance [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lazy-loading 'resources' on Instance uuid bd217c57-20a2-41c4-a969-7a4d94f0c7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:47 np0005535469 systemd[1]: libpod-conmon-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62.scope: Deactivated successfully.
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.004 254096 DEBUG nova.virt.libvirt.vif [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-2012655337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-2012655337',id=60,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a6a1f7b5bb9482d85239a3b39051837',ramdisk_id='',reservation_id='r-7kghzncd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-InstanceActionsV221TestJSON-694375426',owner_user_name='tempest-InstanceActionsV221TestJSON-694375426-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:40:45Z,user_data=None,user_id='8a304e40f25749e49b171b1db4828ff1',uuid=bd217c57-20a2-41c4-a969-7a4d94f0c7ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.004 254096 DEBUG nova.network.os_vif_util [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converting VIF {"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.005 254096 DEBUG nova.network.os_vif_util [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.005 254096 DEBUG os_vif [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.007 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap341145f6-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.010 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.013 254096 INFO os_vif [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83')#033[00m
Nov 25 11:40:47 np0005535469 podman[321772]: 2025-11-25 16:40:47.077348385 +0000 UTC m=+0.071495871 container remove 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.084 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5214f364-8d8b-4b8c-84dd-fbfa04f1d29f]: (4, ('Tue Nov 25 04:40:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc (0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62)\n0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62\nTue Nov 25 04:40:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc (0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62)\n0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd848a5c-435e-40e6-838c-7941fe6c4fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.087 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd021dd-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:47 np0005535469 kernel: tap4fd021dd-a0: left promiscuous mode
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.106 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4dcd9d3-34a8-412e-9da3-659a712cd982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e473b3f5-52a5-476b-8068-82c08d643f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.122 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11afd887-6413-4f3c-abf5-85d78f16cb29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ecf6bb-0e2a-43d7-a32d-eabeefab0d64]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530232, 'reachable_time': 20508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321805, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 systemd[1]: run-netns-ovnmeta\x2d4fd021dd\x2da32c\x2d4996\x2db105\x2d78f4369f31fc.mount: Deactivated successfully.
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.142 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:40:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.142 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[81c22bb6-eedb-4f1e-b67e-f13bdc6a8475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918936432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.230 254096 DEBUG oslo_concurrency.processutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.237 254096 DEBUG nova.compute.provider_tree [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.251 254096 DEBUG nova.scheduler.client.report [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.273 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.309 254096 INFO nova.scheduler.client.report [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Deleted allocations for instance 32b30534-761a-439a-85e5-4e2fe8f507df#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.382 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.400 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.401 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.401 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.402 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.403 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] No waiting events found dispatching network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.403 254096 WARNING nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received unexpected event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.403 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-deleted-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-unplugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.405 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] No waiting events found dispatching network-vif-unplugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.405 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-unplugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.496 254096 INFO nova.virt.libvirt.driver [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deleting instance files /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_del#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.496 254096 INFO nova.virt.libvirt.driver [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deletion of /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_del complete#033[00m
Nov 25 11:40:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 133 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 239 op/s
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.539 254096 INFO nova.compute.manager [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.540 254096 DEBUG oslo.service.loopingcall [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.540 254096 DEBUG nova.compute.manager [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.540 254096 DEBUG nova.network.neutron [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.845 254096 DEBUG nova.compute.manager [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-changed-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.845 254096 DEBUG nova.compute.manager [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Refreshing instance network info cache due to event network-changed-60897ca4-9177-413c-b0f0-808dbc7d34dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:40:47 np0005535469 nova_compute[254092]: 2025-11-25 16:40:47.845 254096 DEBUG oslo_concurrency.lockutils [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.021 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.036 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Releasing lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.037 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance network_info: |[{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.037 254096 DEBUG oslo_concurrency.lockutils [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.037 254096 DEBUG nova.network.neutron [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Refreshing network info cache for port 60897ca4-9177-413c-b0f0-808dbc7d34dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.040 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start _get_guest_xml network_info=[{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.044 254096 WARNING nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.049 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.049 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.055 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.055 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.056 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.056 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.056 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.061 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769453002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.492 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.513 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.516 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.896 254096 DEBUG nova.network.neutron [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.915 254096 INFO nova.compute.manager [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 25 11:40:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:40:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709853231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.958 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.959 254096 DEBUG nova.virt.libvirt.vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:44Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.959 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.960 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.961 254096 DEBUG nova.objects.instance [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.969 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.970 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.979 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <uuid>fef208e1-3706-4d03-8385-12418e9dc230</uuid>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <name>instance-0000003d</name>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1975310078</nova:name>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:40:48</nova:creationTime>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <nova:port uuid="60897ca4-9177-413c-b0f0-808dbc7d34dc">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <entry name="serial">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <entry name="uuid">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk.config">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:43:f9:27"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <target dev="tap60897ca4-91"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log" append="off"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:40:48 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:40:48 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:40:48 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:40:48 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.981 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Preparing to wait for external event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.981 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.982 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.982 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.983 254096 DEBUG nova.virt.libvirt.vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:44Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.983 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.984 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.984 254096 DEBUG os_vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.985 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.986 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.989 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60897ca4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.989 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60897ca4-91, col_values=(('external_ids', {'iface-id': '60897ca4-9177-413c-b0f0-808dbc7d34dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:f9:27', 'vm-uuid': 'fef208e1-3706-4d03-8385-12418e9dc230'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:48 np0005535469 NetworkManager[48891]: <info>  [1764088848.9919] manager: (tap60897ca4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:48 np0005535469 nova_compute[254092]: 2025-11-25 16:40:48.999 254096 INFO os_vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.035 254096 DEBUG oslo_concurrency.processutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.099 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.100 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.100 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:43:f9:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.100 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Using config drive#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.119 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.297 254096 DEBUG nova.network.neutron [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updated VIF entry in instance network info cache for port 60897ca4-9177-413c-b0f0-808dbc7d34dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.298 254096 DEBUG nova.network.neutron [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.311 254096 DEBUG oslo_concurrency.lockutils [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:40:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:40:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042085003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.488 254096 DEBUG nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.488 254096 DEBUG oslo_concurrency.lockutils [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.489 254096 DEBUG oslo_concurrency.lockutils [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.489 254096 DEBUG oslo_concurrency.lockutils [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.489 254096 DEBUG nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] No waiting events found dispatching network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.490 254096 WARNING nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received unexpected event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.490 254096 DEBUG nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-deleted-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.490 254096 DEBUG oslo_concurrency.processutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.495 254096 DEBUG nova.compute.provider_tree [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.507 254096 DEBUG nova.scheduler.client.report [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.528 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 133 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 229 op/s
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.560 254096 INFO nova.scheduler.client.report [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Deleted allocations for instance bd217c57-20a2-41c4-a969-7a4d94f0c7ce#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.584 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating config drive at /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.589 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb4cjicgz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.632 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.725 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb4cjicgz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.746 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.749 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.891 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.891 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting local config drive /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config because it was imported into RBD.#033[00m
Nov 25 11:40:49 np0005535469 kernel: tap60897ca4-91: entered promiscuous mode
Nov 25 11:40:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:49Z|00566|binding|INFO|Claiming lport 60897ca4-9177-413c-b0f0-808dbc7d34dc for this chassis.
Nov 25 11:40:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:49Z|00567|binding|INFO|60897ca4-9177-413c-b0f0-808dbc7d34dc: Claiming fa:16:3e:43:f9:27 10.100.0.13
Nov 25 11:40:49 np0005535469 NetworkManager[48891]: <info>  [1764088849.9424] manager: (tap60897ca4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.948 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.950 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.951 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9#033[00m
Nov 25 11:40:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:49Z|00568|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc ovn-installed in OVS
Nov 25 11:40:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:49Z|00569|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc up in Southbound
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[656aae32-eb43-4c55-bbb8-21dae40a426c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.967 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:49 np0005535469 nova_compute[254092]: 2025-11-25 16:40:49.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.969 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[69b26c1f-a9d1-4753-8ff1-b87ba208ff11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.972 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a931f1-1d66-40c0-ae92-ae849c54a839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:49 np0005535469 systemd-machined[216343]: New machine qemu-71-instance-0000003d.
Nov 25 11:40:49 np0005535469 systemd-udevd[321967]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:40:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.985 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5e2cf5-ba60-4d43-a092-96d8746904af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:49 np0005535469 NetworkManager[48891]: <info>  [1764088849.9904] device (tap60897ca4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:40:49 np0005535469 NetworkManager[48891]: <info>  [1764088849.9916] device (tap60897ca4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:40:49 np0005535469 systemd[1]: Started Virtual Machine qemu-71-instance-0000003d.
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb9993a-7a93-450e-929e-dec331f11331]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.040 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a5c2ba-5131-481d-8828-43a9e680e1c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[365660c4-1ba1-4b84-a201-d30ce4c21bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 NetworkManager[48891]: <info>  [1764088850.0457] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/248)
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.073 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ec808376-12b5-43bc-8b22-931d6ce5b9fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.077 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[90156b18-582a-448c-af6a-2b50fb8c1052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 NetworkManager[48891]: <info>  [1764088850.0960] device (tap62c0a8be-b0): carrier: link connected
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.104 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eae810f4-9762-44f9-bd8a-3e960f4451e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6b1fc5-4d30-4757-a68e-35374931e4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530768, 'reachable_time': 29192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321999, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a99f7d3-611f-4bf3-b53a-420bc57ac5df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530768, 'tstamp': 530768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322000, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[89e80b2e-dbd7-45a7-a0ef-4a87755276ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530768, 'reachable_time': 29192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322001, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG nova.compute.manager [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG oslo_concurrency.lockutils [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG oslo_concurrency.lockutils [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG oslo_concurrency.lockutils [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.159 254096 DEBUG nova.compute.manager [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Processing event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae90551-0e41-4cd0-b982-c1739282c4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.238 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26bedeb9-89f4-4583-a08e-da3047e55d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.240 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.240 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.240 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:50 np0005535469 NetworkManager[48891]: <info>  [1764088850.2430] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Nov 25 11:40:50 np0005535469 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.249 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:50Z|00570|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.265 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.266 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28ce38e6-71d9-43e1-bff9-b0455f1014fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.267 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:40:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.267 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.472 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088850.4715316, fef208e1-3706-4d03-8385-12418e9dc230 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.472 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Started (Lifecycle Event)#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.474 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.477 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.481 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance spawned successfully.#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.481 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.502 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.508 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.510 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.511 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.511 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.512 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.512 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.512 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.559 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.559 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088850.4717445, fef208e1-3706-4d03-8385-12418e9dc230 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.560 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.588 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.592 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088850.477283, fef208e1-3706-4d03-8385-12418e9dc230 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.592 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.599 254096 INFO nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 5.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.600 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:50 np0005535469 podman[322076]: 2025-11-25 16:40:50.623249383 +0000 UTC m=+0.053429971 container create f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.623 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.627 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:40:50 np0005535469 systemd[1]: Started libpod-conmon-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839.scope.
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.662 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:40:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.684 254096 INFO nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 7.11 seconds to build instance.#033[00m
Nov 25 11:40:50 np0005535469 podman[322076]: 2025-11-25 16:40:50.596889468 +0000 UTC m=+0.027070086 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:40:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41736214c61e8e784bfe20f83e87831dc58ca28b27d833d91e8e86513c66c372/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:40:50 np0005535469 podman[322076]: 2025-11-25 16:40:50.700999152 +0000 UTC m=+0.131179760 container init f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:40:50 np0005535469 nova_compute[254092]: 2025-11-25 16:40:50.700 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:50 np0005535469 podman[322076]: 2025-11-25 16:40:50.705991308 +0000 UTC m=+0.136171896 container start f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:40:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:50 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : New worker (322097) forked
Nov 25 11:40:50 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : Loading success.
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007549488832454256 of space, bias 1.0, pg target 0.22648466497362765 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:40:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 88 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 369 op/s
Nov 25 11:40:51 np0005535469 ovn_controller[153477]: 2025-11-25T16:40:51Z|00571|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 11:40:51 np0005535469 nova_compute[254092]: 2025-11-25 16:40:51.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:52 np0005535469 nova_compute[254092]: 2025-11-25 16:40:52.250 254096 DEBUG nova.compute.manager [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:40:52 np0005535469 nova_compute[254092]: 2025-11-25 16:40:52.250 254096 DEBUG oslo_concurrency.lockutils [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:40:52 np0005535469 nova_compute[254092]: 2025-11-25 16:40:52.250 254096 DEBUG oslo_concurrency.lockutils [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:40:52 np0005535469 nova_compute[254092]: 2025-11-25 16:40:52.251 254096 DEBUG oslo_concurrency.lockutils [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:40:52 np0005535469 nova_compute[254092]: 2025-11-25 16:40:52.251 254096 DEBUG nova.compute.manager [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:40:52 np0005535469 nova_compute[254092]: 2025-11-25 16:40:52.251 254096 WARNING nova.compute.manager [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state active and task_state None.#033[00m
Nov 25 11:40:53 np0005535469 nova_compute[254092]: 2025-11-25 16:40:53.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 88 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 260 op/s
Nov 25 11:40:54 np0005535469 nova_compute[254092]: 2025-11-25 16:40:54.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:40:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3373983916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:40:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:40:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3373983916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:40:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Nov 25 11:40:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:40:56 np0005535469 nova_compute[254092]: 2025-11-25 16:40:56.309 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088841.3079152, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:56 np0005535469 nova_compute[254092]: 2025-11-25 16:40:56.309 254096 INFO nova.compute.manager [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:40:56 np0005535469 nova_compute[254092]: 2025-11-25 16:40:56.335 254096 DEBUG nova.compute.manager [None req-691f60b8-290b-4390-845e-6ae545da3643 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:56 np0005535469 nova_compute[254092]: 2025-11-25 16:40:56.987 254096 INFO nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Rebuilding instance#033[00m
Nov 25 11:40:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.548 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.562 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.617 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_requests' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.627 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.639 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.657 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.665 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:40:57 np0005535469 nova_compute[254092]: 2025-11-25 16:40:57.668 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:40:58 np0005535469 nova_compute[254092]: 2025-11-25 16:40:58.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:59 np0005535469 nova_compute[254092]: 2025-11-25 16:40:59.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:40:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 207 op/s
Nov 25 11:40:59 np0005535469 nova_compute[254092]: 2025-11-25 16:40:59.944 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088844.8449953, 32b30534-761a-439a-85e5-4e2fe8f507df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:40:59 np0005535469 nova_compute[254092]: 2025-11-25 16:40:59.944 254096 INFO nova.compute.manager [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:40:59 np0005535469 nova_compute[254092]: 2025-11-25 16:40:59.964 254096 DEBUG nova.compute.manager [None req-5dcc45d8-ba3c-4fa3-aab7-fb410213162b - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 207 op/s
Nov 25 11:41:01 np0005535469 nova_compute[254092]: 2025-11-25 16:41:01.991 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088846.9888165, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:01 np0005535469 nova_compute[254092]: 2025-11-25 16:41:01.992 254096 INFO nova.compute.manager [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:41:02 np0005535469 nova_compute[254092]: 2025-11-25 16:41:02.009 254096 DEBUG nova.compute.manager [None req-5d329d7f-1a65-492e-af01-773778943f64 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:02Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:f9:27 10.100.0.13
Nov 25 11:41:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:02Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:f9:27 10.100.0.13
Nov 25 11:41:03 np0005535469 nova_compute[254092]: 2025-11-25 16:41:03.176 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:03 np0005535469 nova_compute[254092]: 2025-11-25 16:41:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:41:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.504 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.504 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.549 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.673 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.673 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.684 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.684 254096 INFO nova.compute.claims [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:41:04 np0005535469 nova_compute[254092]: 2025-11-25 16:41:04.814 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520643533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.265 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.272 254096 DEBUG nova.compute.provider_tree [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.290 254096 DEBUG nova.scheduler.client.report [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.317 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.318 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.384 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.385 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.414 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.440 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:41:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 111 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.564 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.566 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.566 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Creating image(s)
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.590 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.614 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.643 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.648 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.716 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.717 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.718 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.718 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.741 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:05 np0005535469 nova_compute[254092]: 2025-11-25 16:41:05.745 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.082 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.146 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] resizing rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.229 254096 DEBUG nova.objects.instance [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'migration_context' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.245 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.247 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Ensure instance console log exists: /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.248 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.248 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.248 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.476 254096 DEBUG nova.policy [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0d077039e0b4d9e8d5663768f40fa48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:41:06 np0005535469 nova_compute[254092]: 2025-11-25 16:41:06.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.225 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.226 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.242 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.324 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.325 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.335 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.335 254096 INFO nova.compute.claims [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.509 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:07 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:41:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.712 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 11:41:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894277031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.974 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.980 254096 DEBUG nova.compute.provider_tree [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:41:07 np0005535469 nova_compute[254092]: 2025-11-25 16:41:07.996 254096 DEBUG nova.scheduler.client.report [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.023 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.023 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.027 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.027 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.027 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.028 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.119 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.120 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.147 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.184 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.316 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.319 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.319 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Creating image(s)
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.344 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.377 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.406 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.411 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867797437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.477 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.478 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.478 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.479 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.498 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.501 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 dce3a591-9fb6-4495-a7fb-867af2de384f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.614 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.615 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:41:08 np0005535469 podman[322426]: 2025-11-25 16:41:08.639746842 +0000 UTC m=+0.064699916 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 11:41:08 np0005535469 podman[322418]: 2025-11-25 16:41:08.639842254 +0000 UTC m=+0.065076877 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 25 11:41:08 np0005535469 podman[322427]: 2025-11-25 16:41:08.669980091 +0000 UTC m=+0.090621759 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.794 254096 DEBUG nova.policy [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0d077039e0b4d9e8d5663768f40fa48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.827 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.829 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3988MB free_disk=59.94293212890625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.829 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.829 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.869 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 dce3a591-9fb6-4495-a7fb-867af2de384f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.925 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] resizing rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.981 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance fef208e1-3706-4d03-8385-12418e9dc230 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.981 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 013dc18e-57cd-4733-8e98-7d20e3b5c4db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.982 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance dce3a591-9fb6-4495-a7fb-867af2de384f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.982 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:41:08 np0005535469 nova_compute[254092]: 2025-11-25 16:41:08.982 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.119 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.182 254096 DEBUG nova.objects.instance [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'migration_context' on Instance uuid dce3a591-9fb6-4495-a7fb-867af2de384f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.212 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.212 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Ensure instance console log exists: /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.213 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.213 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.213 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077200639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.557 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.566 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.636 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.704 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:41:09 np0005535469 nova_compute[254092]: 2025-11-25 16:41:09.704 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:41:10 np0005535469 kernel: tap60897ca4-91 (unregistering): left promiscuous mode
Nov 25 11:41:10 np0005535469 NetworkManager[48891]: <info>  [1764088870.4542] device (tap60897ca4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:41:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:10Z|00572|binding|INFO|Releasing lport 60897ca4-9177-413c-b0f0-808dbc7d34dc from this chassis (sb_readonly=0)
Nov 25 11:41:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:10Z|00573|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc down in Southbound
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:10Z|00574|binding|INFO|Removing iface tap60897ca4-91 ovn-installed in OVS
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.494 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.496 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.497 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.498 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d577917-b312-4529-afbf-7985c9d138bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.498 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore#033[00m
Nov 25 11:41:10 np0005535469 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 25 11:41:10 np0005535469 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003d.scope: Consumed 12.890s CPU time.
Nov 25 11:41:10 np0005535469 systemd-machined[216343]: Machine qemu-71-instance-0000003d terminated.
Nov 25 11:41:10 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : haproxy version is 2.8.14-c23fe91
Nov 25 11:41:10 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : path to executable is /usr/sbin/haproxy
Nov 25 11:41:10 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [WARNING]  (322095) : Exiting Master process...
Nov 25 11:41:10 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [ALERT]    (322095) : Current worker (322097) exited with code 143 (Terminated)
Nov 25 11:41:10 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [WARNING]  (322095) : All workers exited. Exiting... (0)
Nov 25 11:41:10 np0005535469 systemd[1]: libpod-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839.scope: Deactivated successfully.
Nov 25 11:41:10 np0005535469 podman[322619]: 2025-11-25 16:41:10.621617548 +0000 UTC m=+0.042476673 container died f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:41:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839-userdata-shm.mount: Deactivated successfully.
Nov 25 11:41:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-41736214c61e8e784bfe20f83e87831dc58ca28b27d833d91e8e86513c66c372-merged.mount: Deactivated successfully.
Nov 25 11:41:10 np0005535469 podman[322619]: 2025-11-25 16:41:10.676942059 +0000 UTC m=+0.097801104 container cleanup f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:41:10 np0005535469 systemd[1]: libpod-conmon-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839.scope: Deactivated successfully.
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.704 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:41:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.726 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.733 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance destroyed successfully.#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.739 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance destroyed successfully.#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.740 254096 DEBUG nova.virt.libvirt.vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-projec
t-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:55Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.740 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.741 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.741 254096 DEBUG os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.743 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60897ca4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:10 np0005535469 podman[322654]: 2025-11-25 16:41:10.748087019 +0000 UTC m=+0.050290645 container remove f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.750 254096 INFO os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.753 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bea5f54d-59df-43c7-861d-7e841f54147a]: (4, ('Tue Nov 25 04:41:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839)\nf9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839\nTue Nov 25 04:41:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839)\nf9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.755 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[608a7085-1069-4e0e-9d79-76a867317cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.756 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:10 np0005535469 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:10 np0005535469 nova_compute[254092]: 2025-11-25 16:41:10.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.775 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9be7268-95cd-47c5-8b0e-955de14665cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.790 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[639d685b-a2bd-4c4e-b20e-111c4e7e6556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.791 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd24a085-cf68-40c4-84bb-1812705b3577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.805 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4172e0-5d74-4df1-b361-927f321c4bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530762, 'reachable_time': 38528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322692, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.807 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:41:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.807 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[31700902-6c8d-4502-be4a-691890b3e5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:10 np0005535469 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.122 254096 DEBUG nova.compute.manager [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-unplugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.122 254096 DEBUG oslo_concurrency.lockutils [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 DEBUG oslo_concurrency.lockutils [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 DEBUG oslo_concurrency.lockutils [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 DEBUG nova.compute.manager [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-unplugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 WARNING nova.compute.manager [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-unplugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state active and task_state rebuilding.#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.194 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Successfully created port: 5136afea-102e-46a1-8fdb-0af970c5af04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.235 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting instance files /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.236 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deletion of /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del complete#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.419 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.420 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating image(s)#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.448 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.473 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.498 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.502 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 213 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.560 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.561 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.580 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.595 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.596 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.597 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.597 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.625 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.630 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.698 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.699 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.709 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.709 254096 INFO nova.compute.claims [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:41:11 np0005535469 nova_compute[254092]: 2025-11-25 16:41:11.916 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.044 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.098 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.183 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.184 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Ensure instance console log exists: /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.184 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.185 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.185 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.187 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start _get_guest_xml network_info=[{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.191 254096 WARNING nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.197 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.198 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.204 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.204 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.205 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.205 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.205 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.208 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.208 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.234 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553204244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.393 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.400 254096 DEBUG nova.compute.provider_tree [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.416 254096 DEBUG nova.scheduler.client.report [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.539 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.539 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:41:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890891610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.675 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.711 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.718 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.749 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.749 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.834 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:41:12 np0005535469 nova_compute[254092]: 2025-11-25 16:41:12.913 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:41:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593126229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.153 254096 DEBUG nova.policy [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0d077039e0b4d9e8d5663768f40fa48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.158 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.159 254096 DEBUG nova.virt.libvirt.vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='temp
est-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:11Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.159 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.160 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.162 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <uuid>fef208e1-3706-4d03-8385-12418e9dc230</uuid>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <name>instance-0000003d</name>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1975310078</nova:name>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:12</nova:creationTime>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <nova:port uuid="60897ca4-9177-413c-b0f0-808dbc7d34dc">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <entry name="serial">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <entry name="uuid">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk.config">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:43:f9:27"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <target dev="tap60897ca4-91"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log" append="off"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:13 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:13 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:13 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:13 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.163 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Preparing to wait for external event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.164 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.166 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.166 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.167 254096 DEBUG nova.virt.libvirt.vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:11Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.167 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.167 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.168 254096 DEBUG os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.171 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.172 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.172 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Creating image(s)#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.194 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.270 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.291 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.294 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.328 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Successfully created port: 9e60e140-ca34-40f4-b867-d7c53f05bca4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.336 254096 DEBUG nova.compute.manager [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.337 254096 DEBUG oslo_concurrency.lockutils [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.337 254096 DEBUG oslo_concurrency.lockutils [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.337 254096 DEBUG oslo_concurrency.lockutils [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.338 254096 DEBUG nova.compute.manager [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Processing event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.339 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60897ca4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.340 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60897ca4-91, col_values=(('external_ids', {'iface-id': '60897ca4-9177-413c-b0f0-808dbc7d34dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:f9:27', 'vm-uuid': 'fef208e1-3706-4d03-8385-12418e9dc230'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:13 np0005535469 NetworkManager[48891]: <info>  [1764088873.3427] manager: (tap60897ca4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.347 254096 INFO os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.365 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.366 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.367 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.367 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.388 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.391 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.455 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.456 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.456 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:43:f9:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.456 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Using config drive#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.477 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.496 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:13 np0005535469 nova_compute[254092]: 2025-11-25 16:41:13.531 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'keypairs' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 213 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Nov 25 11:41:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:13.619 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:13.619 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.194 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.803s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.241 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] resizing rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.360 254096 DEBUG nova.objects.instance [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'migration_context' on Instance uuid b5c5a442-8e8e-40c5-9634-e36c49e6e41b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.381 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.382 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Ensure instance console log exists: /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.382 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.383 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.383 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.683 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating config drive at /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.688 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpml21mrbe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.822 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpml21mrbe" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.842 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:14 np0005535469 nova_compute[254092]: 2025-11-25 16:41:14.846 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.118 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.118 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting local config drive /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:15 np0005535469 kernel: tap60897ca4-91: entered promiscuous mode
Nov 25 11:41:15 np0005535469 NetworkManager[48891]: <info>  [1764088875.1681] manager: (tap60897ca4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:15Z|00575|binding|INFO|Claiming lport 60897ca4-9177-413c-b0f0-808dbc7d34dc for this chassis.
Nov 25 11:41:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:15Z|00576|binding|INFO|60897ca4-9177-413c-b0f0-808dbc7d34dc: Claiming fa:16:3e:43:f9:27 10.100.0.13
Nov 25 11:41:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:15Z|00577|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc ovn-installed in OVS
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:15 np0005535469 systemd-udevd[323182]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:15 np0005535469 NetworkManager[48891]: <info>  [1764088875.1997] device (tap60897ca4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:15 np0005535469 NetworkManager[48891]: <info>  [1764088875.2006] device (tap60897ca4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:15 np0005535469 systemd-machined[216343]: New machine qemu-72-instance-0000003d.
Nov 25 11:41:15 np0005535469 systemd[1]: Started Virtual Machine qemu-72-instance-0000003d.
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.523 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.522 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.524 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:41:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 214 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 8.4 MiB/s wr, 182 op/s
Nov 25 11:41:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:15Z|00578|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc up in Southbound
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.721 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.722 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.724 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.731 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for fef208e1-3706-4d03-8385-12418e9dc230 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.732 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088875.7313852, fef208e1-3706-4d03-8385-12418e9dc230 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.732 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Started (Lifecycle Event)#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.735 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39927fc1-a7b6-423b-a373-86b5f77c4aa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.738 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.738 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.740 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67dbe776-3c47-4699-af6b-f51bea8486d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.741 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance spawned successfully.#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.742 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da5fea1e-1037-4d25-a3d6-1364d0b0d896]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.755 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[19a6407e-0785-447a-aa78-bac53e8c54d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.761 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.771 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.772 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e4d2a9-b1cd-4b09-9883-75ef053b0599]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.775 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.775 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.776 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.776 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.777 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.777 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.799 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d55693-bfc7-4ac9-936e-f10d2ea17ea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.802 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.803 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088875.7347782, fef208e1-3706-4d03-8385-12418e9dc230 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.803 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:41:15 np0005535469 NetworkManager[48891]: <info>  [1764088875.8056] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.804 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b57baa4-1459-4b0e-903f-f6b7864f8339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.833 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.836 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088875.7373233, fef208e1-3706-4d03-8385-12418e9dc230 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.837 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.843 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b76b4a69-5ca4-4b86-9c78-49173b6cf4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.846 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[293aaf1f-9833-4130-8279-2ec71c030bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.856 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.858 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.864 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:41:15 np0005535469 NetworkManager[48891]: <info>  [1764088875.8685] device (tap62c0a8be-b0): carrier: link connected
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.873 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ede3c6f4-b644-416c-a90f-fbf26dc5bc0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.891 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.892 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a4459-476c-40d9-b878-e063845c2ed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533345, 'reachable_time': 22257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.906 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca954004-a1d7-4b28-b1c0-6ef0ccb536e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533345, 'tstamp': 533345}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323260, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.912 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.912 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.913 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9b6b2e-c651-4cfc-94b0-bc9bae24dad6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533345, 'reachable_time': 22257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323261, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.948 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1df239f6-66e3-48c8-8fe5-413a237f8b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:15 np0005535469 nova_compute[254092]: 2025-11-25 16:41:15.973 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.012 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea2cae5-9e9f-47e6-beb0-8602ead4ab3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.014 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.014 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:16 np0005535469 NetworkManager[48891]: <info>  [1764088876.0505] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Nov 25 11:41:16 np0005535469 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.052 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:16Z|00579|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.069 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.070 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70b7acdb-62ae-44aa-9efa-23dcb0456f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.071 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:41:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.071 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:41:16 np0005535469 podman[323293]: 2025-11-25 16:41:16.477525886 +0000 UTC m=+0.102016968 container create dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:41:16 np0005535469 podman[323293]: 2025-11-25 16:41:16.400004574 +0000 UTC m=+0.024495676 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:41:16 np0005535469 systemd[1]: Started libpod-conmon-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727.scope.
Nov 25 11:41:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc6a3717c2bdadab6ffc5e32dfd90bd78792ae46a7514d2c4fc20fc246fb569/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:16 np0005535469 podman[323293]: 2025-11-25 16:41:16.57936825 +0000 UTC m=+0.203859362 container init dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:41:16 np0005535469 podman[323293]: 2025-11-25 16:41:16.585889556 +0000 UTC m=+0.210380638 container start dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.599 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Successfully updated port: 5136afea-102e-46a1-8fdb-0af970c5af04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:41:16 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : New worker (323313) forked
Nov 25 11:41:16 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : Loading success.
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.614 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.615 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.615 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.769 254096 DEBUG nova.compute.manager [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-changed-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.770 254096 DEBUG nova.compute.manager [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Refreshing instance network info cache due to event network-changed-5136afea-102e-46a1-8fdb-0af970c5af04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:41:16 np0005535469 nova_compute[254092]: 2025-11-25 16:41:16.770 254096 DEBUG oslo_concurrency.lockutils [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:17 np0005535469 nova_compute[254092]: 2025-11-25 16:41:17.420 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 226 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 150 KiB/s rd, 7.5 MiB/s wr, 151 op/s
Nov 25 11:41:17 np0005535469 nova_compute[254092]: 2025-11-25 16:41:17.626 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Successfully created port: 1404e99c-a32c-404a-a7d6-3daccc67c48b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:18 np0005535469 nova_compute[254092]: 2025-11-25 16:41:18.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:18 np0005535469 nova_compute[254092]: 2025-11-25 16:41:18.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:18 np0005535469 nova_compute[254092]: 2025-11-25 16:41:18.818 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:18 np0005535469 nova_compute[254092]: 2025-11-25 16:41:18.835 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:18 np0005535469 nova_compute[254092]: 2025-11-25 16:41:18.836 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.315 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Successfully updated port: 9e60e140-ca34-40f4-b867-d7c53f05bca4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.393 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.394 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.394 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 226 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 7.1 MiB/s wr, 142 op/s
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.626 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.631 254096 DEBUG nova.compute.manager [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-changed-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.632 254096 DEBUG nova.compute.manager [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Refreshing instance network info cache due to event network-changed-9e60e140-ca34-40f4-b867-d7c53f05bca4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.632 254096 DEBUG oslo_concurrency.lockutils [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.786 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.786 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance network_info: |[{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.786 254096 DEBUG oslo_concurrency.lockutils [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.787 254096 DEBUG nova.network.neutron [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Refreshing network info cache for port 5136afea-102e-46a1-8fdb-0af970c5af04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.790 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start _get_guest_xml network_info=[{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.794 254096 WARNING nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.799 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.800 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.802 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.803 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.803 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.803 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.808 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:19 np0005535469 nova_compute[254092]: 2025-11-25 16:41:19.841 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.222 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.226 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.226 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.226 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.227 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.227 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.228 254096 INFO nova.compute.manager [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Terminating instance#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.229 254096 DEBUG nova.compute.manager [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:41:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009668596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.278 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.310 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.315 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:20 np0005535469 kernel: tap60897ca4-91 (unregistering): left promiscuous mode
Nov 25 11:41:20 np0005535469 NetworkManager[48891]: <info>  [1764088880.3833] device (tap60897ca4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00580|binding|INFO|Releasing lport 60897ca4-9177-413c-b0f0-808dbc7d34dc from this chassis (sb_readonly=0)
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00581|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc down in Southbound
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00582|binding|INFO|Removing iface tap60897ca4-91 ovn-installed in OVS
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.395 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.447 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.449 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:41:20 np0005535469 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 25 11:41:20 np0005535469 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003d.scope: Consumed 4.998s CPU time.
Nov 25 11:41:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.451 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:41:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.452 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02467b22-ddbb-4ae4-bbef-4a41401e8a84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.453 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore#033[00m
Nov 25 11:41:20 np0005535469 systemd-machined[216343]: Machine qemu-72-instance-0000003d terminated.
Nov 25 11:41:20 np0005535469 kernel: tap60897ca4-91: entered promiscuous mode
Nov 25 11:41:20 np0005535469 NetworkManager[48891]: <info>  [1764088880.6765] manager: (tap60897ca4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Nov 25 11:41:20 np0005535469 systemd-udevd[323366]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00583|binding|INFO|Claiming lport 60897ca4-9177-413c-b0f0-808dbc7d34dc for this chassis.
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00584|binding|INFO|60897ca4-9177-413c-b0f0-808dbc7d34dc: Claiming fa:16:3e:43:f9:27 10.100.0.13
Nov 25 11:41:20 np0005535469 kernel: tap60897ca4-91 (unregistering): left promiscuous mode
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00585|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc ovn-installed in OVS
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00586|if_status|INFO|Dropped 5 log messages in last 116 seconds (most recently, 116 seconds ago) due to excessive rate
Nov 25 11:41:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:20Z|00587|if_status|INFO|Not setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc down as sb is readonly
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.726 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance destroyed successfully.#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.727 254096 DEBUG nova.objects.instance [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.737 254096 DEBUG nova.virt.libvirt.vif [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:15Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.738 254096 DEBUG nova.network.os_vif_util [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.739 254096 DEBUG nova.network.os_vif_util [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.739 254096 DEBUG os_vif [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.742 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60897ca4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.746 254096 INFO os_vif [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')#033[00m
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 11:41:20 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : haproxy version is 2.8.14-c23fe91
Nov 25 11:41:20 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : path to executable is /usr/sbin/haproxy
Nov 25 11:41:20 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [WARNING]  (323311) : Exiting Master process...
Nov 25 11:41:20 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [ALERT]    (323311) : Current worker (323313) exited with code 143 (Terminated)
Nov 25 11:41:20 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [WARNING]  (323311) : All workers exited. Exiting... (0)
Nov 25 11:41:20 np0005535469 systemd[1]: libpod-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727.scope: Deactivated successfully.
Nov 25 11:41:20 np0005535469 podman[323406]: 2025-11-25 16:41:20.806318055 +0000 UTC m=+0.244969037 container died dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:41:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766047327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.856 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.859 254096 DEBUG nova.virt.libvirt.vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:05Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.859 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.861 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.863 254096 DEBUG nova.objects.instance [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.897 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <uuid>013dc18e-57cd-4733-8e98-7d20e3b5c4db</uuid>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <name>instance-0000003e</name>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1278256596</nova:name>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:19</nova:creationTime>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <nova:port uuid="5136afea-102e-46a1-8fdb-0af970c5af04">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <entry name="serial">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <entry name="uuid">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:70:61:e3"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <target dev="tap5136afea-10"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/console.log" append="off"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:20 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:20 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:20 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:20 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.899 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Preparing to wait for external event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.900 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.900 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.900 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.901 254096 DEBUG nova.virt.libvirt.vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:05Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.902 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.902 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.903 254096 DEBUG os_vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.904 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.904 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.908 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.908 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5136afea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.909 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5136afea-10, col_values=(('external_ids', {'iface-id': '5136afea-102e-46a1-8fdb-0af970c5af04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:61:e3', 'vm-uuid': '013dc18e-57cd-4733-8e98-7d20e3b5c4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:20 np0005535469 NetworkManager[48891]: <info>  [1764088880.9124] manager: (tap5136afea-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:20 np0005535469 nova_compute[254092]: 2025-11-25 16:41:20.979 254096 INFO os_vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')#033[00m
Nov 25 11:41:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:21Z|00588|binding|INFO|Releasing lport 60897ca4-9177-413c-b0f0-808dbc7d34dc from this chassis (sb_readonly=0)
Nov 25 11:41:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:21.023 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:21 np0005535469 nova_compute[254092]: 2025-11-25 16:41:21.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:21.186 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:21 np0005535469 nova_compute[254092]: 2025-11-25 16:41:21.277 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:21 np0005535469 nova_compute[254092]: 2025-11-25 16:41:21.277 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:21 np0005535469 nova_compute[254092]: 2025-11-25 16:41:21.278 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No VIF found with MAC fa:16:3e:70:61:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:41:21 np0005535469 nova_compute[254092]: 2025-11-25 16:41:21.278 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Using config drive#033[00m
Nov 25 11:41:21 np0005535469 nova_compute[254092]: 2025-11-25 16:41:21.470 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 227 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 212 op/s
Nov 25 11:41:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727-userdata-shm.mount: Deactivated successfully.
Nov 25 11:41:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4bc6a3717c2bdadab6ffc5e32dfd90bd78792ae46a7514d2c4fc20fc246fb569-merged.mount: Deactivated successfully.
Nov 25 11:41:21 np0005535469 podman[323406]: 2025-11-25 16:41:21.966182172 +0000 UTC m=+1.404833154 container cleanup dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:41:22 np0005535469 podman[323496]: 2025-11-25 16:41:22.313167355 +0000 UTC m=+0.326203671 container remove dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 11:41:22 np0005535469 systemd[1]: libpod-conmon-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727.scope: Deactivated successfully.
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.319 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a3ed69-84e5-4d16-9c00-9a62aebf0c30]: (4, ('Tue Nov 25 04:41:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727)\ndfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727\nTue Nov 25 04:41:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727)\ndfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.322 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[609822df-e4dc-482f-bc48-48785b88efa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.323 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:22 np0005535469 nova_compute[254092]: 2025-11-25 16:41:22.325 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:22 np0005535469 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 11:41:22 np0005535469 nova_compute[254092]: 2025-11-25 16:41:22.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.348 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d15aed22-8c92-48f8-8109-51b3ff41ba7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.367 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1ce592-f093-4dde-97d4-584b04b5ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.368 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b076c51c-62a0-4dc5-b5aa-fd5453c85d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.387 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b57587-c5e4-4236-b09a-af7c079e047c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533338, 'reachable_time': 33699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323515, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.391 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.391 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[19a718b6-f9bc-4817-9b6b-0bf20ac79b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.392 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:41:22 np0005535469 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.394 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.395 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[988d2694-8122-4ca4-8ea4-fe10ac7e515a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.396 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.397 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:41:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.397 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26125889-0c33-458f-a3b2-9d8dbfc64b17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 227 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.811 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Creating config drive at /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.816 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaqiov4s6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.854 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Successfully updated port: 1404e99c-a32c-404a-a7d6-3daccc67c48b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.877 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.878 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.878 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.904 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updating instance_info_cache with network_info: [{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.931 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.931 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance network_info: |[{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.932 254096 DEBUG oslo_concurrency.lockutils [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.932 254096 DEBUG nova.network.neutron [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Refreshing network info cache for port 9e60e140-ca34-40f4-b867-d7c53f05bca4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.935 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start _get_guest_xml network_info=[{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': 'a4aa3708-bb73-4b5a-b3f3-42153358021e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.939 254096 WARNING nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.945 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.946 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.952 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.952 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.953 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.953 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.954 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.954 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.954 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.956 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.956 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.956 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.959 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:23 np0005535469 nova_compute[254092]: 2025-11-25 16:41:23.987 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaqiov4s6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.009 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.013 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.194 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.230 254096 INFO nova.virt.libvirt.driver [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting instance files /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.231 254096 INFO nova.virt.libvirt.driver [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deletion of /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del complete#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.249 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.250 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deleting local config drive /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:24 np0005535469 kernel: tap5136afea-10: entered promiscuous mode
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.3038] manager: (tap5136afea-10): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:24Z|00589|binding|INFO|Claiming lport 5136afea-102e-46a1-8fdb-0af970c5af04 for this chassis.
Nov 25 11:41:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:24Z|00590|binding|INFO|5136afea-102e-46a1-8fdb-0af970c5af04: Claiming fa:16:3e:70:61:e3 10.100.0.13
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.319 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.321 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.322 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.331 254096 INFO nova.compute.manager [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 4.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.331 254096 DEBUG oslo.service.loopingcall [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.332 254096 DEBUG nova.compute.manager [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.332 254096 DEBUG nova.network.neutron [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:41:24 np0005535469 systemd-udevd[323590]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[758ff0bb-eb7a-470a-be92-3130dc2b505f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.337 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf00f265b-61 in ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.338 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf00f265b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.338 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6755d9-a128-4360-88bb-9f31940acf17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 systemd-machined[216343]: New machine qemu-73-instance-0000003e.
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.345 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0db630d-31e5-49ea-aa79-a4642d1bd52d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.3523] device (tap5136afea-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.3536] device (tap5136afea-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:24 np0005535469 systemd[1]: Started Virtual Machine qemu-73-instance-0000003e.
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.358 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:24Z|00591|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 ovn-installed in OVS
Nov 25 11:41:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:24Z|00592|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 up in Southbound
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.362 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0643ed-8fa0-4fd9-af88-8d9f54ee760d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.387 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1174d8-aad6-42d1-82fb-762d4dc5f209]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054519550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.418 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c3f56d-b1e5-47a0-87d1-55d20b6d8fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 systemd-udevd[323594]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.424 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e81bf8d-95c7-4c12-878a-f704ef779e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.4258] manager: (tapf00f265b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.427 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.458 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cd738bc4-4328-4e8d-8940-74601e482f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.462 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[31247e0a-0051-4729-8603-3578093eeb39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.466 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.476 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.4834] device (tapf00f265b-60): carrier: link connected
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.489 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe8c11a-f4dc-4281-8179-ef705036f61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.505 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf27eaa0-0536-40fe-8920-60cd79f6d6ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323644, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.507 254096 DEBUG nova.network.neutron [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updated VIF entry in instance network info cache for port 5136afea-102e-46a1-8fdb-0af970c5af04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.513 254096 DEBUG nova.network.neutron [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.523 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feae20a5-451d-42ed-b5df-ff6539cef218]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:fc66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534207, 'tstamp': 534207}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323645, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.524 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.532 254096 DEBUG oslo_concurrency.lockutils [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.541 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04fc487a-0b2f-4158-aa0e-b813978277f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323646, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[885e4165-070c-47ae-9ea0-671d466a1794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.625 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2164b1e9-3b63-498b-aa60-e444fb4073da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.627 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.627 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:24 np0005535469 kernel: tapf00f265b-60: entered promiscuous mode
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.628 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.6295] manager: (tapf00f265b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.631 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:24Z|00593|binding|INFO|Releasing lport 57c889f7-e44b-4f52-8e8a-db17b4e1f3b8 from this chassis (sb_readonly=0)
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.653 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f00f265b-63fa-48fb-9383-38ff6abf51c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f00f265b-63fa-48fb-9383-38ff6abf51c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.654 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81b38d15-988e-448e-b81e-90094a0d7ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.655 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/f00f265b-63fa-48fb-9383-38ff6abf51c1.pid.haproxy
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:41:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.657 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'env', 'PROCESS_TAG=haproxy-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f00f265b-63fa-48fb-9383-38ff6abf51c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:41:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750340307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.931 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.933 254096 DEBUG nova.virt.libvirt.vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-596690940',display_name='tempest-ListServerFiltersTestJSON-instance-596690940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-596690940',id=63,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-pzh0k6o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-L
istServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:08Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=dce3a591-9fb6-4495-a7fb-867af2de384f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.934 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.935 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.936 254096 DEBUG nova.objects.instance [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid dce3a591-9fb6-4495-a7fb-867af2de384f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.958 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <uuid>dce3a591-9fb6-4495-a7fb-867af2de384f</uuid>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <name>instance-0000003f</name>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-596690940</nova:name>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:23</nova:creationTime>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <nova:port uuid="9e60e140-ca34-40f4-b867-d7c53f05bca4">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <entry name="serial">dce3a591-9fb6-4495-a7fb-867af2de384f</entry>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <entry name="uuid">dce3a591-9fb6-4495-a7fb-867af2de384f</entry>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/dce3a591-9fb6-4495-a7fb-867af2de384f_disk">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:67:c0:a9"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <target dev="tap9e60e140-ca"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/console.log" append="off"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:24 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:24 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:24 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:24 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Preparing to wait for external event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.961 254096 DEBUG nova.virt.libvirt.vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-596690940',display_name='tempest-ListServerFiltersTestJSON-instance-596690940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-596690940',id=63,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-pzh0k6o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name=
'tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:08Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=dce3a591-9fb6-4495-a7fb-867af2de384f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.961 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.962 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.962 254096 DEBUG os_vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.963 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.964 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.967 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e60e140-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.967 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e60e140-ca, col_values=(('external_ids', {'iface-id': '9e60e140-ca34-40f4-b867-d7c53f05bca4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:c0:a9', 'vm-uuid': 'dce3a591-9fb6-4495-a7fb-867af2de384f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:24 np0005535469 NetworkManager[48891]: <info>  [1764088884.9696] manager: (tap9e60e140-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.975 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088884.974942, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.975 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Started (Lifecycle Event)
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:24 np0005535469 nova_compute[254092]: 2025-11-25 16:41:24.978 254096 INFO os_vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca')
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.082 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.089 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088884.975989, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.090 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Paused (Lifecycle Event)
Nov 25 11:41:25 np0005535469 podman[323741]: 2025-11-25 16:41:24.996866433 +0000 UTC m=+0.023013186 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.123 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.124 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.124 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No VIF found with MAC fa:16:3e:67:c0:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.124 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Using config drive
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.146 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.152 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.157 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:25 np0005535469 podman[323741]: 2025-11-25 16:41:25.172453176 +0000 UTC m=+0.198599899 container create 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.198 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:25 np0005535469 systemd[1]: Started libpod-conmon-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192.scope.
Nov 25 11:41:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b958805e8f49ae0dd91301b40e75db313a46de7a441e4d3c3ccb3b8eb4101d4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:25 np0005535469 podman[323741]: 2025-11-25 16:41:25.266974771 +0000 UTC m=+0.293121504 container init 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:41:25 np0005535469 podman[323741]: 2025-11-25 16:41:25.272621554 +0000 UTC m=+0.298768287 container start 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:25 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : New worker (323781) forked
Nov 25 11:41:25 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : Loading success.
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.379 254096 DEBUG nova.compute.manager [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-changed-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.380 254096 DEBUG nova.compute.manager [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Refreshing instance network info cache due to event network-changed-1404e99c-a32c-404a-a7d6-3daccc67c48b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.380 254096 DEBUG oslo_concurrency.lockutils [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.527 254096 DEBUG nova.compute.manager [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.527 254096 DEBUG oslo_concurrency.lockutils [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.528 254096 DEBUG oslo_concurrency.lockutils [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.528 254096 DEBUG oslo_concurrency.lockutils [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.529 254096 DEBUG nova.compute.manager [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:41:25 np0005535469 nova_compute[254092]: 2025-11-25 16:41:25.530 254096 WARNING nova.compute.manager [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state active and task_state deleting.
Nov 25 11:41:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 193 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Nov 25 11:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.611 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Creating config drive at /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.616 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwjiw4ol execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.757 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwjiw4ol" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.781 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.784 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.820 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.821 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.843 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.862 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updating instance_info_cache with network_info: [{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.891 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.891 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance network_info: |[{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.892 254096 DEBUG oslo_concurrency.lockutils [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.892 254096 DEBUG nova.network.neutron [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Refreshing network info cache for port 1404e99c-a32c-404a-a7d6-3daccc67c48b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.896 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start _get_guest_xml network_info=[{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.908 254096 WARNING nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.915 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.916 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.919 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.920 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.920 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.920 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='48b3ab46-af13-4c6a-9088-ba98b648a375',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.925 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.985 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.986 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.988 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:26 np0005535469 nova_compute[254092]: 2025-11-25 16:41:26.988 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deleting local config drive /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.000 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.001 254096 INFO nova.compute.claims [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:41:27 np0005535469 kernel: tap9e60e140-ca: entered promiscuous mode
Nov 25 11:41:27 np0005535469 NetworkManager[48891]: <info>  [1764088887.0435] manager: (tap9e60e140-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 25 11:41:27 np0005535469 systemd-udevd[323621]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:27Z|00594|binding|INFO|Claiming lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 for this chassis.
Nov 25 11:41:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:27Z|00595|binding|INFO|9e60e140-ca34-40f4-b867-d7c53f05bca4: Claiming fa:16:3e:67:c0:a9 10.100.0.8
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.058 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:c0:a9 10.100.0.8'], port_security=['fa:16:3e:67:c0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dce3a591-9fb6-4495-a7fb-867af2de384f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9e60e140-ca34-40f4-b867-d7c53f05bca4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.060 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9e60e140-ca34-40f4-b867-d7c53f05bca4 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.061 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:41:27 np0005535469 NetworkManager[48891]: <info>  [1764088887.0659] device (tap9e60e140-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:27 np0005535469 NetworkManager[48891]: <info>  [1764088887.0667] device (tap9e60e140-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40565c2c-f8be-4293-b33d-5efe50b891cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:27Z|00596|binding|INFO|Setting lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 ovn-installed in OVS
Nov 25 11:41:27 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:27Z|00597|binding|INFO|Setting lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 up in Southbound
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:27 np0005535469 systemd-machined[216343]: New machine qemu-74-instance-0000003f.
Nov 25 11:41:27 np0005535469 systemd[1]: Started Virtual Machine qemu-74-instance-0000003f.
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.121 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4369f73c-031b-43a4-9420-57e2a80f1714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.127 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec5c7ac-33c0-42ed-8b3e-b0f3f53270bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.156 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e55dc2-f9a1-4856-aa6a-3d8d5d99c1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[319bb8e0-3422-4d1b-82c8-6e1c1a74138e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323872, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.193 254096 DEBUG nova.network.neutron [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.195 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5adf288-2d99-4824-a6cd-536c14958ee3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323875, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323875, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.197 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.202 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.202 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.207 254096 INFO nova.compute.manager [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 2.87 seconds to deallocate network for instance.#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.216 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.264 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.319 254096 DEBUG nova.network.neutron [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updated VIF entry in instance network info cache for port 9e60e140-ca34-40f4-b867-d7c53f05bca4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.320 254096 DEBUG nova.network.neutron [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updating instance_info_cache with network_info: [{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.335 254096 DEBUG oslo_concurrency.lockutils [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208042239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.470 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.493 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.498 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 180 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 866 KiB/s wr, 129 op/s
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.600 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088887.5994153, dce3a591-9fb6-4495-a7fb-867af2de384f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.600 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.606 254096 DEBUG nova.compute.manager [req-0d07125c-fc0a-4f89-8782-41f7c64e6abb req-89330c76-887f-40f8-bbca-4f7b3d11cf73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-deleted-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.633 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.638 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088887.6009853, dce3a591-9fb6-4495-a7fb-867af2de384f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.638 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.659 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.664 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:41:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229298609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.696 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.703 254096 DEBUG nova.compute.provider_tree [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.721 254096 DEBUG nova.scheduler.client.report [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.749 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.750 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.752 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.830 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.830 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.862 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.914 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:41:27 np0005535469 nova_compute[254092]: 2025-11-25 16:41:27.934 254096 DEBUG oslo_concurrency.processutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22168325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.010 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.012 254096 DEBUG nova.virt.libvirt.vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1125840801',display_name='tempest-ListServerFiltersTestJSON-instance-1125840801',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1125840801',id=64,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-e1rd6q3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempes
t-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:12Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=b5c5a442-8e8e-40c5-9634-e36c49e6e41b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.013 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.014 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.015 254096 DEBUG nova.objects.instance [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5c5a442-8e8e-40c5-9634-e36c49e6e41b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.069 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <uuid>b5c5a442-8e8e-40c5-9634-e36c49e6e41b</uuid>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <name>instance-00000040</name>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <memory>196608</memory>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1125840801</nova:name>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:26</nova:creationTime>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.micro">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:memory>192</nova:memory>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <nova:port uuid="1404e99c-a32c-404a-a7d6-3daccc67c48b">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <entry name="serial">b5c5a442-8e8e-40c5-9634-e36c49e6e41b</entry>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <entry name="uuid">b5c5a442-8e8e-40c5-9634-e36c49e6e41b</entry>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:68:dd:b0"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <target dev="tap1404e99c-a3"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/console.log" append="off"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:28 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:28 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:28 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:28 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.071 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Preparing to wait for external event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.072 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.072 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.072 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.073 254096 DEBUG nova.virt.libvirt.vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1125840801',display_name='tempest-ListServerFiltersTestJSON-instance-1125840801',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1125840801',id=64,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-e1rd6q3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_na
me='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:12Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=b5c5a442-8e8e-40c5-9634-e36c49e6e41b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.074 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.074 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.075 254096 DEBUG os_vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.076 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.077 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.081 254096 DEBUG nova.compute.manager [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.081 254096 DEBUG oslo_concurrency.lockutils [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 DEBUG oslo_concurrency.lockutils [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 DEBUG oslo_concurrency.lockutils [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 DEBUG nova.compute.manager [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 WARNING nova.compute.manager [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state deleted and task_state None.
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1404e99c-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1404e99c-a3, col_values=(('external_ids', {'iface-id': '1404e99c-a32c-404a-a7d6-3daccc67c48b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:dd:b0', 'vm-uuid': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:28 np0005535469 NetworkManager[48891]: <info>  [1764088888.0897] manager: (tap1404e99c-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.092 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.098 254096 INFO os_vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3')
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373184205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.436 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.438 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.438 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Creating image(s)
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.458 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.482 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.509 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.514 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.542 254096 DEBUG oslo_concurrency.processutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.551 254096 DEBUG nova.compute.provider_tree [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.564 254096 DEBUG nova.scheduler.client.report [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.584 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.585 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.586 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.586 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.606 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.610 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a9e83b59-224b-49dd-83f7-d057737f5825_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.646 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.655 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.655 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.656 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No VIF found with MAC fa:16:3e:68:dd:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.657 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Using config drive
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.677 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.683 254096 INFO nova.scheduler.client.report [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Deleted allocations for instance fef208e1-3706-4d03-8385-12418e9dc230
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.767 254096 DEBUG nova.policy [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c1fd56de7cd4f5c9b1d85ffe8545c90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:41:28 np0005535469 nova_compute[254092]: 2025-11-25 16:41:28.837 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.281 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Creating config drive at /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.286 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgs7fm4ny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.440 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgs7fm4ny" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.466 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.470 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 180 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 106 op/s
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.597 254096 DEBUG nova.network.neutron [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updated VIF entry in instance network info cache for port 1404e99c-a32c-404a-a7d6-3daccc67c48b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.598 254096 DEBUG nova.network.neutron [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updating instance_info_cache with network_info: [{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.620 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a9e83b59-224b-49dd-83f7-d057737f5825_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.646 254096 DEBUG oslo_concurrency.lockutils [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.771 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Processing event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] No waiting events found dispatching network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.774 254096 WARNING nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received unexpected event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 for instance with vm_state building and task_state spawning.
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.774 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.783 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.874 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088889.7792208, dce3a591-9fb6-4495-a7fb-867af2de384f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Resumed (Lifecycle Event)
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.880 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.881 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.881 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.885 254096 INFO nova.virt.libvirt.driver [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance spawned successfully.
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.885 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.917 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.922 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.922 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.923 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.923 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.923 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.924 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.927 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:29 np0005535469 nova_compute[254092]: 2025-11-25 16:41:29.968 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.053 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.155 254096 INFO nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 21.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.155 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.168 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.169 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.211 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.212 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.223 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.223 254096 INFO nova.compute.claims [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.288 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.310 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Successfully created port: 63c5b67d-9c2e-4371-887d-db0d034d9072 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.353 254096 INFO nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 23.05 seconds to build instance.#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.424 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.460 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.522 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.595 254096 DEBUG nova.objects.instance [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid a9e83b59-224b-49dd-83f7-d057737f5825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.618 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.619 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Ensure instance console log exists: /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.619 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.619 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.620 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.687 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.687 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deleting local config drive /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:30 np0005535469 kernel: tap1404e99c-a3: entered promiscuous mode
Nov 25 11:41:30 np0005535469 NetworkManager[48891]: <info>  [1764088890.7474] manager: (tap1404e99c-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Nov 25 11:41:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:30Z|00598|binding|INFO|Claiming lport 1404e99c-a32c-404a-a7d6-3daccc67c48b for this chassis.
Nov 25 11:41:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:30Z|00599|binding|INFO|1404e99c-a32c-404a-a7d6-3daccc67c48b: Claiming fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:30Z|00600|binding|INFO|Setting lport 1404e99c-a32c-404a-a7d6-3daccc67c48b ovn-installed in OVS
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:30 np0005535469 systemd-machined[216343]: New machine qemu-75-instance-00000040.
Nov 25 11:41:30 np0005535469 systemd-udevd[324266]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:30Z|00601|binding|INFO|Setting lport 1404e99c-a32c-404a-a7d6-3daccc67c48b up in Southbound
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.793 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.795 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis
Nov 25 11:41:30 np0005535469 systemd[1]: Started Virtual Machine qemu-75-instance-00000040.
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.796 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 11:41:30 np0005535469 NetworkManager[48891]: <info>  [1764088890.8033] device (tap1404e99c-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:30 np0005535469 NetworkManager[48891]: <info>  [1764088890.8041] device (tap1404e99c-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf51d37-67ed-4bf3-a37f-c80ebd2d6d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.855 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfbfe89-548f-41f4-ad2b-20da30b5afd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.859 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[91032e6a-7723-40a6-baf7-b460d7ca855e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.894 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c58302-ddc2-4c47-8367-a6421b1f47df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3afb5ff7-6139-490c-93fb-bcf92a5c8587]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324280, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.944 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5908fd-68a1-48ff-9585-d56798e26854]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324281, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324281, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.946 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.950 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:41:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411224049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.982 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:30 np0005535469 nova_compute[254092]: 2025-11-25 16:41:30.989 254096 DEBUG nova.compute.provider_tree [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.005 254096 DEBUG nova.scheduler.client.report [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.089 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.090 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.093 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.102 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.103 254096 INFO nova.compute.claims [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.188 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.189 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.224 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.312 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.345 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 227 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.552 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.554 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.554 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Creating image(s)
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.578 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.598 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.620 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.623 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.704 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.705 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.706 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.707 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.730 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.735 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 42be369c-5a19-4073-becc-4f28ef579c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.766 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088891.7178912, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Started (Lifecycle Event)
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.787 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.791 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088891.71818, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.791 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Paused (Lifecycle Event)
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.810 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.813 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.830 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707549141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.881 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.886 254096 DEBUG nova.compute.provider_tree [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.902 254096 DEBUG nova.scheduler.client.report [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.953 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:31 np0005535469 nova_compute[254092]: 2025-11-25 16:41:31.954 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.144 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.146 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.189 254096 DEBUG nova.policy [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.218 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.285 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.382 254096 DEBUG nova.compute.manager [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.383 254096 DEBUG oslo_concurrency.lockutils [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.383 254096 DEBUG oslo_concurrency.lockutils [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.384 254096 DEBUG oslo_concurrency.lockutils [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.384 254096 DEBUG nova.compute.manager [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Processing event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.385 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.389 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088892.389475, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.390 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Resumed (Lifecycle Event)
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.392 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.397 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance spawned successfully.
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.397 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.414 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.414 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.415 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.415 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.416 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.416 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.420 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.423 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.436 254096 DEBUG nova.policy [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.441 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.487 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.490 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.490 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Creating image(s)
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.509 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.529 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.551 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.556 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.595 254096 DEBUG nova.compute.manager [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.596 254096 DEBUG oslo_concurrency.lockutils [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.597 254096 DEBUG oslo_concurrency.lockutils [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.597 254096 DEBUG oslo_concurrency.lockutils [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.597 254096 DEBUG nova.compute.manager [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Processing event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.598 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.611 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088892.6111143, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.612 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Resumed (Lifecycle Event)
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.631 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.637 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.638 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.639 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.641 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.641 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.667 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.676 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 679f5e87-bac4-4169-bffa-555a53e7321f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.710 254096 INFO nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 27.14 seconds to spawn the instance on the hypervisor.
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.711 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.716 254096 INFO nova.virt.libvirt.driver [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance spawned successfully.#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.716 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.718 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.748 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.754 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.755 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.757 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.758 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.759 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.759 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.814 254096 INFO nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 28.19 seconds to build instance.#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.872 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.887 254096 INFO nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 19.72 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.888 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.946 254096 INFO nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 21.27 seconds to build instance.#033[00m
Nov 25 11:41:32 np0005535469 nova_compute[254092]: 2025-11-25 16:41:32.997 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.167 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Successfully updated port: 63c5b67d-9c2e-4371-887d-db0d034d9072 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.198 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.199 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquired lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.199 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.533 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 42be369c-5a19-4073-becc-4f28ef579c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.799s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 227 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.566 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:33 np0005535469 nova_compute[254092]: 2025-11-25 16:41:33.625 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.242 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Successfully created port: 43b8a38b-0b5a-4b7d-8043-759fa3697e8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:41:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:41:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:41:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:41:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.986 254096 DEBUG nova.compute.manager [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.988 254096 DEBUG oslo_concurrency.lockutils [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.988 254096 DEBUG oslo_concurrency.lockutils [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.988 254096 DEBUG oslo_concurrency.lockutils [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.990 254096 DEBUG nova.compute.manager [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:34 np0005535469 nova_compute[254092]: 2025-11-25 16:41:34.992 254096 WARNING nova.compute.manager [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.007 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Successfully created port: 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.061 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-changed-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.062 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Refreshing instance network info cache due to event network-changed-63c5b67d-9c2e-4371-887d-db0d034d9072. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.062 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:41:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9c98f7f4-9a93-4f65-8dca-db4754f7ef2d does not exist
Nov 25 11:41:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 81725a30-d0bb-46ce-a4ae-bcc50c9e4f14 does not exist
Nov 25 11:41:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 43867373-0770-4688-b051-9350420497f0 does not exist
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.489 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 679f5e87-bac4-4169-bffa-555a53e7321f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 301 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 244 op/s
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.594 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:35 np0005535469 podman[324911]: 2025-11-25 16:41:35.680371841 +0000 UTC m=+0.021519234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:41:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:41:35 np0005535469 podman[324911]: 2025-11-25 16:41:35.786923632 +0000 UTC m=+0.128071005 container create 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.802 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088880.72169, fef208e1-3706-4d03-8385-12418e9dc230 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.803 254096 INFO nova.compute.manager [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.809 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 42be369c-5a19-4073-becc-4f28ef579c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.829 254096 DEBUG nova.compute.manager [None req-881900ba-a799-4837-8656-279dea01dedf - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.830 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.831 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Ensure instance console log exists: /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.831 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.832 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.832 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:35 np0005535469 nova_compute[254092]: 2025-11-25 16:41:35.921 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updating instance_info_cache with network_info: [{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.006 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Releasing lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.006 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance network_info: |[{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.007 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.007 254096 DEBUG nova.network.neutron [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Refreshing network info cache for port 63c5b67d-9c2e-4371-887d-db0d034d9072 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.010 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start _get_guest_xml network_info=[{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.017 254096 WARNING nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.025 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.026 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.029 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:36 np0005535469 systemd[1]: Started libpod-conmon-823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3.scope.
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.030 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.031 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.031 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.033 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.033 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.033 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.034 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.034 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.034 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.037 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:36 np0005535469 podman[324911]: 2025-11-25 16:41:36.108434345 +0000 UTC m=+0.449581738 container init 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:41:36 np0005535469 podman[324911]: 2025-11-25 16:41:36.118896699 +0000 UTC m=+0.460044072 container start 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:41:36 np0005535469 festive_bardeen[324944]: 167 167
Nov 25 11:41:36 np0005535469 systemd[1]: libpod-823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3.scope: Deactivated successfully.
Nov 25 11:41:36 np0005535469 podman[324911]: 2025-11-25 16:41:36.355279892 +0000 UTC m=+0.696427295 container attach 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:41:36 np0005535469 podman[324911]: 2025-11-25 16:41:36.356586658 +0000 UTC m=+0.697734031 container died 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:41:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824987733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.568 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.597 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.601 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4da5db6edac3f727996977e48a79968e790527e7c43389387edd47cdc5da7bcc-merged.mount: Deactivated successfully.
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.687 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 679f5e87-bac4-4169-bffa-555a53e7321f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.718 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.718 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Ensure instance console log exists: /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.719 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.719 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:36 np0005535469 nova_compute[254092]: 2025-11-25 16:41:36.720 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:37 np0005535469 podman[324911]: 2025-11-25 16:41:37.037026968 +0000 UTC m=+1.378174341 container remove 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:41:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873100244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.066 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.068 254096 DEBUG nova.virt.libvirt.vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-695586713',display_name='tempest-ServerDiskConfigTestJSON-server-695586713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-695586713',id=65,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-mslb6gvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:27Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=a9e83b59-224b-49dd-83f7-d057737f5825,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.068 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.069 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.070 254096 DEBUG nova.objects.instance [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid a9e83b59-224b-49dd-83f7-d057737f5825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.088 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <uuid>a9e83b59-224b-49dd-83f7-d057737f5825</uuid>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <name>instance-00000041</name>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-695586713</nova:name>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:36</nova:creationTime>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <nova:port uuid="63c5b67d-9c2e-4371-887d-db0d034d9072">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <entry name="serial">a9e83b59-224b-49dd-83f7-d057737f5825</entry>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <entry name="uuid">a9e83b59-224b-49dd-83f7-d057737f5825</entry>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a9e83b59-224b-49dd-83f7-d057737f5825_disk">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a9e83b59-224b-49dd-83f7-d057737f5825_disk.config">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:1b:43:17"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <target dev="tap63c5b67d-9c"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/console.log" append="off"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:37 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:37 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:37 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:37 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Preparing to wait for external event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.091 254096 DEBUG nova.virt.libvirt.vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-695586713',display_name='tempest-ServerDiskConfigTestJSON-server-695586713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-695586713',id=65,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-mslb6gvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:27Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=a9e83b59-224b-49dd-83f7-d057737f5825,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.092 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.092 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.093 254096 DEBUG os_vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.094 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.094 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.098 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63c5b67d-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.098 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63c5b67d-9c, col_values=(('external_ids', {'iface-id': '63c5b67d-9c2e-4371-887d-db0d034d9072', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:43:17', 'vm-uuid': 'a9e83b59-224b-49dd-83f7-d057737f5825'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:37 np0005535469 NetworkManager[48891]: <info>  [1764088897.1007] manager: (tap63c5b67d-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:37 np0005535469 systemd[1]: libpod-conmon-823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3.scope: Deactivated successfully.
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.109 254096 INFO os_vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c')
Nov 25 11:41:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:41:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.1 total, 600.0 interval
    Cumulative writes: 7659 writes, 34K keys, 7659 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
    Cumulative WAL: 7659 writes, 7659 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1650 writes, 7633 keys, 1650 commit groups, 1.0 writes per commit group, ingest: 9.79 MB, 0.02 MB/s
    Interval WAL: 1650 writes, 1650 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     23.8      1.65              0.14        20    0.082       0      0       0.0       0.0
      L6      1/0    9.22 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6     95.3     78.7      1.80              0.39        19    0.095     96K    10K       0.0       0.0
     Sum      1/0    9.22 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     49.8     52.4      3.44              0.53        39    0.088     96K    10K       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     70.7     73.8      0.67              0.13        10    0.067     30K   3070       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     95.3     78.7      1.80              0.39        19    0.095     96K    10K       0.0       0.0
    High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.4      1.60              0.14        19    0.084       0      0       0.0       0.0
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 3000.1 total, 600.0 interval
    Flush(GB): cumulative 0.038, interval 0.009
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.18 GB write, 0.06 MB/s write, 0.17 GB read, 0.06 MB/s read, 3.4 seconds
    Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.7 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 19.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000232 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(1304,19.08 MB,6.27751%) FilterBlock(40,278.98 KB,0.0896203%) IndexBlock(40,477.27 KB,0.153316%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.259 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.261 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.261 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:1b:43:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.261 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Using config drive
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.279 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:37 np0005535469 podman[325049]: 2025-11-25 16:41:37.213066293 +0000 UTC m=+0.023928620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:41:37 np0005535469 podman[325049]: 2025-11-25 16:41:37.348144958 +0000 UTC m=+0.159007265 container create 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:41:37 np0005535469 systemd[1]: Started libpod-conmon-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope.
Nov 25 11:41:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:37 np0005535469 podman[325049]: 2025-11-25 16:41:37.502382372 +0000 UTC m=+0.313244699 container init 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:41:37 np0005535469 podman[325049]: 2025-11-25 16:41:37.510309367 +0000 UTC m=+0.321171674 container start 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.531 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Successfully updated port: 43b8a38b-0b5a-4b7d-8043-759fa3697e8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:41:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 287 op/s
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.574 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.576 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.576 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.579 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Creating config drive at /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.584 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7zuzvjk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:37 np0005535469 podman[325049]: 2025-11-25 16:41:37.609142219 +0000 UTC m=+0.420004526 container attach 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.721 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7zuzvjk" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.742 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.745 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config a9e83b59-224b-49dd-83f7-d057737f5825_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.774 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.918 254096 DEBUG nova.compute.manager [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-changed-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.919 254096 DEBUG nova.compute.manager [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Refreshing instance network info cache due to event network-changed-43b8a38b-0b5a-4b7d-8043-759fa3697e8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.919 254096 DEBUG oslo_concurrency.lockutils [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:41:37 np0005535469 nova_compute[254092]: 2025-11-25 16:41:37.988 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Successfully updated port: 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.057 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.058 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.058 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.171 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config a9e83b59-224b-49dd-83f7-d057737f5825_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.172 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deleting local config drive /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 kernel: tap63c5b67d-9c: entered promiscuous mode
Nov 25 11:41:38 np0005535469 NetworkManager[48891]: <info>  [1764088898.2143] manager: (tap63c5b67d-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 25 11:41:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:38Z|00602|binding|INFO|Claiming lport 63c5b67d-9c2e-4371-887d-db0d034d9072 for this chassis.
Nov 25 11:41:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:38Z|00603|binding|INFO|63c5b67d-9c2e-4371-887d-db0d034d9072: Claiming fa:16:3e:1b:43:17 10.100.0.7
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.224 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:43:17 10.100.0.7'], port_security=['fa:16:3e:1b:43:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a9e83b59-224b-49dd-83f7-d057737f5825', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63c5b67d-9c2e-4371-887d-db0d034d9072) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.233 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63c5b67d-9c2e-4371-887d-db0d034d9072 in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.234 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9#033[00m
Nov 25 11:41:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:38Z|00604|binding|INFO|Setting lport 63c5b67d-9c2e-4371-887d-db0d034d9072 ovn-installed in OVS
Nov 25 11:41:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:38Z|00605|binding|INFO|Setting lport 63c5b67d-9c2e-4371-887d-db0d034d9072 up in Southbound
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.247 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b8dcbf-cb09-4fdc-bc97-7111647eeb90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 systemd-udevd[325142]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.250 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.252 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.252 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0418c8e6-d96d-48b4-8181-43340772b29b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.254 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a498cdb4-9d8c-4009-98ba-dd759393f71f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 NetworkManager[48891]: <info>  [1764088898.2631] device (tap63c5b67d-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:38 np0005535469 NetworkManager[48891]: <info>  [1764088898.2643] device (tap63c5b67d-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:38 np0005535469 systemd-machined[216343]: New machine qemu-76-instance-00000041.
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.266 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[dcef4dc2-9a61-499e-957c-46d6bd3e339f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 systemd[1]: Started Virtual Machine qemu-76-instance-00000041.
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.282 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0702c0f1-5ab6-41d0-9d1e-6abbab7ef41d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.313 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0ddf6c-c347-45e9-9039-57a8856f346d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 NetworkManager[48891]: <info>  [1764088898.3196] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.320 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77073b75-5a1e-4b50-b49c-c936733c12dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.331 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.359 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd7377d-0e7d-455b-970d-a91d68abb08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.363 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7505e7df-5a92-4f03-9d69-2cb97117c8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 NetworkManager[48891]: <info>  [1764088898.3830] device (tap62c0a8be-b0): carrier: link connected
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.386 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dc085d-92a0-4541-a29b-169ce981602f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.413 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e29db0-fc7c-4472-8779-1328d9a7c47b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535597, 'reachable_time': 39029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325183, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.430 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3840e01-01b1-4a6c-a011-3b0d8330d77e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535597, 'tstamp': 535597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325186, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.447 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99f56300-17d1-40c8-8639-e58aaf3905f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535597, 'reachable_time': 39029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325189, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.496 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86066a18-4d2e-4381-ae24-3c034dc20c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67190690-c937-4ee2-96be-9e3abb1dca4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:38 np0005535469 NetworkManager[48891]: <info>  [1764088898.5847] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Nov 25 11:41:38 np0005535469 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.595 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:38Z|00606|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.600 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.600 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79c64a57-f146-4f22-ad08-74f9899d9c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.601 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:41:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.601 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:38 np0005535469 thirsty_rosalind[325084]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:41:38 np0005535469 thirsty_rosalind[325084]: --> relative data size: 1.0
Nov 25 11:41:38 np0005535469 thirsty_rosalind[325084]: --> All data devices are unavailable
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.658 254096 DEBUG nova.network.neutron [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updated VIF entry in instance network info cache for port 63c5b67d-9c2e-4371-887d-db0d034d9072. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.659 254096 DEBUG nova.network.neutron [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updating instance_info_cache with network_info: [{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.680 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:38 np0005535469 systemd[1]: libpod-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope: Deactivated successfully.
Nov 25 11:41:38 np0005535469 systemd[1]: libpod-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope: Consumed 1.043s CPU time.
Nov 25 11:41:38 np0005535469 podman[325049]: 2025-11-25 16:41:38.68219078 +0000 UTC m=+1.493053087 container died 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.680 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.686 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.686 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.686 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.687 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] No waiting events found dispatching network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.687 254096 WARNING nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received unexpected event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b for instance with vm_state active and task_state None.#033[00m
Nov 25 11:41:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93-merged.mount: Deactivated successfully.
Nov 25 11:41:38 np0005535469 podman[325049]: 2025-11-25 16:41:38.857030663 +0000 UTC m=+1.667892970 container remove 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:41:38 np0005535469 systemd[1]: libpod-conmon-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope: Deactivated successfully.
Nov 25 11:41:38 np0005535469 podman[325231]: 2025-11-25 16:41:38.885446914 +0000 UTC m=+0.173544950 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:38 np0005535469 podman[325238]: 2025-11-25 16:41:38.899298319 +0000 UTC m=+0.185774870 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:41:38 np0005535469 podman[325230]: 2025-11-25 16:41:38.935159753 +0000 UTC m=+0.225551690 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.969 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088898.9685616, a9e83b59-224b-49dd-83f7-d057737f5825 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.970 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Started (Lifecycle Event)#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.990 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.994 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088898.9694002, a9e83b59-224b-49dd-83f7-d057737f5825 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:38 np0005535469 nova_compute[254092]: 2025-11-25 16:41:38.994 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.018 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:39 np0005535469 podman[325364]: 2025-11-25 16:41:39.020836387 +0000 UTC m=+0.059112514 container create fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.031 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.053 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:41:39 np0005535469 systemd[1]: Started libpod-conmon-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9.scope.
Nov 25 11:41:39 np0005535469 podman[325364]: 2025-11-25 16:41:38.990596517 +0000 UTC m=+0.028872674 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:41:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549e2f31ca8fd68d7bd7aa09e114cdc7531e725e9150fc3ab7f41d99c3d361aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.107 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updating instance_info_cache with network_info: [{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:39 np0005535469 podman[325364]: 2025-11-25 16:41:39.129586818 +0000 UTC m=+0.167862965 container init fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:41:39 np0005535469 podman[325364]: 2025-11-25 16:41:39.137198914 +0000 UTC m=+0.175475031 container start fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.137 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.137 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance network_info: |[{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.138 254096 DEBUG oslo_concurrency.lockutils [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.138 254096 DEBUG nova.network.neutron [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Refreshing network info cache for port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.140 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start _get_guest_xml network_info=[{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.145 254096 WARNING nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.150 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.151 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:39 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : New worker (325460) forked
Nov 25 11:41:39 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : Loading success.
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.160 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.161 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.162 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.162 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.163 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.163 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.163 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.164 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.164 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.165 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.165 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.165 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.166 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.166 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.169 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.485687169 +0000 UTC m=+0.042154805 container create f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 11:41:39 np0005535469 systemd[1]: Started libpod-conmon-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope.
Nov 25 11:41:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 284 op/s
Nov 25 11:41:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.467881735 +0000 UTC m=+0.024349401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.572391631 +0000 UTC m=+0.128859297 container init f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.578852336 +0000 UTC m=+0.135319982 container start f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.582618058 +0000 UTC m=+0.139085694 container attach f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:41:39 np0005535469 systemd[1]: libpod-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope: Deactivated successfully.
Nov 25 11:41:39 np0005535469 jolly_herschel[325537]: 167 167
Nov 25 11:41:39 np0005535469 conmon[325537]: conmon f8d3affa46b3c1f71338 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope/container/memory.events
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.586081941 +0000 UTC m=+0.142549597 container died f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a29de91016a70ce30ed111802e61732b05a1a219ce343a631d17fe7b502bf26a-merged.mount: Deactivated successfully.
Nov 25 11:41:39 np0005535469 podman[325523]: 2025-11-25 16:41:39.622780828 +0000 UTC m=+0.179248464 container remove f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:41:39 np0005535469 systemd[1]: libpod-conmon-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope: Deactivated successfully.
Nov 25 11:41:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/608732426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.750 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.779 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:39 np0005535469 nova_compute[254092]: 2025-11-25 16:41:39.784 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:39 np0005535469 podman[325563]: 2025-11-25 16:41:39.805065243 +0000 UTC m=+0.044104728 container create 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:41:39 np0005535469 systemd[1]: Started libpod-conmon-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope.
Nov 25 11:41:39 np0005535469 podman[325563]: 2025-11-25 16:41:39.785458281 +0000 UTC m=+0.024497786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:41:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:39 np0005535469 podman[325563]: 2025-11-25 16:41:39.929219161 +0000 UTC m=+0.168258666 container init 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:41:39 np0005535469 podman[325563]: 2025-11-25 16:41:39.937225658 +0000 UTC m=+0.176265153 container start 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:41:39 np0005535469 podman[325563]: 2025-11-25 16:41:39.94135148 +0000 UTC m=+0.180390985 container attach 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:41:40
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.146 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updating instance_info_cache with network_info: [{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.176 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.177 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance network_info: |[{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.180 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start _get_guest_xml network_info=[{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.187 254096 WARNING nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.196 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.197 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.204 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.204 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.205 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.205 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.211 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616757905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.289 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.291 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-1',id=66,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:31Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=42be369c-5a19-4073-becc-4f28ef579c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.292 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.293 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.294 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42be369c-5a19-4073-becc-4f28ef579c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.317 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <uuid>42be369c-5a19-4073-becc-4f28ef579c2c</uuid>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <name>instance-00000042</name>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:name>tempest-tempest.common.compute-instance-258940450-1</nova:name>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:39</nova:creationTime>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <nova:port uuid="43b8a38b-0b5a-4b7d-8043-759fa3697e8a">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <entry name="serial">42be369c-5a19-4073-becc-4f28ef579c2c</entry>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <entry name="uuid">42be369c-5a19-4073-becc-4f28ef579c2c</entry>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/42be369c-5a19-4073-becc-4f28ef579c2c_disk">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/42be369c-5a19-4073-becc-4f28ef579c2c_disk.config">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d3:af:21"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <target dev="tap43b8a38b-0b"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/console.log" append="off"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:40 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:40 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:40 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.317 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Preparing to wait for external event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.318 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.318 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.318 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.319 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-1',id=66,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest
-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:31Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=42be369c-5a19-4073-becc-4f28ef579c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.319 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.320 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.320 254096 DEBUG os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.321 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.322 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.325 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43b8a38b-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.325 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43b8a38b-0b, col_values=(('external_ids', {'iface-id': '43b8a38b-0b5a-4b7d-8043-759fa3697e8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:af:21', 'vm-uuid': '42be369c-5a19-4073-becc-4f28ef579c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:41:40 np0005535469 NetworkManager[48891]: <info>  [1764088900.3311] manager: (tap43b8a38b-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.349 254096 INFO os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b')#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.417 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.418 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.418 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:d3:af:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.419 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Using config drive#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.440 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:41:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734604155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.688 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.707 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.714 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]: {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:    "0": [
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:        {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "devices": [
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "/dev/loop3"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            ],
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_name": "ceph_lv0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_size": "21470642176",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "name": "ceph_lv0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "tags": {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cluster_name": "ceph",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.crush_device_class": "",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.encrypted": "0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osd_id": "0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.type": "block",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.vdo": "0"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            },
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "type": "block",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "vg_name": "ceph_vg0"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:        }
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:    ],
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:    "1": [
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:        {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "devices": [
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "/dev/loop4"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            ],
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_name": "ceph_lv1",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_size": "21470642176",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "name": "ceph_lv1",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "tags": {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cluster_name": "ceph",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.crush_device_class": "",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.encrypted": "0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osd_id": "1",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.type": "block",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.vdo": "0"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            },
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "type": "block",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "vg_name": "ceph_vg1"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:        }
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:    ],
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:    "2": [
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:        {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "devices": [
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "/dev/loop5"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            ],
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_name": "ceph_lv2",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_size": "21470642176",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "name": "ceph_lv2",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "tags": {
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.cluster_name": "ceph",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.crush_device_class": "",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.encrypted": "0",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osd_id": "2",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.type": "block",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:                "ceph.vdo": "0"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            },
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "type": "block",
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:            "vg_name": "ceph_vg2"
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:        }
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]:    ]
Nov 25 11:41:40 np0005535469 relaxed_yalow[325596]: }
Nov 25 11:41:40 np0005535469 systemd[1]: libpod-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope: Deactivated successfully.
Nov 25 11:41:40 np0005535469 conmon[325596]: conmon 67652e76246d7a6f4216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope/container/memory.events
Nov 25 11:41:40 np0005535469 podman[325688]: 2025-11-25 16:41:40.869909942 +0000 UTC m=+0.032569245 container died 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.915 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-changed-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.915 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Refreshing instance network info cache due to event network-changed-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.915 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.916 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:40 np0005535469 nova_compute[254092]: 2025-11-25 16:41:40.916 254096 DEBUG nova.network.neutron [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Refreshing network info cache for port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:41:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a-merged.mount: Deactivated successfully.
Nov 25 11:41:40 np0005535469 podman[325688]: 2025-11-25 16:41:40.964706504 +0000 UTC m=+0.127365777 container remove 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:41:40 np0005535469 systemd[1]: libpod-conmon-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope: Deactivated successfully.
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.117 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Creating config drive at /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.123 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6fsf9bt5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522512571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.180 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.182 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-2',id=67,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:32Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=679f5e87-bac4-4169-bffa-555a53e7321f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.183 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.184 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.185 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 679f5e87-bac4-4169-bffa-555a53e7321f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.207 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <uuid>679f5e87-bac4-4169-bffa-555a53e7321f</uuid>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <name>instance-00000043</name>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:name>tempest-tempest.common.compute-instance-258940450-2</nova:name>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:40</nova:creationTime>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <nova:port uuid="01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <entry name="serial">679f5e87-bac4-4169-bffa-555a53e7321f</entry>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <entry name="uuid">679f5e87-bac4-4169-bffa-555a53e7321f</entry>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/679f5e87-bac4-4169-bffa-555a53e7321f_disk">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/679f5e87-bac4-4169-bffa-555a53e7321f_disk.config">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:da:f6:54"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <target dev="tap01b0ba79-e7"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/console.log" append="off"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:41:41 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:41:41 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:41:41 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:41:41 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.208 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Preparing to wait for external event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.209 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.209 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.209 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.210 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-2',id=67,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest
-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:32Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=679f5e87-bac4-4169-bffa-555a53e7321f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.210 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.211 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.211 254096 DEBUG os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.212 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.212 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.215 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01b0ba79-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.215 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01b0ba79-e7, col_values=(('external_ids', {'iface-id': '01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:f6:54', 'vm-uuid': '679f5e87-bac4-4169-bffa-555a53e7321f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 NetworkManager[48891]: <info>  [1764088901.2180] manager: (tap01b0ba79-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.225 254096 INFO os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7')#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.259 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6fsf9bt5" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.287 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.290 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.329 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.330 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.330 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:da:f6:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.330 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Using config drive#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.350 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.486 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.487 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deleting local config drive /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:41 np0005535469 kernel: tap43b8a38b-0b: entered promiscuous mode
Nov 25 11:41:41 np0005535469 NetworkManager[48891]: <info>  [1764088901.5459] manager: (tap43b8a38b-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 25 11:41:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 302 op/s
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:41Z|00607|binding|INFO|Claiming lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a for this chassis.
Nov 25 11:41:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:41Z|00608|binding|INFO|43b8a38b-0b5a-4b7d-8043-759fa3697e8a: Claiming fa:16:3e:d3:af:21 10.100.0.10
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.586 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:af:21 10.100.0.10'], port_security=['fa:16:3e:d3:af:21 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '42be369c-5a19-4073-becc-4f28ef579c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=43b8a38b-0b5a-4b7d-8043-759fa3697e8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f bound to our chassis#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.588 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f#033[00m
Nov 25 11:41:41 np0005535469 systemd-udevd[325940]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.599 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80b048f7-380f-4051-a3f7-f1b3668fa8e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.600 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf66413c8-51 in ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.603 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf66413c8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad5900-fd8e-41b0-9c62-5d32410bff19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[814cacc1-ffd3-4320-973b-ae098f07e202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 NetworkManager[48891]: <info>  [1764088901.6074] device (tap43b8a38b-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:41 np0005535469 NetworkManager[48891]: <info>  [1764088901.6084] device (tap43b8a38b-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:41 np0005535469 systemd-machined[216343]: New machine qemu-77-instance-00000042.
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.614286556 +0000 UTC m=+0.061331164 container create 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.615 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[823c3ef7-6bd1-4a67-8f31-b72717b2469e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 systemd[1]: Started Virtual Machine qemu-77-instance-00000042.
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.641 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e885240-96c8-42b4-8a8b-ef851dda0bf5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 systemd[1]: Started libpod-conmon-6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5.scope.
Nov 25 11:41:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:41Z|00609|binding|INFO|Setting lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a ovn-installed in OVS
Nov 25 11:41:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:41Z|00610|binding|INFO|Setting lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a up in Southbound
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.583135961 +0000 UTC m=+0.030180579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.680 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2e6044-390c-4b17-802d-39c09baae9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 systemd-udevd[325946]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:41 np0005535469 NetworkManager[48891]: <info>  [1764088901.6920] manager: (tapf66413c8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/270)
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.689 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[49e462e0-ec91-4fe0-a3e2-6aee87b66e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.711185295 +0000 UTC m=+0.158229913 container init 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.724785174 +0000 UTC m=+0.171829772 container start 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.728303129 +0000 UTC m=+0.175347717 container attach 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:41:41 np0005535469 vigilant_newton[325955]: 167 167
Nov 25 11:41:41 np0005535469 systemd[1]: libpod-6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5.scope: Deactivated successfully.
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.732424191 +0000 UTC m=+0.179468789 container died 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.740 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5414a6-480b-48c0-b334-f6f2f6c5b3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.744 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[96b8f2c1-16da-4e9c-ac38-4a45b1defbdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ec0f19aa4ce3910aa6a4c3434eebe4245f1cbc1562e5de76bd7584f806c2cb70-merged.mount: Deactivated successfully.
Nov 25 11:41:41 np0005535469 NetworkManager[48891]: <info>  [1764088901.7768] device (tapf66413c8-50): carrier: link connected
Nov 25 11:41:41 np0005535469 podman[325925]: 2025-11-25 16:41:41.779073576 +0000 UTC m=+0.226118174 container remove 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.784 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f899a85b-bdf6-4ce6-b545-68c810fa1055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.812 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50db7d6a-6daa-4910-9530-beb872984019]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326001, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 systemd[1]: libpod-conmon-6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5.scope: Deactivated successfully.
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e549a84f-b3f3-467e-a77d-bfca89166e11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535936, 'tstamp': 535936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326003, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.869 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17b1d668-c8f1-4624-baf9-564ead40b257]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326004, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.900 254096 DEBUG nova.network.neutron [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updated VIF entry in instance network info cache for port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.901 254096 DEBUG nova.network.neutron [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updating instance_info_cache with network_info: [{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.919 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8938a777-61cf-4477-8c59-a660d0b01a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:41 np0005535469 nova_compute[254092]: 2025-11-25 16:41:41.923 254096 DEBUG oslo_concurrency.lockutils [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.022 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Creating config drive at /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.029 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp36q8xz_q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.022 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf36cf2-4b25-4975-83fe-8e4e49df1b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.035 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.038 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.039 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:42 np0005535469 NetworkManager[48891]: <info>  [1764088902.0415] manager: (tapf66413c8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Nov 25 11:41:42 np0005535469 kernel: tapf66413c8-50: entered promiscuous mode
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.051 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:42Z|00611|binding|INFO|Releasing lport 347c541f-24a8-4230-9881-74343160f6c8 from this chassis (sb_readonly=0)
Nov 25 11:41:42 np0005535469 podman[326013]: 2025-11-25 16:41:42.059018831 +0000 UTC m=+0.110861808 container create 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:42 np0005535469 podman[326013]: 2025-11-25 16:41:41.975789584 +0000 UTC m=+0.027632601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.092 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.098 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d5d1b1-12c3-42e3-a1fa-0df60d5f0249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.098 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.099 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'env', 'PROCESS_TAG=haproxy-f66413c8-5cde-4f70-af70-6b7886c1219f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f66413c8-5cde-4f70-af70-6b7886c1219f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:41:42 np0005535469 systemd[1]: Started libpod-conmon-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope.
Nov 25 11:41:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.185 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp36q8xz_q" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:42 np0005535469 podman[326013]: 2025-11-25 16:41:42.199582434 +0000 UTC m=+0.251425441 container init 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 11:41:42 np0005535469 podman[326013]: 2025-11-25 16:41:42.212518176 +0000 UTC m=+0.264361163 container start 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.227 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:42 np0005535469 podman[326013]: 2025-11-25 16:41:42.241083601 +0000 UTC m=+0.292926608 container attach 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.241 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.280 254096 DEBUG oslo_concurrency.lockutils [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.281 254096 DEBUG oslo_concurrency.lockutils [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.281 254096 DEBUG nova.compute.manager [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.294 254096 DEBUG nova.compute.manager [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.296 254096 DEBUG nova.objects.instance [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'flavor' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.356 254096 DEBUG nova.virt.libvirt.driver [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.432 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088902.4318604, 42be369c-5a19-4073-becc-4f28ef579c2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.432 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Started (Lifecycle Event)#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.435 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.435 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deleting local config drive /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.453 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.464 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088902.4319508, 42be369c-5a19-4073-becc-4f28ef579c2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.465 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.485 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.490 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:41:42 np0005535469 kernel: tap01b0ba79-e7: entered promiscuous mode
Nov 25 11:41:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:42Z|00612|binding|INFO|Claiming lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for this chassis.
Nov 25 11:41:42 np0005535469 systemd-udevd[325969]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:42Z|00613|binding|INFO|01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb: Claiming fa:16:3e:da:f6:54 10.100.0.4
Nov 25 11:41:42 np0005535469 NetworkManager[48891]: <info>  [1764088902.4965] manager: (tap01b0ba79-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Nov 25 11:41:42 np0005535469 NetworkManager[48891]: <info>  [1764088902.5067] device (tap01b0ba79-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:41:42 np0005535469 NetworkManager[48891]: <info>  [1764088902.5073] device (tap01b0ba79-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.509 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:f6:54 10.100.0.4'], port_security=['fa:16:3e:da:f6:54 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '679f5e87-bac4-4169-bffa-555a53e7321f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.513 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:41:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:42Z|00614|binding|INFO|Setting lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb ovn-installed in OVS
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:42Z|00615|binding|INFO|Setting lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb up in Southbound
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:42 np0005535469 systemd-machined[216343]: New machine qemu-78-instance-00000043.
Nov 25 11:41:42 np0005535469 systemd[1]: Started Virtual Machine qemu-78-instance-00000043.
Nov 25 11:41:42 np0005535469 podman[326150]: 2025-11-25 16:41:42.551600855 +0000 UTC m=+0.078671885 container create a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:42 np0005535469 podman[326150]: 2025-11-25 16:41:42.514967581 +0000 UTC m=+0.042038631 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:41:42 np0005535469 systemd[1]: Started libpod-conmon-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce.scope.
Nov 25 11:41:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:41:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9347ecdd347217bc82bfdd8002e184ed71c674b5a74b2d55a33c2bf2b613a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:41:42 np0005535469 podman[326150]: 2025-11-25 16:41:42.668541438 +0000 UTC m=+0.195612498 container init a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:41:42 np0005535469 podman[326150]: 2025-11-25 16:41:42.67672506 +0000 UTC m=+0.203796090 container start a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:41:42 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : New worker (326187) forked
Nov 25 11:41:42 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : Loading success.
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.732 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.734 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.750 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[211299be-1204-4825-beff-5100701836de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.779 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[653f5974-aadc-48fb-af2f-2e65bb81d869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.783 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ea34ec01-1097-4c9a-8c12-f9a7e5d9f9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.809 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9242c1d6-0fac-4fbe-9812-e56208cbd6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.831 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feac263e-96bf-4a61-9048-51a6cbb03518]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326201, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.847 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b01e0d3d-1345-4240-8707-e872e7f3f892]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535955, 'tstamp': 535955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326202, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535960, 'tstamp': 535960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326202, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.848 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.851 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.851 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.980 254096 DEBUG nova.network.neutron [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updated VIF entry in instance network info cache for port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:41:42 np0005535469 nova_compute[254092]: 2025-11-25 16:41:42.980 254096 DEBUG nova.network.neutron [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updating instance_info_cache with network_info: [{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.001 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.001 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.002 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.002 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.002 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Processing event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.004 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.004 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] No waiting events found dispatching network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.004 254096 WARNING nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received unexpected event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.005 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.010 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.0105665, a9e83b59-224b-49dd-83f7-d057737f5825 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.011 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.013 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.023 254096 INFO nova.virt.libvirt.driver [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance spawned successfully.#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.024 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.041 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Processing event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] No waiting events found dispatching network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 WARNING nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received unexpected event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a for instance with vm_state building and task_state spawning.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.049 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Processing event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.049 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.051 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.055 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.055 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.056 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.056 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.056 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.057 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.061 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.074 254096 INFO nova.virt.libvirt.driver [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance spawned successfully.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.075 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.081 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.082 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.0537705, 42be369c-5a19-4073-becc-4f28ef579c2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.082 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Resumed (Lifecycle Event)
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.101 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.101 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.102 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.102 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.103 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.103 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.108 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.114 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.149 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.167 254096 INFO nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 14.73 seconds to spawn the instance on the hypervisor.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.168 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.183 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 11.63 seconds to spawn the instance on the hypervisor.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.183 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.277 254096 INFO nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 16.38 seconds to build instance.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.306 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.308 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 13.12 seconds to build instance.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.327 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]: {
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "osd_id": 1,
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "type": "bluestore"
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:    },
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "osd_id": 2,
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "type": "bluestore"
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:    },
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "osd_id": 0,
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:        "type": "bluestore"
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]:    }
Nov 25 11:41:43 np0005535469 distracted_knuth[326044]: }
Nov 25 11:41:43 np0005535469 systemd[1]: libpod-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope: Deactivated successfully.
Nov 25 11:41:43 np0005535469 systemd[1]: libpod-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope: Consumed 1.099s CPU time.
Nov 25 11:41:43 np0005535469 podman[326013]: 2025-11-25 16:41:43.484956526 +0000 UTC m=+1.536799513 container died 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:41:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b-merged.mount: Deactivated successfully.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.534 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.5335133, 679f5e87-bac4-4169-bffa-555a53e7321f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.534 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Started (Lifecycle Event)
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.536 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:41:43 np0005535469 podman[326013]: 2025-11-25 16:41:43.546172507 +0000 UTC m=+1.598015484 container remove 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.549 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.553 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.560 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.563 254096 INFO nova.virt.libvirt.driver [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance spawned successfully.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.563 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.569 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.570 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.533881, 679f5e87-bac4-4169-bffa-555a53e7321f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.570 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Paused (Lifecycle Event)
Nov 25 11:41:43 np0005535469 systemd[1]: libpod-conmon-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope: Deactivated successfully.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.584 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.587 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.588 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.589 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.589 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.590 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.590 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.597 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.5388157, 679f5e87-bac4-4169-bffa-555a53e7321f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.597 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Resumed (Lifecycle Event)
Nov 25 11:41:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:41:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:41:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:41:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 24f50b42-530a-4a50-8873-51fc98660269 does not exist
Nov 25 11:41:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4cade59b-ea21-477c-bd2a-9149784902a6 does not exist
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.620 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.626 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.642 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.648 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 11.16 seconds to spawn the instance on the hypervisor.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.649 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.714 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 13.28 seconds to build instance.
Nov 25 11:41:43 np0005535469 nova_compute[254092]: 2025-11-25 16:41:43.729 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:43Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:c0:a9 10.100.0.8
Nov 25 11:41:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:43Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:c0:a9 10.100.0.8
Nov 25 11:41:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:41:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:41:45 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 11:41:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 345 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 5.2 MiB/s wr, 401 op/s
Nov 25 11:41:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:45 np0005535469 nova_compute[254092]: 2025-11-25 16:41:45.986 254096 DEBUG nova.compute.manager [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:41:45 np0005535469 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG oslo_concurrency.lockutils [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:45 np0005535469 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG oslo_concurrency.lockutils [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:45 np0005535469 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG oslo_concurrency.lockutils [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:45 np0005535469 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG nova.compute.manager [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] No waiting events found dispatching network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:41:45 np0005535469 nova_compute[254092]: 2025-11-25 16:41:45.988 254096 WARNING nova.compute.manager [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received unexpected event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for instance with vm_state active and task_state None.
Nov 25 11:41:46 np0005535469 nova_compute[254092]: 2025-11-25 16:41:46.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.473 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.474 254096 INFO nova.compute.manager [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Terminating instance
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.475 254096 DEBUG nova.compute.manager [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:41:47 np0005535469 kernel: tap43b8a38b-0b (unregistering): left promiscuous mode
Nov 25 11:41:47 np0005535469 NetworkManager[48891]: <info>  [1764088907.5301] device (tap43b8a38b-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00616|binding|INFO|Releasing lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a from this chassis (sb_readonly=0)
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00617|binding|INFO|Setting lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a down in Southbound
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00618|binding|INFO|Removing iface tap43b8a38b-0b ovn-installed in OVS
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.556 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:af:21 10.100.0.10'], port_security=['fa:16:3e:d3:af:21 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '42be369c-5a19-4073-becc-4f28ef579c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=43b8a38b-0b5a-4b7d-8043-759fa3697e8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.557 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 11:41:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 362 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 312 op/s
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.562 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 25 11:41:47 np0005535469 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000042.scope: Consumed 4.702s CPU time.
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b02b4f4-9141-43aa-b7c0-ec1e71759b2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:47 np0005535469 systemd-machined[216343]: Machine qemu-77-instance-00000042 terminated.
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:61:e3 10.100.0.13
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:61:e3 10.100.0.13
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.645 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc247d4-1e16-443d-9062-0e5db0024b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.648 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[329d5092-ce08-4bdc-872d-1fe3ee3993bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.682 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.684 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.684 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.685 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.685 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.686 254096 INFO nova.compute.manager [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Terminating instance
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.687 254096 DEBUG nova.compute.manager [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.686 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9fb4a5-c848-49a2-8640-cdca12fc0ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.713 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0367a6-9e8e-4215-991e-593eeda62551]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326350, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.718 254096 INFO nova.virt.libvirt.driver [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance destroyed successfully.
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.718 254096 DEBUG nova.objects.instance [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid 42be369c-5a19-4073-becc-4f28ef579c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.728 254096 DEBUG nova.virt.libvirt.vif [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-1',id=66,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:43Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=42be369c-5a19-4073-becc-4f28ef579c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.728 254096 DEBUG nova.network.os_vif_util [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.729 254096 DEBUG nova.network.os_vif_util [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.729 254096 DEBUG os_vif [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.732 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43b8a38b-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.735 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f56533-92db-4c79-8e16-25e821a4ba8a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535955, 'tstamp': 535955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326360, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535960, 'tstamp': 535960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326360, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.737 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.740 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.740 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.740 254096 INFO os_vif [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b')#033[00m
Nov 25 11:41:47 np0005535469 kernel: tap01b0ba79-e7 (unregistering): left promiscuous mode
Nov 25 11:41:47 np0005535469 NetworkManager[48891]: <info>  [1764088907.7564] device (tap01b0ba79-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00619|binding|INFO|Releasing lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb from this chassis (sb_readonly=0)
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00620|binding|INFO|Setting lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb down in Southbound
Nov 25 11:41:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:47Z|00621|binding|INFO|Removing iface tap01b0ba79-e7 ovn-installed in OVS
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.776 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:f6:54 10.100.0.4'], port_security=['fa:16:3e:da:f6:54 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '679f5e87-bac4-4169-bffa-555a53e7321f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.778 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.780 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f66413c8-5cde-4f70-af70-6b7886c1219f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[613646d4-ffd5-4059-9e48-dd9ef6772119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.781 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace which is not needed anymore#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:47 np0005535469 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 25 11:41:47 np0005535469 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000043.scope: Consumed 4.827s CPU time.
Nov 25 11:41:47 np0005535469 systemd-machined[216343]: Machine qemu-78-instance-00000043 terminated.
Nov 25 11:41:47 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : haproxy version is 2.8.14-c23fe91
Nov 25 11:41:47 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : path to executable is /usr/sbin/haproxy
Nov 25 11:41:47 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [WARNING]  (326185) : Exiting Master process...
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.925 254096 INFO nova.virt.libvirt.driver [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance destroyed successfully.#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.926 254096 DEBUG nova.objects.instance [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid 679f5e87-bac4-4169-bffa-555a53e7321f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:47 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [ALERT]    (326185) : Current worker (326187) exited with code 143 (Terminated)
Nov 25 11:41:47 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [WARNING]  (326185) : All workers exited. Exiting... (0)
Nov 25 11:41:47 np0005535469 systemd[1]: libpod-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce.scope: Deactivated successfully.
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.939 254096 DEBUG nova.virt.libvirt.vif [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-2',id=67,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T16:41:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:43Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=679f5e87-bac4-4169-bffa-555a53e7321f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.940 254096 DEBUG nova.network.os_vif_util [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.941 254096 DEBUG nova.network.os_vif_util [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.941 254096 DEBUG os_vif [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.945 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b0ba79-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:47 np0005535469 podman[326399]: 2025-11-25 16:41:47.947607133 +0000 UTC m=+0.071709755 container died a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:47 np0005535469 nova_compute[254092]: 2025-11-25 16:41:47.955 254096 INFO os_vif [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7')#033[00m
Nov 25 11:41:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce-userdata-shm.mount: Deactivated successfully.
Nov 25 11:41:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8b9347ecdd347217bc82bfdd8002e184ed71c674b5a74b2d55a33c2bf2b613a2-merged.mount: Deactivated successfully.
Nov 25 11:41:48 np0005535469 podman[326399]: 2025-11-25 16:41:48.005005081 +0000 UTC m=+0.129107693 container cleanup a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:41:48 np0005535469 systemd[1]: libpod-conmon-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce.scope: Deactivated successfully.
Nov 25 11:41:48 np0005535469 podman[326460]: 2025-11-25 16:41:48.081858726 +0000 UTC m=+0.051895719 container remove a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.089 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42ba9198-91bb-4791-a7b4-1d8b1c7fe0c8]: (4, ('Tue Nov 25 04:41:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce)\na9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce\nTue Nov 25 04:41:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce)\na9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.091 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0698f8e2-2aa4-4df8-b546-05bf9263d48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.092 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:48 np0005535469 kernel: tapf66413c8-50: left promiscuous mode
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64fc119c-694b-4448-8907-3e255dee13b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.133 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66f366bc-c7c9-4bc2-873c-bdfb8ce8dd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4c93cf-1e4c-4406-9053-7439612f90ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18d8ec7f-b718-4bae-906a-c029d14bb664]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535926, 'reachable_time': 23963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326475, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 systemd[1]: run-netns-ovnmeta\x2df66413c8\x2d5cde\x2d4f70\x2daf70\x2d6b7886c1219f.mount: Deactivated successfully.
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.154 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:41:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.155 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[054829f6-b9ae-4da6-82e2-571811998bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.168 254096 INFO nova.virt.libvirt.driver [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deleting instance files /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c_del#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.169 254096 INFO nova.virt.libvirt.driver [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deletion of /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c_del complete#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.250 254096 INFO nova.compute.manager [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.250 254096 DEBUG oslo.service.loopingcall [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.251 254096 DEBUG nova.compute.manager [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.251 254096 DEBUG nova.network.neutron [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.501 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-unplugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] No waiting events found dispatching network-vif-unplugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-unplugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] No waiting events found dispatching network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 WARNING nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received unexpected event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.716 254096 INFO nova.virt.libvirt.driver [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deleting instance files /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f_del#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.717 254096 INFO nova.virt.libvirt.driver [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deletion of /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f_del complete#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.806 254096 INFO nova.compute.manager [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.806 254096 DEBUG oslo.service.loopingcall [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.807 254096 DEBUG nova.compute.manager [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:41:48 np0005535469 nova_compute[254092]: 2025-11-25 16:41:48.807 254096 DEBUG nova.network.neutron [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.012 254096 DEBUG nova.network.neutron [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.031 254096 INFO nova.compute.manager [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 0.78 seconds to deallocate network for instance.#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.079 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.079 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.128 254096 DEBUG nova.compute.manager [req-204250bd-1497-4235-9a40-e758533d271a req-938236db-821c-435d-907b-095f1eecad69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-deleted-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.244 254096 DEBUG oslo_concurrency.processutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.281 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.281 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.282 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.282 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.282 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.285 254096 INFO nova.compute.manager [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Terminating instance#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.287 254096 DEBUG nova.compute.manager [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:41:49 np0005535469 kernel: tap63c5b67d-9c (unregistering): left promiscuous mode
Nov 25 11:41:49 np0005535469 NetworkManager[48891]: <info>  [1764088909.3244] device (tap63c5b67d-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:41:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:49Z|00622|binding|INFO|Releasing lport 63c5b67d-9c2e-4371-887d-db0d034d9072 from this chassis (sb_readonly=0)
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:49Z|00623|binding|INFO|Setting lport 63c5b67d-9c2e-4371-887d-db0d034d9072 down in Southbound
Nov 25 11:41:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:49Z|00624|binding|INFO|Removing iface tap63c5b67d-9c ovn-installed in OVS
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.355 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:43:17 10.100.0.7'], port_security=['fa:16:3e:1b:43:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a9e83b59-224b-49dd-83f7-d057737f5825', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63c5b67d-9c2e-4371-887d-db0d034d9072) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.356 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63c5b67d-9c2e-4371-887d-db0d034d9072 in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.359 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e9eb3b-aac1-4428-b7e2-83072742ae37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.360 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore#033[00m
Nov 25 11:41:49 np0005535469 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 25 11:41:49 np0005535469 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000041.scope: Consumed 6.390s CPU time.
Nov 25 11:41:49 np0005535469 systemd-machined[216343]: Machine qemu-76-instance-00000041 terminated.
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : haproxy version is 2.8.14-c23fe91
Nov 25 11:41:49 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : path to executable is /usr/sbin/haproxy
Nov 25 11:41:49 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [WARNING]  (325456) : Exiting Master process...
Nov 25 11:41:49 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [WARNING]  (325456) : Exiting Master process...
Nov 25 11:41:49 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [ALERT]    (325456) : Current worker (325460) exited with code 143 (Terminated)
Nov 25 11:41:49 np0005535469 systemd[1]: libpod-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9.scope: Deactivated successfully.
Nov 25 11:41:49 np0005535469 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [WARNING]  (325456) : All workers exited. Exiting... (0)
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.513 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.518 254096 INFO nova.virt.libvirt.driver [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance destroyed successfully.#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.519 254096 DEBUG nova.objects.instance [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid a9e83b59-224b-49dd-83f7-d057737f5825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:49 np0005535469 podman[326519]: 2025-11-25 16:41:49.520936647 +0000 UTC m=+0.068699475 container died fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.528 254096 DEBUG nova.network.neutron [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.530 254096 DEBUG nova.virt.libvirt.vif [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-695586713',display_name='tempest-ServerDiskConfigTestJSON-server-695586713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-695586713',id=65,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-mslb6gvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:47Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=a9e83b59-224b-49dd-83f7-d057737f5825,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.531 254096 DEBUG nova.network.os_vif_util [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.532 254096 DEBUG nova.network.os_vif_util [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.532 254096 DEBUG os_vif [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.535 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63c5b67d-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.541 254096 INFO os_vif [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c')#033[00m
Nov 25 11:41:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9-userdata-shm.mount: Deactivated successfully.
Nov 25 11:41:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-549e2f31ca8fd68d7bd7aa09e114cdc7531e725e9150fc3ab7f41d99c3d361aa-merged.mount: Deactivated successfully.
Nov 25 11:41:49 np0005535469 podman[326519]: 2025-11-25 16:41:49.560435289 +0000 UTC m=+0.108198117 container cleanup fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:41:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 362 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.564 254096 INFO nova.compute.manager [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 0.76 seconds to deallocate network for instance.#033[00m
Nov 25 11:41:49 np0005535469 systemd[1]: libpod-conmon-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9.scope: Deactivated successfully.
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.613 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:49 np0005535469 podman[326574]: 2025-11-25 16:41:49.631597819 +0000 UTC m=+0.048366303 container remove fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.637 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[76fa45b1-f255-436c-ac3a-96000b277335]: (4, ('Tue Nov 25 04:41:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9)\nfd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9\nTue Nov 25 04:41:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9)\nfd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d15554f5-6f90-4e76-9518-f678396a431f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.639 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:49 np0005535469 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163470565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[828af609-3003-435c-845b-0ad134c0aa0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:49Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 11:41:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:49Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3dffe21d-c05c-4e18-b79d-770dac77c415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.719 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28244552-aac2-409c-9331-96c5cdf619e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.732 254096 DEBUG oslo_concurrency.processutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.739 254096 DEBUG nova.compute.provider_tree [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28af4fcd-1af9-488a-86fe-b356f5f38fcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535589, 'reachable_time': 34979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326594, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.742 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:41:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.742 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6c75b4-6a6b-44c6-948b-43e135331690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:49 np0005535469 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.759 254096 DEBUG nova.scheduler.client.report [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.780 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.783 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.828 254096 INFO nova.scheduler.client.report [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance 42be369c-5a19-4073-becc-4f28ef579c2c#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.895 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.907 254096 DEBUG oslo_concurrency.processutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.966 254096 INFO nova.virt.libvirt.driver [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deleting instance files /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825_del#033[00m
Nov 25 11:41:49 np0005535469 nova_compute[254092]: 2025-11-25 16:41:49.967 254096 INFO nova.virt.libvirt.driver [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deletion of /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825_del complete#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.011 254096 INFO nova.compute.manager [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.012 254096 DEBUG oslo.service.loopingcall [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.012 254096 DEBUG nova.compute.manager [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.012 254096 DEBUG nova.network.neutron [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:41:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727871419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.333 254096 DEBUG oslo_concurrency.processutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.338 254096 DEBUG nova.compute.provider_tree [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.351 254096 DEBUG nova.scheduler.client.report [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.374 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.412 254096 INFO nova.scheduler.client.report [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance 679f5e87-bac4-4169-bffa-555a53e7321f#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.479 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.583 254096 DEBUG nova.network.neutron [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.601 254096 INFO nova.compute.manager [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 0.59 seconds to deallocate network for instance.#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-unplugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] No waiting events found dispatching network-vif-unplugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 WARNING nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received unexpected event network-vif-unplugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] No waiting events found dispatching network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 WARNING nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received unexpected event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-unplugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] No waiting events found dispatching network-vif-unplugged-63c5b67d-9c2e-4371-887d-db0d034d9072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-unplugged-63c5b67d-9c2e-4371-887d-db0d034d9072 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-deleted-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.611 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] No waiting events found dispatching network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.611 254096 WARNING nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received unexpected event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.657 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.657 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:50 np0005535469 nova_compute[254092]: 2025-11-25 16:41:50.794 254096 DEBUG oslo_concurrency.processutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441204567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002612369867975516 of space, bias 1.0, pg target 0.7837109603926548 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:41:51 np0005535469 nova_compute[254092]: 2025-11-25 16:41:51.228 254096 DEBUG oslo_concurrency.processutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:51 np0005535469 nova_compute[254092]: 2025-11-25 16:41:51.236 254096 DEBUG nova.compute.provider_tree [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:51 np0005535469 nova_compute[254092]: 2025-11-25 16:41:51.257 254096 DEBUG nova.scheduler.client.report [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:51 np0005535469 nova_compute[254092]: 2025-11-25 16:41:51.287 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:51 np0005535469 nova_compute[254092]: 2025-11-25 16:41:51.322 254096 INFO nova.scheduler.client.report [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Deleted allocations for instance a9e83b59-224b-49dd-83f7-d057737f5825#033[00m
Nov 25 11:41:51 np0005535469 nova_compute[254092]: 2025-11-25 16:41:51.412 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 279 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 493 op/s
Nov 25 11:41:52 np0005535469 nova_compute[254092]: 2025-11-25 16:41:52.479 254096 DEBUG nova.virt.libvirt.driver [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:41:52 np0005535469 nova_compute[254092]: 2025-11-25 16:41:52.865 254096 DEBUG nova.compute.manager [req-7b3eeca5-12f5-4590-9c34-2f197e641e67 req-f878ab69-620c-46c0-a16a-4bb3d3f2eb29 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-deleted-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 279 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 475 op/s
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.590 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.590 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.611 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.631 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.632 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.658 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.693 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.694 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.704 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.705 254096 INFO nova.compute.claims [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.747 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:53 np0005535469 nova_compute[254092]: 2025-11-25 16:41:53.891 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422864745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.377 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.384 254096 DEBUG nova.compute.provider_tree [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.401 254096 DEBUG nova.scheduler.client.report [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.423 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.424 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.427 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.437 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.437 254096 INFO nova.compute.claims [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.500 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.500 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.539 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.562 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.670 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.709 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.711 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.712 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Creating image(s)#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.734 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.758 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.789 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.795 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.842 254096 DEBUG nova.policy [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:41:54 np0005535469 kernel: tap5136afea-10 (unregistering): left promiscuous mode
Nov 25 11:41:54 np0005535469 NetworkManager[48891]: <info>  [1764088914.8573] device (tap5136afea-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:54Z|00625|binding|INFO|Releasing lport 5136afea-102e-46a1-8fdb-0af970c5af04 from this chassis (sb_readonly=0)
Nov 25 11:41:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:54Z|00626|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 down in Southbound
Nov 25 11:41:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:54Z|00627|binding|INFO|Removing iface tap5136afea-10 ovn-installed in OVS
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.876 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.877 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis#033[00m
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.879 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.900 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.901 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.902 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.902 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85c706d9-0b9a-44ec-9cf9-877b85176e19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:54 np0005535469 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 11:41:54 np0005535469 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003e.scope: Consumed 15.027s CPU time.
Nov 25 11:41:54 np0005535469 systemd-machined[216343]: Machine qemu-73-instance-0000003e terminated.
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.938 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa99468-0e68-4682-a3d9-8d2e6ffc8ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.941 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.944 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3720b7d2-d73a-44a7-a29c-fc6c5caa14dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:54 np0005535469 nova_compute[254092]: 2025-11-25 16:41:54.958 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.975 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ef225fec-827d-4beb-955e-14e82e70d1f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.999 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b3c049-9b40-4e91-857a-fd9c097dc8ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326770, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99a10069-eb53-44dd-adcc-dad173bfbe13]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326771, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326771, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.021 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.023 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.029 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.029 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:41:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614987473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.164 254096 DEBUG nova.compute.manager [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-unplugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.165 254096 DEBUG oslo_concurrency.lockutils [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.165 254096 DEBUG oslo_concurrency.lockutils [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.166 254096 DEBUG oslo_concurrency.lockutils [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.167 254096 DEBUG nova.compute.manager [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-unplugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.169 254096 WARNING nova.compute.manager [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-unplugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.172 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.186 254096 DEBUG nova.compute.provider_tree [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.214 254096 DEBUG nova.scheduler.client.report [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920058783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920058783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.264 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.266 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.339 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.339 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.384 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.416 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.465 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.553 254096 INFO nova.virt.libvirt.driver [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.556 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.557 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.557 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Creating image(s)#033[00m
Nov 25 11:41:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 279 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 478 op/s
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.577 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.598 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.619 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.623 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.670 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.780 254096 DEBUG nova.policy [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.784 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.785 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Successfully created port: 9ef3b6ce-50de-444f-b27f-b16a5b2b832a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.789 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.844 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.848 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5e2a584-5835-4c63-84de-6f0446220d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.890 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance destroyed successfully.#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.891 254096 DEBUG nova.objects.instance [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'numa_topology' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.906 254096 DEBUG nova.compute.manager [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.983 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 796c46a8-971c-4b51-96c9-0e7c8682cfa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.998 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.999 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Ensure instance console log exists: /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:55 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.999 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:56 np0005535469 nova_compute[254092]: 2025-11-25 16:41:55.999 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:56 np0005535469 nova_compute[254092]: 2025-11-25 16:41:56.000 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:56 np0005535469 nova_compute[254092]: 2025-11-25 16:41:56.022 254096 DEBUG oslo_concurrency.lockutils [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:56 np0005535469 nova_compute[254092]: 2025-11-25 16:41:56.784 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5e2a584-5835-4c63-84de-6f0446220d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.936s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:56 np0005535469 nova_compute[254092]: 2025-11-25 16:41:56.850 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.256 254096 DEBUG nova.compute.manager [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.256 254096 DEBUG oslo_concurrency.lockutils [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.257 254096 DEBUG oslo_concurrency.lockutils [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.258 254096 DEBUG oslo_concurrency.lockutils [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.258 254096 DEBUG nova.compute.manager [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.259 254096 WARNING nova.compute.manager [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.317 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Successfully created port: d2de6446-cca8-4827-a039-647fe671bab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.322 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid b5e2a584-5835-4c63-84de-6f0446220d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.336 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.336 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Ensure instance console log exists: /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.337 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.337 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.338 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.413 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Successfully updated port: 9ef3b6ce-50de-444f-b27f-b16a5b2b832a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.433 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.433 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.433 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:41:57Z|00628|binding|INFO|Releasing lport 57c889f7-e44b-4f52-8e8a-db17b4e1f3b8 from this chassis (sb_readonly=0)
Nov 25 11:41:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 303 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.4 MiB/s wr, 341 op/s
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:57 np0005535469 nova_compute[254092]: 2025-11-25 16:41:57.679 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.183 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'flavor' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.201 254096 DEBUG oslo_concurrency.lockutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.202 254096 DEBUG oslo_concurrency.lockutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.202 254096 DEBUG nova.network.neutron [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.202 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'info_cache' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.534 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Successfully updated port: d2de6446-cca8-4827-a039-647fe671bab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.560 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.561 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.561 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.805 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:41:58 np0005535469 nova_compute[254092]: 2025-11-25 16:41:58.979 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updating instance_info_cache with network_info: [{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.060 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.061 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance network_info: |[{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.064 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start _get_guest_xml network_info=[{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.072 254096 WARNING nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.077 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.078 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.083 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.084 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.084 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.085 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.085 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.085 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.092 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.356 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-changed-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Refreshing instance network info cache due to event network-changed-9ef3b6ce-50de-444f-b27f-b16a5b2b832a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Refreshing network info cache for port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:41:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 303 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 263 op/s
Nov 25 11:41:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:41:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125572864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.625 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.646 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.650 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.763 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updating instance_info_cache with network_info: [{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.886 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.886 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance network_info: |[{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.888 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start _get_guest_xml network_info=[{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.892 254096 WARNING nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.896 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.896 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.899 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.900 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.900 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.901 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.901 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:41:59 np0005535469 nova_compute[254092]: 2025-11-25 16:41:59.907 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925887909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.109 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.111 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-1',id=68,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:54Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=796c46a8-971c-4b51-96c9-0e7c8682cfa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.111 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.112 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.113 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 796c46a8-971c-4b51-96c9-0e7c8682cfa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.124 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <uuid>796c46a8-971c-4b51-96c9-0e7c8682cfa8</uuid>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <name>instance-00000044</name>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:name>tempest-MultipleCreateTestJSON-server-1199053129-1</nova:name>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:59</nova:creationTime>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:port uuid="9ef3b6ce-50de-444f-b27f-b16a5b2b832a">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="serial">796c46a8-971c-4b51-96c9-0e7c8682cfa8</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="uuid">796c46a8-971c-4b51-96c9-0e7c8682cfa8</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b9:ae:2e"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <target dev="tap9ef3b6ce-50"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/console.log" append="off"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:42:00 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:42:00 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.125 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Preparing to wait for external event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.125 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.126 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.126 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.127 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-1',id=68,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:54Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=796c46a8-971c-4b51-96c9-0e7c8682cfa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.127 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.127 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.128 254096 DEBUG os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.129 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.129 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ef3b6ce-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.132 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ef3b6ce-50, col_values=(('external_ids', {'iface-id': '9ef3b6ce-50de-444f-b27f-b16a5b2b832a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:ae:2e', 'vm-uuid': '796c46a8-971c-4b51-96c9-0e7c8682cfa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 NetworkManager[48891]: <info>  [1764088920.1343] manager: (tap9ef3b6ce-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.146 254096 INFO os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50')#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.218 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.218 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.218 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:b9:ae:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.219 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Using config drive#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.242 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.261 254096 DEBUG nova.network.neutron [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.276 254096 DEBUG oslo_concurrency.lockutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.297 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance destroyed successfully.#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.297 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'numa_topology' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.309 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.323 254096 DEBUG nova.virt.libvirt.vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.323 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.324 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.324 254096 DEBUG os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.326 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5136afea-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.334 254096 INFO os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.340 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start _get_guest_xml network_info=[{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.342 254096 WARNING nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238201947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.368 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.369 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.372 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.372 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.373 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.373 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.374 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.374 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.376 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.376 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.376 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.377 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.377 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.384 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.402 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.408 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.438 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207363251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788681208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.865 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.867 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-2',id=69,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=b5e2a584-5835-4c63-84de-6f0446220d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.868 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.869 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.870 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5e2a584-5835-4c63-84de-6f0446220d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.878 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.914 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <uuid>b5e2a584-5835-4c63-84de-6f0446220d35</uuid>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <name>instance-00000045</name>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:name>tempest-MultipleCreateTestJSON-server-1199053129-2</nova:name>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:41:59</nova:creationTime>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <nova:port uuid="d2de6446-cca8-4827-a039-647fe671bab4">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="serial">b5e2a584-5835-4c63-84de-6f0446220d35</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="uuid">b5e2a584-5835-4c63-84de-6f0446220d35</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b5e2a584-5835-4c63-84de-6f0446220d35_disk">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b5e2a584-5835-4c63-84de-6f0446220d35_disk.config">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:a1:2d:7c"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <target dev="tapd2de6446-cc"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/console.log" append="off"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:42:00 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:42:00 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:42:00 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:42:00 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.915 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Preparing to wait for external event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.916 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.916 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.917 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.918 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-2',id=69,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=b5e2a584-5835-4c63-84de-6f0446220d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.918 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.919 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.919 254096 DEBUG os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.920 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.921 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.926 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.958 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2de6446-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.958 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2de6446-cc, col_values=(('external_ids', {'iface-id': 'd2de6446-cca8-4827-a039-647fe671bab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:2d:7c', 'vm-uuid': 'b5e2a584-5835-4c63-84de-6f0446220d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:00 np0005535469 NetworkManager[48891]: <info>  [1764088920.9790] manager: (tapd2de6446-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:00 np0005535469 nova_compute[254092]: 2025-11-25 16:42:00.989 254096 INFO os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc')#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.005 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Creating config drive at /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.011 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgl4etayo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.050 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updated VIF entry in instance network info cache for port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.051 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updating instance_info_cache with network_info: [{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.067 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.068 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-changed-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.068 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Refreshing instance network info cache due to event network-changed-d2de6446-cca8-4827-a039-647fe671bab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.068 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.069 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.069 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Refreshing network info cache for port d2de6446-cca8-4827-a039-647fe671bab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.157 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgl4etayo" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.187 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.193 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.277 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.278 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.278 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:a1:2d:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.279 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Using config drive#033[00m
Nov 25 11:42:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542486749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.429 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.441 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.442 254096 DEBUG nova.virt.libvirt.vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.442 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.443 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.444 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.465 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <uuid>013dc18e-57cd-4733-8e98-7d20e3b5c4db</uuid>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <name>instance-0000003e</name>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1278256596</nova:name>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:42:00</nova:creationTime>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <nova:port uuid="5136afea-102e-46a1-8fdb-0af970c5af04">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <entry name="serial">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <entry name="uuid">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:70:61:e3"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <target dev="tap5136afea-10"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/console.log" append="off"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:42:01 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:42:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:42:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:42:01 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.466 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.467 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.467 254096 DEBUG nova.virt.libvirt.vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.468 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.468 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.469 254096 DEBUG os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.473 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5136afea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.473 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5136afea-10, col_values=(('external_ids', {'iface-id': '5136afea-102e-46a1-8fdb-0af970c5af04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:61:e3', 'vm-uuid': '013dc18e-57cd-4733-8e98-7d20e3b5c4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 NetworkManager[48891]: <info>  [1764088921.4756] manager: (tap5136afea-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.486 254096 INFO os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')#033[00m
Nov 25 11:42:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.3 MiB/s wr, 314 op/s
Nov 25 11:42:01 np0005535469 NetworkManager[48891]: <info>  [1764088921.6574] manager: (tap5136afea-10): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Nov 25 11:42:01 np0005535469 kernel: tap5136afea-10: entered promiscuous mode
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:01Z|00629|binding|INFO|Claiming lport 5136afea-102e-46a1-8fdb-0af970c5af04 for this chassis.
Nov 25 11:42:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:01Z|00630|binding|INFO|5136afea-102e-46a1-8fdb-0af970c5af04: Claiming fa:16:3e:70:61:e3 10.100.0.13
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.682 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.683 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.685 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:42:01 np0005535469 systemd-udevd[327322]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:01 np0005535469 systemd-machined[216343]: New machine qemu-79-instance-0000003e.
Nov 25 11:42:01 np0005535469 NetworkManager[48891]: <info>  [1764088921.6994] device (tap5136afea-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:42:01 np0005535469 NetworkManager[48891]: <info>  [1764088921.7005] device (tap5136afea-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:42:01 np0005535469 systemd[1]: Started Virtual Machine qemu-79-instance-0000003e.
Nov 25 11:42:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:01Z|00631|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 ovn-installed in OVS
Nov 25 11:42:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:01Z|00632|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 up in Southbound
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c24338a6-8cbe-494e-bbd7-d2053a93ffd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.733 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cdab62b0-9abe-44c8-b9e9-3ed5de3b8734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.736 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d47a4063-359d-4bd2-ad60-c659874f8981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.761 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[16dcb656-1095-4ea7-8b83-a1219533fabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.780 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[189c5a0d-1112-456b-bb12-a774b53e6a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327338, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd470bf-7b56-4abd-9394-44396693faad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327339, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327339, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.798 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.800 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 nova_compute[254092]: 2025-11-25 16:42:01.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.807 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.172 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Creating config drive at /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.176 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjpj3x57 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.310 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjpj3x57" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.470 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.477 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config b5e2a584-5835-4c63-84de-6f0446220d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.512 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 013dc18e-57cd-4733-8e98-7d20e3b5c4db due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.512 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088922.3605864, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.513 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.514 254096 DEBUG nova.compute.manager [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.519 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance rebooted successfully.#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.519 254096 DEBUG nova.compute.manager [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.549 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.554 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.571 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088922.3607922, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Started (Lifecycle Event)#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.588 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.591 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.613 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.715 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088907.7149312, 42be369c-5a19-4073-becc-4f28ef579c2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.716 254096 INFO nova.compute.manager [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.730 254096 DEBUG nova.compute.manager [None req-35896b72-4f45-48d9-8633-8d97ebadc23f - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.834 254096 DEBUG nova.compute.manager [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG oslo_concurrency.lockutils [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG oslo_concurrency.lockutils [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG oslo_concurrency.lockutils [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG nova.compute.manager [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.836 254096 WARNING nova.compute.manager [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.925 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088907.924449, 679f5e87-bac4-4169-bffa-555a53e7321f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.926 254096 INFO nova.compute.manager [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.929 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.929 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deleting local config drive /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config because it was imported into RBD.#033[00m
Nov 25 11:42:02 np0005535469 nova_compute[254092]: 2025-11-25 16:42:02.948 254096 DEBUG nova.compute.manager [None req-596190c9-3316-4940-8643-0290f953124b - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:02 np0005535469 systemd-udevd[327327]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:02 np0005535469 NetworkManager[48891]: <info>  [1764088922.9927] manager: (tap9ef3b6ce-50): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Nov 25 11:42:02 np0005535469 kernel: tap9ef3b6ce-50: entered promiscuous mode
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.001 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00633|binding|INFO|Claiming lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a for this chassis.
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00634|binding|INFO|9ef3b6ce-50de-444f-b27f-b16a5b2b832a: Claiming fa:16:3e:b9:ae:2e 10.100.0.5
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.013 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:ae:2e 10.100.0.5'], port_security=['fa:16:3e:b9:ae:2e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '796c46a8-971c-4b51-96c9-0e7c8682cfa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9ef3b6ce-50de-444f-b27f-b16a5b2b832a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.014 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f bound to our chassis#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.016 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f#033[00m
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.0174] device (tap9ef3b6ce-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.0183] device (tap9ef3b6ce-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.016 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config b5e2a584-5835-4c63-84de-6f0446220d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.017 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deleting local config drive /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config because it was imported into RBD.#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.030 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1024e5-83be-42ee-9e28-c660e033113c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.032 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf66413c8-51 in ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.033 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf66413c8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.033 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6b5bad-497a-40ba-acd8-18f6f92b5225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.035 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[190ebc35-8b5a-437c-b0d3-ad1a905fb455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 systemd-machined[216343]: New machine qemu-80-instance-00000044.
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.049 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2a876f02-8414-451a-bef7-f3ee69a51bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 systemd[1]: Started Virtual Machine qemu-80-instance-00000044.
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.053 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updated VIF entry in instance network info cache for port d2de6446-cca8-4827-a039-647fe671bab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.054 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updating instance_info_cache with network_info: [{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.067 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.070 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af9e00b6-b464-4732-993c-c5b8e4fa14ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 kernel: tapd2de6446-cc: entered promiscuous mode
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.0907] manager: (tapd2de6446-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00635|binding|INFO|Claiming lport d2de6446-cca8-4827-a039-647fe671bab4 for this chassis.
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00636|binding|INFO|d2de6446-cca8-4827-a039-647fe671bab4: Claiming fa:16:3e:a1:2d:7c 10.100.0.4
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00637|binding|INFO|Setting lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a ovn-installed in OVS
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00638|binding|INFO|Setting lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a up in Southbound
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.1055] device (tapd2de6446-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.102 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:2d:7c 10.100.0.4'], port_security=['fa:16:3e:a1:2d:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5e2a584-5835-4c63-84de-6f0446220d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d2de6446-cca8-4827-a039-647fe671bab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.1063] device (tapd2de6446-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.111 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[653cc620-282c-437e-b4d0-24e47d4248e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00639|binding|INFO|Setting lport d2de6446-cca8-4827-a039-647fe671bab4 ovn-installed in OVS
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00640|binding|INFO|Setting lport d2de6446-cca8-4827-a039-647fe671bab4 up in Southbound
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.1218] manager: (tapf66413c8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d33b79db-5d38-482d-8be6-d37be5d42098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.156 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb332ea-4c9d-4171-a95e-5d15548b5e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 systemd-machined[216343]: New machine qemu-81-instance-00000045.
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.159 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe4ae9e-2529-4ef0-b1ff-89d135ebb44d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 systemd[1]: Started Virtual Machine qemu-81-instance-00000045.
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.1883] device (tapf66413c8-50): carrier: link connected
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.194 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[446fbb57-fb5d-4d8e-9112-cbcdc82bd780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbb5b1c-40cd-4b6a-8f8e-a1a8cf54052a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327490, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.227 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[adb0e6c4-88da-4e12-81cd-df8c46b5ac6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538077, 'tstamp': 538077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327494, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.248 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[db10aea9-9b5f-43c2-9a6b-8e2972b894b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327497, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.278 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42fd6aab-9d67-4010-98f7-5f282e571e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.342 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c181a4-40e8-4141-ae80-c12e58e5e447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 kernel: tapf66413c8-50: entered promiscuous mode
Nov 25 11:42:03 np0005535469 NetworkManager[48891]: <info>  [1764088923.3468] manager: (tapf66413c8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.351 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.355 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:42:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:03Z|00641|binding|INFO|Releasing lport 347c541f-24a8-4230-9881-74343160f6c8 from this chassis (sb_readonly=0)
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45b80b8e-a9a0-475e-ae55-8bcfd2b14527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.357 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.357 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'env', 'PROCESS_TAG=haproxy-f66413c8-5cde-4f70-af70-6b7886c1219f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f66413c8-5cde-4f70-af70-6b7886c1219f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.782 254096 DEBUG nova.compute.manager [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.782 254096 DEBUG oslo_concurrency.lockutils [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.784 254096 DEBUG oslo_concurrency.lockutils [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.784 254096 DEBUG oslo_concurrency.lockutils [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.784 254096 DEBUG nova.compute.manager [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Processing event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:42:03 np0005535469 podman[327595]: 2025-11-25 16:42:03.78632142 +0000 UTC m=+0.076350192 container create df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.799 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.7978551, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.799 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Started (Lifecycle Event)#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.801 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.807 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.811 254096 INFO nova.virt.libvirt.driver [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance spawned successfully.#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.812 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.828 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:03 np0005535469 systemd[1]: Started libpod-conmon-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope.
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.839 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.839 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:03 np0005535469 podman[327595]: 2025-11-25 16:42:03.746426858 +0000 UTC m=+0.036455680 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.840 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.842 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.842 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.842 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.847 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211f7d06795e09a0a0b405763765b8431f236ef9df850c7ebdf97675234af87d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:03 np0005535469 podman[327595]: 2025-11-25 16:42:03.876790524 +0000 UTC m=+0.166819316 container init df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.881 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.882 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.7979243, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.882 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:42:03 np0005535469 podman[327595]: 2025-11-25 16:42:03.882830379 +0000 UTC m=+0.172859161 container start df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:42:03 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : New worker (327634) forked
Nov 25 11:42:03 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : Loading success.
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.915 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.919 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.803791, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.920 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.933 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 9.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.933 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.942 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d2de6446-cca8-4827-a039-647fe671bab4 in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.944 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.947 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.950 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e870018b-ff19-453a-902a-6b5577579ead]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.974 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.975 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.8594325, b5e2a584-5835-4c63-84de-6f0446220d35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:03 np0005535469 nova_compute[254092]: 2025-11-25 16:42:03.975 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Started (Lifecycle Event)#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.017 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.021 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1fcde9-7784-495c-9e50-8aec0b3a1641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.027 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 10.36 seconds to build instance.#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.028 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c78f82fe-a4c9-414b-8ebc-e9428b0366ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.030 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.8595078, b5e2a584-5835-4c63-84de-6f0446220d35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.030 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.050 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.051 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.054 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.055 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2207f9cd-151b-4388-9b7d-22c60b48371b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.071 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a189a1c-eedb-4e1a-b9c5-68c0d9ce46fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 180, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 180, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327648, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.077 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b85ca7-7535-4746-97de-79a62218b4d1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538090, 'tstamp': 538090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327649, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538092, 'tstamp': 538092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327649, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.092 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.518 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088909.5167675, a9e83b59-224b-49dd-83f7-d057737f5825 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.518 254096 INFO nova.compute.manager [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.535 254096 DEBUG nova.compute.manager [None req-e391b162-c827-4594-a752-7bfa7090f033 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.966 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.967 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.967 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.967 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 WARNING nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Processing event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] No waiting events found dispatching network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 WARNING nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received unexpected event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.971 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.974 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088924.9738872, b5e2a584-5835-4c63-84de-6f0446220d35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.975 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.976 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.984 254096 INFO nova.virt.libvirt.driver [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance spawned successfully.#033[00m
Nov 25 11:42:04 np0005535469 nova_compute[254092]: 2025-11-25 16:42:04.985 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.010 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.016 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.016 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.018 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.020 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.021 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.021 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.029 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.066 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.101 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 9.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.101 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.325 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 11.60 seconds to build instance.#033[00m
Nov 25 11:42:05 np0005535469 nova_compute[254092]: 2025-11-25 16:42:05.345 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Nov 25 11:42:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG nova.compute.manager [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG oslo_concurrency.lockutils [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG oslo_concurrency.lockutils [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG oslo_concurrency.lockutils [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.086 254096 DEBUG nova.compute.manager [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] No waiting events found dispatching network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.086 254096 WARNING nova.compute.manager [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received unexpected event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a for instance with vm_state active and task_state None.#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.475 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:06 np0005535469 nova_compute[254092]: 2025-11-25 16:42:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:42:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.779 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.781 254096 INFO nova.compute.manager [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Terminating instance#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.782 254096 DEBUG nova.compute.manager [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:42:07 np0005535469 kernel: tap9ef3b6ce-50 (unregistering): left promiscuous mode
Nov 25 11:42:07 np0005535469 NetworkManager[48891]: <info>  [1764088927.8271] device (tap9ef3b6ce-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:07Z|00642|binding|INFO|Releasing lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a from this chassis (sb_readonly=0)
Nov 25 11:42:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:07Z|00643|binding|INFO|Setting lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a down in Southbound
Nov 25 11:42:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:07Z|00644|binding|INFO|Removing iface tap9ef3b6ce-50 ovn-installed in OVS
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.842 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:ae:2e 10.100.0.5'], port_security=['fa:16:3e:b9:ae:2e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '796c46a8-971c-4b51-96c9-0e7c8682cfa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9ef3b6ce-50de-444f-b27f-b16a5b2b832a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.843 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.844 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f#033[00m
Nov 25 11:42:07 np0005535469 nova_compute[254092]: 2025-11-25 16:42:07.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.866 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc136dc9-c0d5-4604-aeb3-a2b630ddf7d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:07 np0005535469 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 25 11:42:07 np0005535469 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Consumed 4.634s CPU time.
Nov 25 11:42:07 np0005535469 systemd-machined[216343]: Machine qemu-80-instance-00000044 terminated.
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.909 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[161f4af2-38e9-4b6f-8ded-6f0080d5960c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3861c5-1b9b-4116-bceb-694c9f0331f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.961 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ecec605a-3d10-43b8-a770-6d1bb7683ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73623577-b892-418b-a0a3-50b7d6c156ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327662, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae405a79-fb00-406e-82f9-0de6fdf34e2c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538090, 'tstamp': 538090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327663, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538092, 'tstamp': 538092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327663, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.998 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.021 254096 INFO nova.virt.libvirt.driver [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance destroyed successfully.#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.023 254096 DEBUG nova.objects.instance [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid 796c46a8-971c-4b51-96c9-0e7c8682cfa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.036 254096 DEBUG nova.virt.libvirt.vif [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-1',id=68,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:03Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=796c46a8-971c-4b51-96c9-0e7c8682cfa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.036 254096 DEBUG nova.network.os_vif_util [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.037 254096 DEBUG nova.network.os_vif_util [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.038 254096 DEBUG os_vif [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.040 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ef3b6ce-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.050 254096 INFO os_vif [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50')#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.089 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.092 254096 INFO nova.compute.manager [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Terminating instance#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.093 254096 DEBUG nova.compute.manager [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:42:08 np0005535469 kernel: tapd2de6446-cc (unregistering): left promiscuous mode
Nov 25 11:42:08 np0005535469 NetworkManager[48891]: <info>  [1764088928.1371] device (tapd2de6446-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:08Z|00645|binding|INFO|Releasing lport d2de6446-cca8-4827-a039-647fe671bab4 from this chassis (sb_readonly=0)
Nov 25 11:42:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:08Z|00646|binding|INFO|Setting lport d2de6446-cca8-4827-a039-647fe671bab4 down in Southbound
Nov 25 11:42:08 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:08Z|00647|binding|INFO|Removing iface tapd2de6446-cc ovn-installed in OVS
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.150 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:2d:7c 10.100.0.4'], port_security=['fa:16:3e:a1:2d:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5e2a584-5835-4c63-84de-6f0446220d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d2de6446-cca8-4827-a039-647fe671bab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.153 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d2de6446-cca8-4827-a039-647fe671bab4 in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.157 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f66413c8-5cde-4f70-af70-6b7886c1219f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7123e92b-24e9-47bb-888c-f48d38e116f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.159 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace which is not needed anymore#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 25 11:42:08 np0005535469 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Consumed 3.832s CPU time.
Nov 25 11:42:08 np0005535469 systemd-machined[216343]: Machine qemu-81-instance-00000045 terminated.
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.333 254096 INFO nova.virt.libvirt.driver [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance destroyed successfully.#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.333 254096 DEBUG nova.objects.instance [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid b5e2a584-5835-4c63-84de-6f0446220d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.348 254096 DEBUG nova.virt.libvirt.vif [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-2',id=69,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T16:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:05Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=b5e2a584-5835-4c63-84de-6f0446220d35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.348 254096 DEBUG nova.network.os_vif_util [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.349 254096 DEBUG nova.network.os_vif_util [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.349 254096 DEBUG os_vif [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.351 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2de6446-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.355 254096 INFO os_vif [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc')#033[00m
Nov 25 11:42:08 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : haproxy version is 2.8.14-c23fe91
Nov 25 11:42:08 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : path to executable is /usr/sbin/haproxy
Nov 25 11:42:08 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [WARNING]  (327632) : Exiting Master process...
Nov 25 11:42:08 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [ALERT]    (327632) : Current worker (327634) exited with code 143 (Terminated)
Nov 25 11:42:08 np0005535469 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [WARNING]  (327632) : All workers exited. Exiting... (0)
Nov 25 11:42:08 np0005535469 systemd[1]: libpod-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope: Deactivated successfully.
Nov 25 11:42:08 np0005535469 conmon[327628]: conmon df9b579937c6ff8f7575 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope/container/memory.events
Nov 25 11:42:08 np0005535469 podman[327714]: 2025-11-25 16:42:08.395081214 +0000 UTC m=+0.123878212 container died df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.508 254096 DEBUG nova.compute.manager [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-unplugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.508 254096 DEBUG oslo_concurrency.lockutils [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG oslo_concurrency.lockutils [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG oslo_concurrency.lockutils [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG nova.compute.manager [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] No waiting events found dispatching network-vif-unplugged-d2de6446-cca8-4827-a039-647fe671bab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG nova.compute.manager [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-unplugged-d2de6446-cca8-4827-a039-647fe671bab4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb-userdata-shm.mount: Deactivated successfully.
Nov 25 11:42:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-211f7d06795e09a0a0b405763765b8431f236ef9df850c7ebdf97675234af87d-merged.mount: Deactivated successfully.
Nov 25 11:42:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538972660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:08 np0005535469 podman[327714]: 2025-11-25 16:42:08.972291413 +0000 UTC m=+0.701088391 container cleanup df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:42:08 np0005535469 systemd[1]: libpod-conmon-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope: Deactivated successfully.
Nov 25 11:42:08 np0005535469 nova_compute[254092]: 2025-11-25 16:42:08.987 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.089 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.089 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.092 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.093 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.096 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.097 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.101 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:09 np0005535469 podman[327792]: 2025-11-25 16:42:09.200715401 +0000 UTC m=+0.204255233 container remove df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efeeee9d-6351-4779-8896-0a3d9cf6b50f]: (4, ('Tue Nov 25 04:42:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb)\ndf9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb\nTue Nov 25 04:42:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb)\ndf9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.214 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a37cbff5-0b15-4836-9fff-92f3734a8393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.217 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:09 np0005535469 kernel: tapf66413c8-50: left promiscuous mode
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.251 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[435d18c1-f89e-4cf4-a2ec-f895c5785aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 podman[327793]: 2025-11-25 16:42:09.264883941 +0000 UTC m=+0.251890954 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Nov 25 11:42:09 np0005535469 podman[327800]: 2025-11-25 16:42:09.268021056 +0000 UTC m=+0.247777503 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.267 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7ac778-b387-4eb3-9d78-522dcf40f26f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.271 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e205491-4680-4d03-93f3-e0aee498c16f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.288 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5075e03e-bf11-4e2e-8e0e-0be300969d3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538069, 'reachable_time': 22413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327866, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 systemd[1]: run-netns-ovnmeta\x2df66413c8\x2d5cde\x2d4f70\x2daf70\x2d6b7886c1219f.mount: Deactivated successfully.
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.295 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:42:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.295 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a3450458-74ee-427c-a613-3cd82a261b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:09 np0005535469 podman[327801]: 2025-11-25 16:42:09.298765721 +0000 UTC m=+0.268086725 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.411 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.413 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3441MB free_disk=59.80998611450195GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.414 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 013dc18e-57cd-4733-8e98-7d20e3b5c4db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance dce3a591-9fb6-4495-a7fb-867af2de384f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b5c5a442-8e8e-40c5-9634-e36c49e6e41b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 796c46a8-971c-4b51-96c9-0e7c8682cfa8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b5e2a584-5835-4c63-84de-6f0446220d35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.503 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1216MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:42:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.624 254096 INFO nova.virt.libvirt.driver [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deleting instance files /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8_del#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.625 254096 INFO nova.virt.libvirt.driver [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deletion of /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8_del complete#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.693 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.735 254096 INFO nova.virt.libvirt.driver [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deleting instance files /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35_del#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.736 254096 INFO nova.virt.libvirt.driver [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deletion of /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35_del complete#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.742 254096 INFO nova.compute.manager [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 1.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.742 254096 DEBUG oslo.service.loopingcall [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.742 254096 DEBUG nova.compute.manager [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.743 254096 DEBUG nova.network.neutron [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.826 254096 INFO nova.compute.manager [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 1.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.826 254096 DEBUG oslo.service.loopingcall [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.827 254096 DEBUG nova.compute.manager [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:42:09 np0005535469 nova_compute[254092]: 2025-11-25 16:42:09.827 254096 DEBUG nova.network.neutron [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.067550) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930067598, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1516, "num_deletes": 254, "total_data_size": 2123862, "memory_usage": 2159280, "flush_reason": "Manual Compaction"}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930088250, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2089018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33889, "largest_seqno": 35404, "table_properties": {"data_size": 2082114, "index_size": 3915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15692, "raw_average_key_size": 20, "raw_value_size": 2067728, "raw_average_value_size": 2702, "num_data_blocks": 174, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088800, "oldest_key_time": 1764088800, "file_creation_time": 1764088930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 21165 microseconds, and 5728 cpu microseconds.
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.088719) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2089018 bytes OK
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.088836) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.090541) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.090561) EVENT_LOG_v1 {"time_micros": 1764088930090554, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.090580) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2117088, prev total WAL file size 2117088, number of live WAL files 2.
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.092015) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2040KB)], [74(9439KB)]
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930092889, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11755346, "oldest_snapshot_seqno": -1}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6054 keys, 10095973 bytes, temperature: kUnknown
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930160393, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10095973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10052813, "index_size": 26922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 153357, "raw_average_key_size": 25, "raw_value_size": 9941366, "raw_average_value_size": 1642, "num_data_blocks": 1093, "num_entries": 6054, "num_filter_entries": 6054, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.160677) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10095973 bytes
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.163051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.1 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 6577, records dropped: 523 output_compression: NoCompression
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.163076) EVENT_LOG_v1 {"time_micros": 1764088930163066, "job": 42, "event": "compaction_finished", "compaction_time_micros": 67531, "compaction_time_cpu_micros": 24678, "output_level": 6, "num_output_files": 1, "total_output_size": 10095973, "num_input_records": 6577, "num_output_records": 6054, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930163475, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930165044, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.091920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2710764261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.188 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.193 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.206 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.230 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.230 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.950 254096 DEBUG nova.compute.manager [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG oslo_concurrency.lockutils [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG oslo_concurrency.lockutils [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG oslo_concurrency.lockutils [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG nova.compute.manager [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] No waiting events found dispatching network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:10 np0005535469 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 WARNING nova.compute.manager [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received unexpected event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:42:11 np0005535469 nova_compute[254092]: 2025-11-25 16:42:11.077 254096 DEBUG nova.compute.manager [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-unplugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:11 np0005535469 nova_compute[254092]: 2025-11-25 16:42:11.077 254096 DEBUG oslo_concurrency.lockutils [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:11 np0005535469 nova_compute[254092]: 2025-11-25 16:42:11.078 254096 DEBUG oslo_concurrency.lockutils [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:11 np0005535469 nova_compute[254092]: 2025-11-25 16:42:11.078 254096 DEBUG oslo_concurrency.lockutils [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:11 np0005535469 nova_compute[254092]: 2025-11-25 16:42:11.079 254096 DEBUG nova.compute.manager [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] No waiting events found dispatching network-vif-unplugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:11 np0005535469 nova_compute[254092]: 2025-11-25 16:42:11.079 254096 DEBUG nova.compute.manager [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-unplugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:42:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 279 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 323 op/s
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.149 254096 DEBUG nova.network.neutron [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.150 254096 DEBUG nova.network.neutron [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.167 254096 INFO nova.compute.manager [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 2.34 seconds to deallocate network for instance.#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.181 254096 INFO nova.compute.manager [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 2.44 seconds to deallocate network for instance.#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.253 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.253 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.265 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.368 254096 DEBUG oslo_concurrency.processutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.574 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.575 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.575 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.576 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.576 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.577 254096 INFO nova.compute.manager [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Terminating instance#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.578 254096 DEBUG nova.compute.manager [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:42:12 np0005535469 kernel: tap1404e99c-a3 (unregistering): left promiscuous mode
Nov 25 11:42:12 np0005535469 NetworkManager[48891]: <info>  [1764088932.6514] device (tap1404e99c-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:12Z|00648|binding|INFO|Releasing lport 1404e99c-a32c-404a-a7d6-3daccc67c48b from this chassis (sb_readonly=0)
Nov 25 11:42:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:12Z|00649|binding|INFO|Setting lport 1404e99c-a32c-404a-a7d6-3daccc67c48b down in Southbound
Nov 25 11:42:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:12Z|00650|binding|INFO|Removing iface tap1404e99c-a3 ovn-installed in OVS
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.670 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.671 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.672 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.694 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2de614-672d-4bb5-8098-a07e263c3afe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 25 11:42:12 np0005535469 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000040.scope: Consumed 16.952s CPU time.
Nov 25 11:42:12 np0005535469 systemd-machined[216343]: Machine qemu-75-instance-00000040 terminated.
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.730 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[73434ee3-394d-4fbf-9e2b-d0f4c8aeceee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.735 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f58adba7-4104-4bd9-a5d4-89a110248fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.760 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30a2ec2c-6986-4c58-92be-85381a5eaff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e272aef-1695-48ee-b6ca-71bec07e235f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327924, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 kernel: tap1404e99c-a3: entered promiscuous mode
Nov 25 11:42:12 np0005535469 NetworkManager[48891]: <info>  [1764088932.8048] manager: (tap1404e99c-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 25 11:42:12 np0005535469 systemd-udevd[327917]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:12 np0005535469 kernel: tap1404e99c-a3 (unregistering): left promiscuous mode
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.808 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:12Z|00651|binding|INFO|Claiming lport 1404e99c-a32c-404a-a7d6-3daccc67c48b for this chassis.
Nov 25 11:42:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:12Z|00652|binding|INFO|1404e99c-a32c-404a-a7d6-3daccc67c48b: Claiming fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a36a399-0382-4fd4-bd90-4543ddb85504]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327925, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327925, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.815 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.819 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605014799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:12Z|00653|binding|INFO|Releasing lport 1404e99c-a32c-404a-a7d6-3daccc67c48b from this chassis (sb_readonly=0)
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.840 254096 INFO nova.virt.libvirt.driver [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance destroyed successfully.#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.840 254096 DEBUG nova.objects.instance [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid b5c5a442-8e8e-40c5-9634-e36c49e6e41b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.846 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.854 254096 DEBUG nova.virt.libvirt.vif [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1125840801',display_name='tempest-ListServerFiltersTestJSON-instance-1125840801',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1125840801',id=64,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-e1rd6q3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:32Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=b5c5a442-8e8e-40c5-9634-e36c49e6e41b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.855 254096 DEBUG nova.network.os_vif_util [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.856 254096 DEBUG nova.network.os_vif_util [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.856 254096 DEBUG os_vif [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.858 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1404e99c-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.860 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.860 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.860 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.861 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.861 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.862 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.864 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.866 254096 INFO os_vif [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3')#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.879 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[68c1321b-e474-4110-8a57-c76475333409]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.901 254096 DEBUG oslo_concurrency.processutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.909 254096 DEBUG nova.compute.provider_tree [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[76588322-e9d2-4272-9a68-6462a3537e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13917fa3-910c-4e99-867e-58eed2428cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.924 254096 DEBUG nova.scheduler.client.report [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.949 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fbedb1-1004-4ba2-bbdb-2d10044e1d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.953 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.956 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed50f186-144d-4bf0-8335-4dc216fd6c13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327956, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.977 254096 INFO nova.scheduler.client.report [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance 796c46a8-971c-4b51-96c9-0e7c8682cfa8#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a66976e-5919-4620-849a-95d8d795dfd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327957, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327957, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 nova_compute[254092]: 2025-11-25 16:42:12.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.988 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.988 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.989 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.989 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.990 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis#033[00m
Nov 25 11:42:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.991 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.008 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb66a936-f215-4cb5-ae5b-4deffbd7bb30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.037 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[673acd5c-90be-4e39-a170-cf29cf46b8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.042 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d408c69f-d998-4e55-aadc-7eeaed8c46c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.044 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bf496676-88ef-4364-bba8-c8f6e0f809bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.083 254096 DEBUG oslo_concurrency.processutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.089 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[991eacea-c11f-42c9-9373-399295fd85b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327963, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.113 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc99efd-61fd-4c3b-856f-479f8dd7ae55]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327966, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327966, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.116 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.120 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.215 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.216 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.216 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.216 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.217 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] No waiting events found dispatching network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.218 254096 WARNING nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received unexpected event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.219 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-deleted-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.220 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-deleted-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.220 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-unplugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] No waiting events found dispatching network-vif-unplugged-1404e99c-a32c-404a-a7d6-3daccc67c48b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.222 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-unplugged-1404e99c-a32c-404a-a7d6-3daccc67c48b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.309 254096 INFO nova.virt.libvirt.driver [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deleting instance files /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_del#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.311 254096 INFO nova.virt.libvirt.driver [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deletion of /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_del complete#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.366 254096 INFO nova.compute.manager [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.368 254096 DEBUG oslo.service.loopingcall [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.369 254096 DEBUG nova.compute.manager [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.369 254096 DEBUG nova.network.neutron [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:42:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 279 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 29 KiB/s wr, 271 op/s
Nov 25 11:42:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174271123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.603 254096 DEBUG oslo_concurrency.processutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.609 254096 DEBUG nova.compute.provider_tree [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.619 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.620 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.621 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.624 254096 DEBUG nova.scheduler.client.report [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.646 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.672 254096 INFO nova.scheduler.client.report [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance b5e2a584-5835-4c63-84de-6f0446220d35#033[00m
Nov 25 11:42:13 np0005535469 nova_compute[254092]: 2025-11-25 16:42:13.757 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.226 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.226 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.227 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.227 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.263 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.267 254096 DEBUG nova.network.neutron [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.296 254096 INFO nova.compute.manager [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 1.93 seconds to deallocate network for instance.#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.333 254096 DEBUG nova.compute.manager [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.333 254096 DEBUG oslo_concurrency.lockutils [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.333 254096 DEBUG oslo_concurrency.lockutils [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.334 254096 DEBUG oslo_concurrency.lockutils [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.334 254096 DEBUG nova.compute.manager [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] No waiting events found dispatching network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.334 254096 WARNING nova.compute.manager [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received unexpected event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b for instance with vm_state active and task_state deleting.
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.354 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.354 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.446 254096 DEBUG oslo_concurrency.processutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.547 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.548 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.549 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:42:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 225 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 306 op/s
Nov 25 11:42:15 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:15Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:61:e3 10.100.0.13
Nov 25 11:42:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938954176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.936 254096 DEBUG oslo_concurrency.processutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.942 254096 DEBUG nova.compute.provider_tree [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.958 254096 DEBUG nova.scheduler.client.report [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:42:15 np0005535469 nova_compute[254092]: 2025-11-25 16:42:15.979 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.001 254096 INFO nova.scheduler.client.report [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Deleted allocations for instance b5c5a442-8e8e-40c5-9634-e36c49e6e41b
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.118 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.379 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.379 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.380 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.380 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.380 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.381 254096 INFO nova.compute.manager [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Terminating instance
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.382 254096 DEBUG nova.compute.manager [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:42:16 np0005535469 kernel: tap9e60e140-ca (unregistering): left promiscuous mode
Nov 25 11:42:16 np0005535469 NetworkManager[48891]: <info>  [1764088936.5008] device (tap9e60e140-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:16Z|00654|binding|INFO|Releasing lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 from this chassis (sb_readonly=0)
Nov 25 11:42:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:16Z|00655|binding|INFO|Setting lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 down in Southbound
Nov 25 11:42:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:16Z|00656|binding|INFO|Removing iface tap9e60e140-ca ovn-installed in OVS
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.569 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:c0:a9 10.100.0.8'], port_security=['fa:16:3e:67:c0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dce3a591-9fb6-4495-a7fb-867af2de384f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9e60e140-ca34-40f4-b867-d7c53f05bca4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.570 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9e60e140-ca34-40f4-b867-d7c53f05bca4 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.571 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 11:42:16 np0005535469 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 25 11:42:16 np0005535469 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003f.scope: Consumed 15.427s CPU time.
Nov 25 11:42:16 np0005535469 systemd-machined[216343]: Machine qemu-74-instance-0000003f terminated.
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e356c8dd-8d4d-4def-a47d-c7e7765e8288]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.619 254096 INFO nova.virt.libvirt.driver [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance destroyed successfully.
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.620 254096 DEBUG nova.objects.instance [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid dce3a591-9fb6-4495-a7fb-867af2de384f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.632 254096 DEBUG nova.virt.libvirt.vif [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-596690940',display_name='tempest-ListServerFiltersTestJSON-instance-596690940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-596690940',id=63,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-pzh0k6o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:30Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=dce3a591-9fb6-4495-a7fb-867af2de384f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.633 254096 DEBUG nova.network.os_vif_util [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.634 254096 DEBUG nova.network.os_vif_util [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.635 254096 DEBUG os_vif [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.638 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e60e140-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.640 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.640 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[14cdad82-8b24-44a3-9b5c-a28ba432c7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.645 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1ecd2c-aa15-4857-b68a-8e560bbab56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.646 254096 INFO os_vif [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca')
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.675 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.675 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.683 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c33b068-ed97-4c0d-8d67-60f1fd8cd1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.694 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c83dead-b4c5-44cc-9535-16dbbbab0deb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328050, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.723 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a96966a-ec0a-4c9d-8858-d70cb376292d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328051, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328051, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.725 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.728 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.728 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:42:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.876 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.876 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.884 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:42:16 np0005535469 nova_compute[254092]: 2025-11-25 16:42:16.884 254096 INFO nova.compute.claims [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.071 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183870431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.535 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.541 254096 DEBUG nova.compute.provider_tree [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.558 254096 DEBUG nova.scheduler.client.report [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:42:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 200 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 19 KiB/s wr, 242 op/s
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.593 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-deleted-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-unplugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG oslo_concurrency.lockutils [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG oslo_concurrency.lockutils [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG oslo_concurrency.lockutils [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.595 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] No waiting events found dispatching network-vif-unplugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.596 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-unplugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.600 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.600 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.680 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.680 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.702 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.726 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.844 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.845 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.845 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Creating image(s)
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.866 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.892 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.912 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.916 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.986 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.987 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.987 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:17 np0005535469 nova_compute[254092]: 2025-11-25 16:42:17.988 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.012 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.016 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e887a377-e792-462d-8bcd-002a93dac12d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.046 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.084 254096 DEBUG nova.policy [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e76c1b261c0442caa52f39297ccf296d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea024a03380a4251a920e126716935de', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.112 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.112 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.683 254096 INFO nova.virt.libvirt.driver [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deleting instance files /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f_del
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.684 254096 INFO nova.virt.libvirt.driver [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deletion of /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f_del complete
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.688 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e887a377-e792-462d-8bcd-002a93dac12d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.766 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] resizing rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.801 254096 INFO nova.compute.manager [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 2.42 seconds to destroy the instance on the hypervisor.
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.802 254096 DEBUG oslo.service.loopingcall [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.803 254096 DEBUG nova.compute.manager [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.803 254096 DEBUG nova.network.neutron [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.868 254096 DEBUG nova.objects.instance [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'migration_context' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.883 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.883 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Ensure instance console log exists: /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.884 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.884 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:18 np0005535469 nova_compute[254092]: 2025-11-25 16:42:18.884 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 200 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 176 op/s
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.697 254096 DEBUG nova.compute.manager [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.698 254096 DEBUG oslo_concurrency.lockutils [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.698 254096 DEBUG oslo_concurrency.lockutils [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.698 254096 DEBUG oslo_concurrency.lockutils [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.699 254096 DEBUG nova.compute.manager [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] No waiting events found dispatching network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.699 254096 WARNING nova.compute.manager [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received unexpected event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 for instance with vm_state active and task_state deleting.
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.700 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Successfully created port: ce73fc27-d707-4321-8e2e-f77bd4b984ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.753 254096 DEBUG nova.network.neutron [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.768 254096 INFO nova.compute.manager [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 0.96 seconds to deallocate network for instance.
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.823 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.824 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:19.863 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:42:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:19.864 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:19 np0005535469 nova_compute[254092]: 2025-11-25 16:42:19.970 254096 DEBUG oslo_concurrency.processutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860457252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:20Z|00657|binding|INFO|Releasing lport 57c889f7-e44b-4f52-8e8a-db17b4e1f3b8 from this chassis (sb_readonly=0)
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.430 254096 DEBUG oslo_concurrency.processutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.437 254096 DEBUG nova.compute.provider_tree [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.462 254096 DEBUG nova.scheduler.client.report [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.496 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.522 254096 INFO nova.scheduler.client.report [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Deleted allocations for instance dce3a591-9fb6-4495-a7fb-867af2de384f
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.607 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.838 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Successfully updated port: ce73fc27-d707-4321-8e2e-f77bd4b984ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquired lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:42:20 np0005535469 nova_compute[254092]: 2025-11-25 16:42:20.853 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:42:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:21 np0005535469 nova_compute[254092]: 2025-11-25 16:42:21.087 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:42:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 169 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 25 11:42:21 np0005535469 nova_compute[254092]: 2025-11-25 16:42:21.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:21 np0005535469 nova_compute[254092]: 2025-11-25 16:42:21.910 254096 DEBUG nova.compute.manager [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-deleted-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:21 np0005535469 nova_compute[254092]: 2025-11-25 16:42:21.911 254096 DEBUG nova.compute.manager [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-changed-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:21 np0005535469 nova_compute[254092]: 2025-11-25 16:42:21.911 254096 DEBUG nova.compute.manager [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Refreshing instance network info cache due to event network-changed-ce73fc27-d707-4321-8e2e-f77bd4b984ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:42:21 np0005535469 nova_compute[254092]: 2025-11-25 16:42:21.911 254096 DEBUG oslo_concurrency.lockutils [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.120 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.121 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.121 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.122 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.122 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.124 254096 INFO nova.compute.manager [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Terminating instance#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.125 254096 DEBUG nova.compute.manager [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:42:22 np0005535469 kernel: tap5136afea-10 (unregistering): left promiscuous mode
Nov 25 11:42:22 np0005535469 NetworkManager[48891]: <info>  [1764088942.1670] device (tap5136afea-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:22Z|00658|binding|INFO|Releasing lport 5136afea-102e-46a1-8fdb-0af970c5af04 from this chassis (sb_readonly=0)
Nov 25 11:42:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:22Z|00659|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 down in Southbound
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:22Z|00660|binding|INFO|Removing iface tap5136afea-10 ovn-installed in OVS
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.217 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.218 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.219 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f00f265b-63fa-48fb-9383-38ff6abf51c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.220 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[332ad6f0-c632-4dda-9de8-691670e3e127]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.221 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 namespace which is not needed anymore#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 11:42:22 np0005535469 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003e.scope: Consumed 14.333s CPU time.
Nov 25 11:42:22 np0005535469 systemd-machined[216343]: Machine qemu-79-instance-0000003e terminated.
Nov 25 11:42:22 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : haproxy version is 2.8.14-c23fe91
Nov 25 11:42:22 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : path to executable is /usr/sbin/haproxy
Nov 25 11:42:22 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [WARNING]  (323779) : Exiting Master process...
Nov 25 11:42:22 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [ALERT]    (323779) : Current worker (323781) exited with code 143 (Terminated)
Nov 25 11:42:22 np0005535469 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [WARNING]  (323779) : All workers exited. Exiting... (0)
Nov 25 11:42:22 np0005535469 systemd[1]: libpod-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192.scope: Deactivated successfully.
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 podman[328287]: 2025-11-25 16:42:22.34725567 +0000 UTC m=+0.044092138 container died 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.358 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance destroyed successfully.#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.359 254096 DEBUG nova.objects.instance [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.374 254096 DEBUG nova.virt.libvirt.vif [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:02Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.375 254096 DEBUG nova.network.os_vif_util [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.375 254096 DEBUG nova.network.os_vif_util [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.376 254096 DEBUG os_vif [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.378 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5136afea-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192-userdata-shm.mount: Deactivated successfully.
Nov 25 11:42:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b958805e8f49ae0dd91301b40e75db313a46de7a441e4d3c3ccb3b8eb4101d4c-merged.mount: Deactivated successfully.
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.386 254096 INFO os_vif [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')#033[00m
Nov 25 11:42:22 np0005535469 podman[328287]: 2025-11-25 16:42:22.395175129 +0000 UTC m=+0.092011617 container cleanup 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:42:22 np0005535469 systemd[1]: libpod-conmon-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192.scope: Deactivated successfully.
Nov 25 11:42:22 np0005535469 podman[328341]: 2025-11-25 16:42:22.473364751 +0000 UTC m=+0.054908531 container remove 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4adae5f9-a8f2-46e4-b923-27268ba7dba3]: (4, ('Tue Nov 25 04:42:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 (3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192)\n3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192\nTue Nov 25 04:42:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 (3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192)\n3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.484 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3613d2b1-2b55-4514-95fd-ee94012db26c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.485 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:22 np0005535469 kernel: tapf00f265b-60: left promiscuous mode
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15d571eb-c339-42fb-b1a2-ca4ac3d0eb0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4255a10f-c05e-4b6a-85f3-7df4bbbd27f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.531 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5150948-1842-4408-80bd-6a911cd45992]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.550 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[628d0486-3713-4d87-8849-3f8d8fa38fbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534200, 'reachable_time': 24562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328359, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 systemd[1]: run-netns-ovnmeta\x2df00f265b\x2d63fa\x2d48fb\x2d9383\x2d38ff6abf51c1.mount: Deactivated successfully.
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.554 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:42:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.554 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[706ed680-578d-4571-b35f-c00091baa083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.764 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.782 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Releasing lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.782 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance network_info: |[{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.783 254096 DEBUG oslo_concurrency.lockutils [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.783 254096 DEBUG nova.network.neutron [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Refreshing network info cache for port ce73fc27-d707-4321-8e2e-f77bd4b984ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.785 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start _get_guest_xml network_info=[{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.789 254096 WARNING nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.793 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.794 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.802 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.802 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.803 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.803 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.803 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.808 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.856 254096 INFO nova.virt.libvirt.driver [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deleting instance files /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db_del#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.858 254096 INFO nova.virt.libvirt.driver [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deletion of /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db_del complete#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.911 254096 INFO nova.compute.manager [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.912 254096 DEBUG oslo.service.loopingcall [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.912 254096 DEBUG nova.compute.manager [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:42:22 np0005535469 nova_compute[254092]: 2025-11-25 16:42:22.912 254096 DEBUG nova.network.neutron [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.015 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088928.0119298, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.015 254096 INFO nova.compute.manager [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.039 254096 DEBUG nova.compute.manager [None req-5e9c52a4-0d49-4095-a995-f30a67046872 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323649743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.288 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.314 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.318 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.348 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088928.329823, b5e2a584-5835-4c63-84de-6f0446220d35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.349 254096 INFO nova.compute.manager [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.371 254096 DEBUG nova.compute.manager [None req-518c6854-485a-4063-b198-59d9916613fc - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 169 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 25 11:42:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/929171924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.748 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.749 254096 DEBUG nova.virt.libvirt.vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActio
nsTestJSON-16048106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:17Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.750 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.750 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.751 254096 DEBUG nova.objects.instance [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'pci_devices' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.836 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <uuid>e887a377-e792-462d-8bcd-002a93dac12d</uuid>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <name>instance-00000046</name>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:name>tempest-InstanceActionsTestJSON-server-1282971748</nova:name>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:42:22</nova:creationTime>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:user uuid="e76c1b261c0442caa52f39297ccf296d">tempest-InstanceActionsTestJSON-16048106-project-member</nova:user>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:project uuid="ea024a03380a4251a920e126716935de">tempest-InstanceActionsTestJSON-16048106</nova:project>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <nova:port uuid="ce73fc27-d707-4321-8e2e-f77bd4b984ad">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <entry name="serial">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <entry name="uuid">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk.config">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:5e:35:7f"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <target dev="tapce73fc27-d7"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/console.log" append="off"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:42:23 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:42:23 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:42:23 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:42:23 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.837 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Preparing to wait for external event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.837 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.838 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.838 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.839 254096 DEBUG nova.virt.libvirt.vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:17Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.839 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.840 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.840 254096 DEBUG os_vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.841 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.841 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.846 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce73fc27-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.846 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce73fc27-d7, col_values=(('external_ids', {'iface-id': 'ce73fc27-d707-4321-8e2e-f77bd4b984ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:35:7f', 'vm-uuid': 'e887a377-e792-462d-8bcd-002a93dac12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:23 np0005535469 NetworkManager[48891]: <info>  [1764088943.8489] manager: (tapce73fc27-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.854 254096 INFO os_vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.914 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.915 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.915 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] No VIF found with MAC fa:16:3e:5e:35:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.915 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Using config drive#033[00m
Nov 25 11:42:23 np0005535469 nova_compute[254092]: 2025-11-25 16:42:23.933 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.232 254096 DEBUG nova.network.neutron [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.250 254096 INFO nova.compute.manager [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 1.34 seconds to deallocate network for instance.#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.312 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.313 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.365 254096 DEBUG nova.compute.manager [req-dad1052f-a80c-478f-b3d6-a8bfcd90a4d7 req-4a0bd464-e551-4273-a8aa-5cff078df4e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-deleted-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.391 254096 DEBUG oslo_concurrency.processutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.479 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Creating config drive at /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.484 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsjmcps10 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.623 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsjmcps10" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.648 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.651 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config e887a377-e792-462d-8bcd-002a93dac12d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309791242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.851 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config e887a377-e792-462d-8bcd-002a93dac12d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.852 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deleting local config drive /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config because it was imported into RBD.#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.869 254096 DEBUG oslo_concurrency.processutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.879 254096 DEBUG nova.compute.provider_tree [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.893 254096 DEBUG nova.scheduler.client.report [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:24 np0005535469 kernel: tapce73fc27-d7: entered promiscuous mode
Nov 25 11:42:24 np0005535469 systemd-udevd[328266]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:24 np0005535469 NetworkManager[48891]: <info>  [1764088944.9056] manager: (tapce73fc27-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Nov 25 11:42:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:24Z|00661|binding|INFO|Claiming lport ce73fc27-d707-4321-8e2e-f77bd4b984ad for this chassis.
Nov 25 11:42:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:24Z|00662|binding|INFO|ce73fc27-d707-4321-8e2e-f77bd4b984ad: Claiming fa:16:3e:5e:35:7f 10.100.0.6
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:24 np0005535469 NetworkManager[48891]: <info>  [1764088944.9180] device (tapce73fc27-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.916 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:24 np0005535469 NetworkManager[48891]: <info>  [1764088944.9192] device (tapce73fc27-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.924 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.926 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a bound to our chassis#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.927 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 203581b6-f356-4499-9dc8-abafe93b350a#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.940 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8b79aa-b2ba-4bc2-8c52-477476791db8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.942 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap203581b6-f1 in ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.944 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap203581b6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4c1ca5-f169-428f-bf9b-69cf86ec39c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[219e04e3-a456-44d1-8c03-9095f4badc61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:24 np0005535469 systemd-machined[216343]: New machine qemu-82-instance-00000046.
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.951 254096 INFO nova.scheduler.client.report [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Deleted allocations for instance 013dc18e-57cd-4733-8e98-7d20e3b5c4db#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.957 254096 DEBUG nova.network.neutron [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updated VIF entry in instance network info cache for port ce73fc27-d707-4321-8e2e-f77bd4b984ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.958 254096 DEBUG nova.network.neutron [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.962 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[280911db-9de5-4304-aefd-e10120663705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:24 np0005535469 systemd[1]: Started Virtual Machine qemu-82-instance-00000046.
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.975 254096 DEBUG oslo_concurrency.lockutils [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:24 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.994 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5cce7dc7-e301-4a81-9148-dc041aa6edf6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:24.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.024 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6b55cb-a1bc-435a-a524-4df160a9bf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29473529-13e1-4177-817e-3a99e4179120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 NetworkManager[48891]: <info>  [1764088945.0318] manager: (tap203581b6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/284)
Nov 25 11:42:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:25Z|00663|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad ovn-installed in OVS
Nov 25 11:42:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:25Z|00664|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad up in Southbound
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.036 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.066 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[72664c52-faa4-4361-8101-e776296087a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.069 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3d4c34-b34b-4ab7-89e9-9840669bce8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 NetworkManager[48891]: <info>  [1764088945.0924] device (tap203581b6-f0): carrier: link connected
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.099 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ee52ad-ca4b-4794-b4c7-69098239f78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5594fbf-7b6b-4d90-9ccf-81909d54524d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540268, 'reachable_time': 21983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328550, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8179abe-00cc-457f-a470-b557caa8146a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:fc5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540268, 'tstamp': 540268}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328551, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7885b97-001d-46f5-8c0a-f837d2aa1bb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540268, 'reachable_time': 21983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328552, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.185 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f5724c-033c-4259-ad1d-de1cb17d16e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.252 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f93e4a56-043b-4246-afd6-a46b148e3f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.253 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.253 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.254 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap203581b6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:25 np0005535469 kernel: tap203581b6-f0: entered promiscuous mode
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 NetworkManager[48891]: <info>  [1764088945.2580] manager: (tap203581b6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.261 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap203581b6-f0, col_values=(('external_ids', {'iface-id': '43fd9010-c369-4c3b-8331-da7c798cb131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:25Z|00665|binding|INFO|Releasing lport 43fd9010-c369-4c3b-8331-da7c798cb131 from this chassis (sb_readonly=0)
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.265 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.266 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a307bfdd-2d7f-42aa-9174-0f80de575f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.267 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:42:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.267 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'env', 'PROCESS_TAG=haproxy-203581b6-f356-4499-9dc8-abafe93b350a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/203581b6-f356-4499-9dc8-abafe93b350a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.329 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088945.328489, e887a377-e792-462d-8bcd-002a93dac12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.330 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Started (Lifecycle Event)#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.353 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.358 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088945.32885, e887a377-e792-462d-8bcd-002a93dac12d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.377 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:25 np0005535469 nova_compute[254092]: 2025-11-25 16:42:25.404 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 120 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 25 11:42:25 np0005535469 podman[328627]: 2025-11-25 16:42:25.64108867 +0000 UTC m=+0.049365631 container create d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:42:25 np0005535469 systemd[1]: Started libpod-conmon-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7.scope.
Nov 25 11:42:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:25 np0005535469 podman[328627]: 2025-11-25 16:42:25.614542079 +0000 UTC m=+0.022819060 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:42:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5789ea30890dfa3f5bff74e20beffed1576d518a78c46e7d983852961c8fdd8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:25 np0005535469 podman[328627]: 2025-11-25 16:42:25.737884235 +0000 UTC m=+0.146161246 container init d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:42:25 np0005535469 podman[328627]: 2025-11-25 16:42:25.743393085 +0000 UTC m=+0.151670046 container start d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 11:42:25 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : New worker (328648) forked
Nov 25 11:42:25 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : Loading success.
Nov 25 11:42:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 11:42:27 np0005535469 nova_compute[254092]: 2025-11-25 16:42:27.831 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088932.8272161, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:27 np0005535469 nova_compute[254092]: 2025-11-25 16:42:27.832 254096 INFO nova.compute.manager [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:27 np0005535469 nova_compute[254092]: 2025-11-25 16:42:27.848 254096 DEBUG nova.compute.manager [None req-346bd3e4-1623-4fdb-b819-57a19e383d25 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:27.866 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:28 np0005535469 nova_compute[254092]: 2025-11-25 16:42:28.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:28 np0005535469 nova_compute[254092]: 2025-11-25 16:42:28.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 11:42:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 11:42:31 np0005535469 nova_compute[254092]: 2025-11-25 16:42:31.615 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088936.614849, dce3a591-9fb6-4495-a7fb-867af2de384f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:31 np0005535469 nova_compute[254092]: 2025-11-25 16:42:31.616 254096 INFO nova.compute.manager [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:31 np0005535469 nova_compute[254092]: 2025-11-25 16:42:31.634 254096 DEBUG nova.compute.manager [None req-786bd18d-c146-497f-91fc-2f74b43b54da - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.151 254096 DEBUG nova.compute.manager [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.151 254096 DEBUG oslo_concurrency.lockutils [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.151 254096 DEBUG oslo_concurrency.lockutils [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.152 254096 DEBUG oslo_concurrency.lockutils [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.152 254096 DEBUG nova.compute.manager [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Processing event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.153 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.157 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088952.1572037, e887a377-e792-462d-8bcd-002a93dac12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.157 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.159 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.162 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance spawned successfully.#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.163 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.262 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.272 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.279 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.280 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.280 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.281 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.281 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.282 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.351 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.876 254096 INFO nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 15.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:42:32 np0005535469 nova_compute[254092]: 2025-11-25 16:42:32.876 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:33 np0005535469 nova_compute[254092]: 2025-11-25 16:42:33.033 254096 INFO nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 16.18 seconds to build instance.#033[00m
Nov 25 11:42:33 np0005535469 nova_compute[254092]: 2025-11-25 16:42:33.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:33 np0005535469 nova_compute[254092]: 2025-11-25 16:42:33.230 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 22 KiB/s wr, 37 op/s
Nov 25 11:42:33 np0005535469 nova_compute[254092]: 2025-11-25 16:42:33.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:34 np0005535469 nova_compute[254092]: 2025-11-25 16:42:34.272 254096 DEBUG nova.compute.manager [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:34 np0005535469 nova_compute[254092]: 2025-11-25 16:42:34.273 254096 DEBUG oslo_concurrency.lockutils [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:34 np0005535469 nova_compute[254092]: 2025-11-25 16:42:34.273 254096 DEBUG oslo_concurrency.lockutils [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:34 np0005535469 nova_compute[254092]: 2025-11-25 16:42:34.274 254096 DEBUG oslo_concurrency.lockutils [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:34 np0005535469 nova_compute[254092]: 2025-11-25 16:42:34.274 254096 DEBUG nova.compute.manager [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] No waiting events found dispatching network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:42:34 np0005535469 nova_compute[254092]: 2025-11-25 16:42:34.274 254096 WARNING nova.compute.manager [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received unexpected event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad for instance with vm_state active and task_state None.#033[00m
Nov 25 11:42:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 89 op/s
Nov 25 11:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.357 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088942.3568356, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.357 254096 INFO nova.compute.manager [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.386 254096 DEBUG nova.compute.manager [None req-7af2a72e-3ab8-4168-b629-d45248412868 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 73 op/s
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.611 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.612 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.612 254096 INFO nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Rebooting instance#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.625 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.626 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquired lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:42:37 np0005535469 nova_compute[254092]: 2025-11-25 16:42:37.626 254096 DEBUG nova.network.neutron [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:42:38 np0005535469 nova_compute[254092]: 2025-11-25 16:42:38.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:38 np0005535469 nova_compute[254092]: 2025-11-25 16:42:38.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 11:42:39 np0005535469 podman[328660]: 2025-11-25 16:42:39.654674849 +0000 UTC m=+0.073779872 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:42:39 np0005535469 podman[328659]: 2025-11-25 16:42:39.672454081 +0000 UTC m=+0.091983266 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:42:39 np0005535469 podman[328661]: 2025-11-25 16:42:39.675500014 +0000 UTC m=+0.090789184 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:42:39 np0005535469 nova_compute[254092]: 2025-11-25 16:42:39.856 254096 DEBUG nova.network.neutron [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.016 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Releasing lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.017 254096 DEBUG nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:42:40
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:42:40 np0005535469 kernel: tapce73fc27-d7 (unregistering): left promiscuous mode
Nov 25 11:42:40 np0005535469 NetworkManager[48891]: <info>  [1764088960.3425] device (tapce73fc27-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:40Z|00666|binding|INFO|Releasing lport ce73fc27-d707-4321-8e2e-f77bd4b984ad from this chassis (sb_readonly=0)
Nov 25 11:42:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:40Z|00667|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad down in Southbound
Nov 25 11:42:40 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:40Z|00668|binding|INFO|Removing iface tapce73fc27-d7 ovn-installed in OVS
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.413 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.414 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a unbound from our chassis#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.415 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 203581b6-f356-4499-9dc8-abafe93b350a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ead0fcc-93d1-4668-a5ea-05f98ad65207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.416 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace which is not needed anymore#033[00m
Nov 25 11:42:40 np0005535469 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 25 11:42:40 np0005535469 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000046.scope: Consumed 8.588s CPU time.
Nov 25 11:42:40 np0005535469 systemd-machined[216343]: Machine qemu-82-instance-00000046 terminated.
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:42:40 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : haproxy version is 2.8.14-c23fe91
Nov 25 11:42:40 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : path to executable is /usr/sbin/haproxy
Nov 25 11:42:40 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [WARNING]  (328646) : Exiting Master process...
Nov 25 11:42:40 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [ALERT]    (328646) : Current worker (328648) exited with code 143 (Terminated)
Nov 25 11:42:40 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [WARNING]  (328646) : All workers exited. Exiting... (0)
Nov 25 11:42:40 np0005535469 systemd[1]: libpod-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7.scope: Deactivated successfully.
Nov 25 11:42:40 np0005535469 podman[328745]: 2025-11-25 16:42:40.567630548 +0000 UTC m=+0.061962482 container died d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:42:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7-userdata-shm.mount: Deactivated successfully.
Nov 25 11:42:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5789ea30890dfa3f5bff74e20beffed1576d518a78c46e7d983852961c8fdd8a-merged.mount: Deactivated successfully.
Nov 25 11:42:40 np0005535469 podman[328745]: 2025-11-25 16:42:40.619205326 +0000 UTC m=+0.113537240 container cleanup d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 systemd[1]: libpod-conmon-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7.scope: Deactivated successfully.
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.640 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance destroyed successfully.#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.641 254096 DEBUG nova.objects.instance [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'resources' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.661 254096 DEBUG nova.virt.libvirt.vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:40Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.662 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.664 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.664 254096 DEBUG os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce73fc27-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.673 254096 INFO os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.680 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start _get_guest_xml network_info=[{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.684 254096 WARNING nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.694 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.695 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:42:40 np0005535469 podman[328778]: 2025-11-25 16:42:40.697355907 +0000 UTC m=+0.050968044 container remove d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.699 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.700 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.700 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.701 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.701 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.701 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.objects.instance [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'vcpu_model' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.704 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1525a9-63b2-4e8f-8a03-a8ff4104c247]: (4, ('Tue Nov 25 04:42:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7)\nd600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7\nTue Nov 25 04:42:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7)\nd600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.705 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e2abe1-68a3-4d24-bf06-972bfff79118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.706 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 kernel: tap203581b6-f0: left promiscuous mode
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.726 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03678243-95c5-44b7-86fe-a2b2acc1d82b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce67f42b-03b0-417c-b653-8616306727fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.749 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbb22b-f3c4-4c18-b4ab-90b4c7cb118c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 nova_compute[254092]: 2025-11-25 16:42:40.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4fc717-4dc0-4779-a70d-741b21417394]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540260, 'reachable_time': 19161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328798, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 systemd[1]: run-netns-ovnmeta\x2d203581b6\x2df356\x2d4499\x2d9dc8\x2dabafe93b350a.mount: Deactivated successfully.
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.769 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:42:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.769 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6876b98e-aadf-4b53-9e1a-f71eec06029e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038257087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.170 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.201 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 11:42:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649746531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.635 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.637 254096 DEBUG nova.virt.libvirt.vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:40Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.637 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.638 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.639 254096 DEBUG nova.objects.instance [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'pci_devices' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.653 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <uuid>e887a377-e792-462d-8bcd-002a93dac12d</uuid>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <name>instance-00000046</name>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:name>tempest-InstanceActionsTestJSON-server-1282971748</nova:name>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:42:40</nova:creationTime>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:user uuid="e76c1b261c0442caa52f39297ccf296d">tempest-InstanceActionsTestJSON-16048106-project-member</nova:user>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:project uuid="ea024a03380a4251a920e126716935de">tempest-InstanceActionsTestJSON-16048106</nova:project>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <nova:port uuid="ce73fc27-d707-4321-8e2e-f77bd4b984ad">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <entry name="serial">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <entry name="uuid">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk.config">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:5e:35:7f"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <target dev="tapce73fc27-d7"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/console.log" append="off"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:42:41 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:42:41 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:42:41 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:42:41 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.654 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.655 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.655 254096 DEBUG nova.virt.libvirt.vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:40Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.655 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.656 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.656 254096 DEBUG os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.659 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce73fc27-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.660 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce73fc27-d7, col_values=(('external_ids', {'iface-id': 'ce73fc27-d707-4321-8e2e-f77bd4b984ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:35:7f', 'vm-uuid': 'e887a377-e792-462d-8bcd-002a93dac12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:41 np0005535469 NetworkManager[48891]: <info>  [1764088961.6623] manager: (tapce73fc27-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.667 254096 INFO os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')#033[00m
Nov 25 11:42:41 np0005535469 kernel: tapce73fc27-d7: entered promiscuous mode
Nov 25 11:42:41 np0005535469 systemd-udevd[328727]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:41 np0005535469 NetworkManager[48891]: <info>  [1764088961.7253] manager: (tapce73fc27-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Nov 25 11:42:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:41Z|00669|binding|INFO|Claiming lport ce73fc27-d707-4321-8e2e-f77bd4b984ad for this chassis.
Nov 25 11:42:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:41Z|00670|binding|INFO|ce73fc27-d707-4321-8e2e-f77bd4b984ad: Claiming fa:16:3e:5e:35:7f 10.100.0.6
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.727 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:41 np0005535469 NetworkManager[48891]: <info>  [1764088961.7355] device (tapce73fc27-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.735 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:41 np0005535469 NetworkManager[48891]: <info>  [1764088961.7368] device (tapce73fc27-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.737 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a bound to our chassis#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.738 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 203581b6-f356-4499-9dc8-abafe93b350a#033[00m
Nov 25 11:42:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:41Z|00671|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad ovn-installed in OVS
Nov 25 11:42:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:41Z|00672|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad up in Southbound
Nov 25 11:42:41 np0005535469 nova_compute[254092]: 2025-11-25 16:42:41.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.748 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d433513d-b9bb-495b-841a-d77039cdd668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.749 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap203581b6-f1 in ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.751 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap203581b6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.751 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aceb397a-72cd-42bf-b48a-8c86225dd341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.752 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[96273c90-e33d-439a-b6c6-02ecdce19f63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 systemd-machined[216343]: New machine qemu-83-instance-00000046.
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.764 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fe790576-7757-4169-ab04-1568c98491c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 systemd[1]: Started Virtual Machine qemu-83-instance-00000046.
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.786 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2028d0-1f85-4a85-b9ba-a719a8402120]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.813 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[12d98f0a-729d-40cc-827b-057ea1382395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.817 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7e364a-5159-41dd-899a-f615b40da74f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 NetworkManager[48891]: <info>  [1764088961.8181] manager: (tap203581b6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/288)
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.845 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[74ce8ed8-20d3-455d-a7ca-a3d02cbab975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.847 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eedb7ddd-06a1-420e-b59f-0049d282b9e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 NetworkManager[48891]: <info>  [1764088961.8693] device (tap203581b6-f0): carrier: link connected
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.874 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c0846485-8ebc-4b81-b82d-9d58f0e9bccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.890 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f571aa5-9da4-488b-abe1-395c78d45047]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541946, 'reachable_time': 36665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328905, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5756fe8-90f3-488d-9ab1-56a423f663c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:fc5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541946, 'tstamp': 541946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328906, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7cf8335-b7ac-47e4-9b11-dda9cb3f88ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541946, 'reachable_time': 36665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328907, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.950 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2679b167-e0b4-4fa2-ad64-dc1deee02b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20792e6c-b0fe-406b-bab0-7f1a9ff94a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap203581b6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:42 np0005535469 NetworkManager[48891]: <info>  [1764088962.0154] manager: (tap203581b6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Nov 25 11:42:42 np0005535469 kernel: tap203581b6-f0: entered promiscuous mode
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.019 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap203581b6-f0, col_values=(('external_ids', {'iface-id': '43fd9010-c369-4c3b-8331-da7c798cb131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:42Z|00673|binding|INFO|Releasing lport 43fd9010-c369-4c3b-8331-da7c798cb131 from this chassis (sb_readonly=0)
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.022 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.023 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7da86e35-9a5e-4cf8-a733-b65f369a07cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.024 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:42:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.025 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'env', 'PROCESS_TAG=haproxy-203581b6-f356-4499-9dc8-abafe93b350a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/203581b6-f356-4499-9dc8-abafe93b350a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.150 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for e887a377-e792-462d-8bcd-002a93dac12d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.150 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088962.1497972, e887a377-e792-462d-8bcd-002a93dac12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.151 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.153 254096 DEBUG nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.156 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance rebooted successfully.#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.157 254096 DEBUG nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.182 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.186 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.225 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.226 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088962.15123, e887a377-e792-462d-8bcd-002a93dac12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.226 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Started (Lifecycle Event)#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.241 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.271 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:42 np0005535469 nova_compute[254092]: 2025-11-25 16:42:42.273 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:42 np0005535469 podman[328981]: 2025-11-25 16:42:42.377845128 +0000 UTC m=+0.048219549 container create 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 11:42:42 np0005535469 systemd[1]: Started libpod-conmon-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526.scope.
Nov 25 11:42:42 np0005535469 podman[328981]: 2025-11-25 16:42:42.350442174 +0000 UTC m=+0.020816625 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:42:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16913faf8a5c974f1946e4ed69e9941f406ff9e2fad5af1678ff5c86655f484c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:42 np0005535469 podman[328981]: 2025-11-25 16:42:42.480689588 +0000 UTC m=+0.151064019 container init 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:42:42 np0005535469 podman[328981]: 2025-11-25 16:42:42.487281697 +0000 UTC m=+0.157656138 container start 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:42:42 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : New worker (329003) forked
Nov 25 11:42:42 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : Loading success.
Nov 25 11:42:43 np0005535469 nova_compute[254092]: 2025-11-25 16:42:43.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.356 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.357 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.357 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.358 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.358 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.359 254096 INFO nova.compute.manager [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Terminating instance#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.360 254096 DEBUG nova.compute.manager [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:42:44 np0005535469 kernel: tapce73fc27-d7 (unregistering): left promiscuous mode
Nov 25 11:42:44 np0005535469 NetworkManager[48891]: <info>  [1764088964.4092] device (tapce73fc27-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:42:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:44Z|00674|binding|INFO|Releasing lport ce73fc27-d707-4321-8e2e-f77bd4b984ad from this chassis (sb_readonly=0)
Nov 25 11:42:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:44Z|00675|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad down in Southbound
Nov 25 11:42:44 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:44Z|00676|binding|INFO|Removing iface tapce73fc27-d7 ovn-installed in OVS
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.429 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.430 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a unbound from our chassis#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.431 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 203581b6-f356-4499-9dc8-abafe93b350a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.433 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[588a9af2-7ca9-4d5d-a1f9-f2eccf59e218]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.434 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace which is not needed anymore#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 25 11:42:44 np0005535469 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000046.scope: Consumed 2.627s CPU time.
Nov 25 11:42:44 np0005535469 systemd-machined[216343]: Machine qemu-83-instance-00000046 terminated.
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:42:44 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : haproxy version is 2.8.14-c23fe91
Nov 25 11:42:44 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : path to executable is /usr/sbin/haproxy
Nov 25 11:42:44 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [WARNING]  (329001) : Exiting Master process...
Nov 25 11:42:44 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [ALERT]    (329001) : Current worker (329003) exited with code 143 (Terminated)
Nov 25 11:42:44 np0005535469 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [WARNING]  (329001) : All workers exited. Exiting... (0)
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:42:44 np0005535469 systemd[1]: libpod-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526.scope: Deactivated successfully.
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:42:44 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 729ea660-28c2-4fec-bbb4-319f3ca59699 does not exist
Nov 25 11:42:44 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev aaf0e74c-dd12-4261-97cc-c4e5c66d58e3 does not exist
Nov 25 11:42:44 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0f0d044c-b72c-40eb-b2a2-a01ec5e40948 does not exist
Nov 25 11:42:44 np0005535469 podman[329167]: 2025-11-25 16:42:44.575845999 +0000 UTC m=+0.046884453 container died 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:42:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.592 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance destroyed successfully.#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.593 254096 DEBUG nova.objects.instance [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'resources' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.603 254096 DEBUG nova.virt.libvirt.vif [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:42Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.604 254096 DEBUG nova.network.os_vif_util [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.605 254096 DEBUG nova.network.os_vif_util [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.605 254096 DEBUG os_vif [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:42:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526-userdata-shm.mount: Deactivated successfully.
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.607 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce73fc27-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-16913faf8a5c974f1946e4ed69e9941f406ff9e2fad5af1678ff5c86655f484c-merged.mount: Deactivated successfully.
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.612 254096 INFO os_vif [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')#033[00m
Nov 25 11:42:44 np0005535469 podman[329167]: 2025-11-25 16:42:44.620535551 +0000 UTC m=+0.091573995 container cleanup 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:42:44 np0005535469 systemd[1]: libpod-conmon-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526.scope: Deactivated successfully.
Nov 25 11:42:44 np0005535469 podman[329237]: 2025-11-25 16:42:44.6949574 +0000 UTC m=+0.051714514 container remove 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0edbfa-9d02-485d-b025-dcc27001dbfc]: (4, ('Tue Nov 25 04:42:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526)\n47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526\nTue Nov 25 04:42:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526)\n47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07a92950-cdce-4e3c-a08c-ff07d45637e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.704 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 kernel: tap203581b6-f0: left promiscuous mode
Nov 25 11:42:44 np0005535469 nova_compute[254092]: 2025-11-25 16:42:44.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.725 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05d2d850-c9f6-44fe-b321-1d4a5256e41f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[44cc6d00-9296-4766-8b63-62e67552ecca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc250b2-aa9b-4113-96b2-1089baeec61b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.756 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ec534d5f-9550-42ba-8531-84f9e0ff1ba9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541939, 'reachable_time': 32130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329306, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.759 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:42:44 np0005535469 systemd[1]: run-netns-ovnmeta\x2d203581b6\x2df356\x2d4499\x2d9dc8\x2dabafe93b350a.mount: Deactivated successfully.
Nov 25 11:42:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.759 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd5dd9a-561c-413e-ac8a-02cc87590353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:45 np0005535469 nova_compute[254092]: 2025-11-25 16:42:45.021 254096 INFO nova.virt.libvirt.driver [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deleting instance files /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d_del#033[00m
Nov 25 11:42:45 np0005535469 nova_compute[254092]: 2025-11-25 16:42:45.022 254096 INFO nova.virt.libvirt.driver [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deletion of /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d_del complete#033[00m
Nov 25 11:42:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:42:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:42:45 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:42:45 np0005535469 nova_compute[254092]: 2025-11-25 16:42:45.068 254096 INFO nova.compute.manager [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:42:45 np0005535469 nova_compute[254092]: 2025-11-25 16:42:45.069 254096 DEBUG oslo.service.loopingcall [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:42:45 np0005535469 nova_compute[254092]: 2025-11-25 16:42:45.070 254096 DEBUG nova.compute.manager [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:42:45 np0005535469 nova_compute[254092]: 2025-11-25 16:42:45.070 254096 DEBUG nova.network.neutron [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.155902275 +0000 UTC m=+0.040450739 container create 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:42:45 np0005535469 systemd[1]: Started libpod-conmon-483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f.scope.
Nov 25 11:42:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.139254134 +0000 UTC m=+0.023802618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.238632819 +0000 UTC m=+0.123181303 container init 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.248115697 +0000 UTC m=+0.132664161 container start 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.251246262 +0000 UTC m=+0.135794726 container attach 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 11:42:45 np0005535469 suspicious_satoshi[329398]: 167 167
Nov 25 11:42:45 np0005535469 systemd[1]: libpod-483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f.scope: Deactivated successfully.
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.254747927 +0000 UTC m=+0.139296411 container died 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:42:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b639813311d0af849d826cf090bba73658a9b6757e54569c2a61c66b82337996-merged.mount: Deactivated successfully.
Nov 25 11:42:45 np0005535469 podman[329381]: 2025-11-25 16:42:45.292513321 +0000 UTC m=+0.177061785 container remove 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:42:45 np0005535469 systemd[1]: libpod-conmon-483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f.scope: Deactivated successfully.
Nov 25 11:42:45 np0005535469 podman[329421]: 2025-11-25 16:42:45.44619034 +0000 UTC m=+0.042993587 container create 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:42:45 np0005535469 systemd[1]: Started libpod-conmon-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope.
Nov 25 11:42:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:45 np0005535469 podman[329421]: 2025-11-25 16:42:45.425012506 +0000 UTC m=+0.021815783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:42:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:45 np0005535469 podman[329421]: 2025-11-25 16:42:45.536267154 +0000 UTC m=+0.133070431 container init 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:42:45 np0005535469 podman[329421]: 2025-11-25 16:42:45.545374392 +0000 UTC m=+0.142177639 container start 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:42:45 np0005535469 podman[329421]: 2025-11-25 16:42:45.548797534 +0000 UTC m=+0.145600801 container attach 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:42:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 47 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 853 B/s wr, 144 op/s
Nov 25 11:42:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.141 254096 DEBUG nova.network.neutron [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.160 254096 INFO nova.compute.manager [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.210 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.211 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.263 254096 DEBUG oslo_concurrency.processutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.344 254096 DEBUG nova.compute.manager [req-a9de1b94-714b-4ad0-94a5-0989047d30cb req-2b21b84b-df4c-4202-85fa-06f82996f233 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-vif-deleted-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:42:46 np0005535469 goofy_hermann[329438]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:42:46 np0005535469 goofy_hermann[329438]: --> relative data size: 1.0
Nov 25 11:42:46 np0005535469 goofy_hermann[329438]: --> All data devices are unavailable
Nov 25 11:42:46 np0005535469 systemd[1]: libpod-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope: Deactivated successfully.
Nov 25 11:42:46 np0005535469 podman[329421]: 2025-11-25 16:42:46.657065522 +0000 UTC m=+1.253868769 container died 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:42:46 np0005535469 systemd[1]: libpod-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope: Consumed 1.047s CPU time.
Nov 25 11:42:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02-merged.mount: Deactivated successfully.
Nov 25 11:42:46 np0005535469 podman[329421]: 2025-11-25 16:42:46.718430655 +0000 UTC m=+1.315233902 container remove 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:42:46 np0005535469 systemd[1]: libpod-conmon-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope: Deactivated successfully.
Nov 25 11:42:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54234647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.764 254096 DEBUG oslo_concurrency.processutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.771 254096 DEBUG nova.compute.provider_tree [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.788 254096 DEBUG nova.scheduler.client.report [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.811 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.843 254096 INFO nova.scheduler.client.report [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Deleted allocations for instance e887a377-e792-462d-8bcd-002a93dac12d#033[00m
Nov 25 11:42:46 np0005535469 nova_compute[254092]: 2025-11-25 16:42:46.901 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.300806835 +0000 UTC m=+0.039333598 container create bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 11:42:47 np0005535469 systemd[1]: Started libpod-conmon-bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44.scope.
Nov 25 11:42:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.285283164 +0000 UTC m=+0.023809957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.39533825 +0000 UTC m=+0.133865033 container init bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.404011145 +0000 UTC m=+0.142537908 container start bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.407938312 +0000 UTC m=+0.146465075 container attach bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 11:42:47 np0005535469 unruffled_mendeleev[329659]: 167 167
Nov 25 11:42:47 np0005535469 systemd[1]: libpod-bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44.scope: Deactivated successfully.
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.41336951 +0000 UTC m=+0.151896273 container died bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:42:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-aedef1d132b3938a6898ce5e3b04c6412c0ce47cc4c6d6bcba01882e08cb1c13-merged.mount: Deactivated successfully.
Nov 25 11:42:47 np0005535469 podman[329643]: 2025-11-25 16:42:47.453814147 +0000 UTC m=+0.192340910 container remove bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:42:47 np0005535469 systemd[1]: libpod-conmon-bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44.scope: Deactivated successfully.
Nov 25 11:42:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 41 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 109 op/s
Nov 25 11:42:47 np0005535469 podman[329683]: 2025-11-25 16:42:47.618301779 +0000 UTC m=+0.040334296 container create 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:42:47 np0005535469 systemd[1]: Started libpod-conmon-23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7.scope.
Nov 25 11:42:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:47 np0005535469 podman[329683]: 2025-11-25 16:42:47.602237903 +0000 UTC m=+0.024270350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:42:47 np0005535469 podman[329683]: 2025-11-25 16:42:47.702462513 +0000 UTC m=+0.124494950 container init 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:42:47 np0005535469 podman[329683]: 2025-11-25 16:42:47.709727899 +0000 UTC m=+0.131760306 container start 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:42:47 np0005535469 podman[329683]: 2025-11-25 16:42:47.712309309 +0000 UTC m=+0.134341726 container attach 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.794 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.795 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.811 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.880 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.880 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.888 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:42:47 np0005535469 nova_compute[254092]: 2025-11-25 16:42:47.889 254096 INFO nova.compute.claims [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.013 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351790797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.479 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.486 254096 DEBUG nova.compute.provider_tree [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]: {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:    "0": [
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:        {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "devices": [
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "/dev/loop3"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            ],
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_name": "ceph_lv0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_size": "21470642176",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "name": "ceph_lv0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "tags": {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cluster_name": "ceph",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.crush_device_class": "",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.encrypted": "0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osd_id": "0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.type": "block",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.vdo": "0"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            },
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "type": "block",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "vg_name": "ceph_vg0"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:        }
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:    ],
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:    "1": [
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:        {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "devices": [
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "/dev/loop4"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            ],
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_name": "ceph_lv1",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_size": "21470642176",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "name": "ceph_lv1",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "tags": {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cluster_name": "ceph",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.crush_device_class": "",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.encrypted": "0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osd_id": "1",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.type": "block",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.vdo": "0"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            },
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "type": "block",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "vg_name": "ceph_vg1"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:        }
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:    ],
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:    "2": [
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:        {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "devices": [
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "/dev/loop5"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            ],
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_name": "ceph_lv2",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_size": "21470642176",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "name": "ceph_lv2",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "tags": {
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.cluster_name": "ceph",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.crush_device_class": "",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.encrypted": "0",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osd_id": "2",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.type": "block",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:                "ceph.vdo": "0"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            },
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "type": "block",
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:            "vg_name": "ceph_vg2"
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:        }
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]:    ]
Nov 25 11:42:48 np0005535469 musing_driscoll[329700]: }
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.507 254096 DEBUG nova.scheduler.client.report [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:48 np0005535469 systemd[1]: libpod-23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7.scope: Deactivated successfully.
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.535 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:48 np0005535469 podman[329683]: 2025-11-25 16:42:48.53653512 +0000 UTC m=+0.958567537 container died 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.537 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:42:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27-merged.mount: Deactivated successfully.
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.578 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.578 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:42:48 np0005535469 podman[329683]: 2025-11-25 16:42:48.595816939 +0000 UTC m=+1.017849356 container remove 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.597 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:42:48 np0005535469 systemd[1]: libpod-conmon-23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7.scope: Deactivated successfully.
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.612 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.709 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.711 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.711 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Creating image(s)#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.737 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.769 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.796 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.802 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.871 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.872 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.872 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.872 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.892 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.895 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ed4031e8-a918-4816-b9b8-b1134a086f8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:48 np0005535469 nova_compute[254092]: 2025-11-25 16:42:48.927 254096 DEBUG nova.policy [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3c9e536e4984598a1b18e79b453cbde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20d786999bc74073bae1fde6aede7fcd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.197 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ed4031e8-a918-4816-b9b8-b1134a086f8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.271797438 +0000 UTC m=+0.049730261 container create 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.276 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] resizing rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:42:49 np0005535469 systemd[1]: Started libpod-conmon-659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907.scope.
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.247336314 +0000 UTC m=+0.025269157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:42:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.370937277 +0000 UTC m=+0.148870120 container init 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.385311167 +0000 UTC m=+0.163244010 container start 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.390474707 +0000 UTC m=+0.168407560 container attach 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 11:42:49 np0005535469 wizardly_heyrovsky[330043]: 167 167
Nov 25 11:42:49 np0005535469 systemd[1]: libpod-659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907.scope: Deactivated successfully.
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.395408061 +0000 UTC m=+0.173340904 container died 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.415 254096 DEBUG nova.objects.instance [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lazy-loading 'migration_context' on Instance uuid ed4031e8-a918-4816-b9b8-b1134a086f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:42:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-105daf22d5684f8b0ef198339bd145d8bbff71ad58844a3ace581a7d88e39c49-merged.mount: Deactivated successfully.
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.426 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.426 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Ensure instance console log exists: /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.427 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.427 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.427 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:49 np0005535469 podman[329980]: 2025-11-25 16:42:49.438162171 +0000 UTC m=+0.216094994 container remove 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:42:49 np0005535469 systemd[1]: libpod-conmon-659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907.scope: Deactivated successfully.
Nov 25 11:42:49 np0005535469 podman[330086]: 2025-11-25 16:42:49.581999944 +0000 UTC m=+0.037988223 container create 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:42:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 41 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 25 11:42:49 np0005535469 nova_compute[254092]: 2025-11-25 16:42:49.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:49 np0005535469 podman[330086]: 2025-11-25 16:42:49.564221371 +0000 UTC m=+0.020209670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:42:49 np0005535469 systemd[1]: Started libpod-conmon-71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14.scope.
Nov 25 11:42:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:49 np0005535469 podman[330086]: 2025-11-25 16:42:49.711033074 +0000 UTC m=+0.167021403 container init 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:42:49 np0005535469 podman[330086]: 2025-11-25 16:42:49.720438709 +0000 UTC m=+0.176426998 container start 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:42:49 np0005535469 podman[330086]: 2025-11-25 16:42:49.724782977 +0000 UTC m=+0.180771306 container attach 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]: {
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "osd_id": 1,
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "type": "bluestore"
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:    },
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "osd_id": 2,
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "type": "bluestore"
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:    },
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "osd_id": 0,
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:        "type": "bluestore"
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]:    }
Nov 25 11:42:50 np0005535469 jolly_aryabhata[330103]: }
Nov 25 11:42:50 np0005535469 systemd[1]: libpod-71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14.scope: Deactivated successfully.
Nov 25 11:42:50 np0005535469 podman[330086]: 2025-11-25 16:42:50.709414119 +0000 UTC m=+1.165402398 container died 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:42:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046-merged.mount: Deactivated successfully.
Nov 25 11:42:50 np0005535469 nova_compute[254092]: 2025-11-25 16:42:50.768 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Successfully created port: b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:42:50 np0005535469 podman[330086]: 2025-11-25 16:42:50.768204994 +0000 UTC m=+1.224193313 container remove 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:42:50 np0005535469 systemd[1]: libpod-conmon-71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14.scope: Deactivated successfully.
Nov 25 11:42:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:42:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:42:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:42:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:42:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 994a639a-9258-4fcd-9713-3b5971ccf6f3 does not exist
Nov 25 11:42:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 500d9054-3604-428a-a201-0201c38fdb92 does not exist
Nov 25 11:42:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:42:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:42:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.722 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Successfully updated port: b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.735 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.736 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquired lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.736 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.835 254096 DEBUG nova.compute.manager [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-changed-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.835 254096 DEBUG nova.compute.manager [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Refreshing instance network info cache due to event network-changed-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.836 254096 DEBUG oslo_concurrency.lockutils [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:42:52 np0005535469 nova_compute[254092]: 2025-11-25 16:42:52.991 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:42:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.881 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updating instance_info_cache with network_info: [{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.911 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Releasing lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.912 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance network_info: |[{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.912 254096 DEBUG oslo_concurrency.lockutils [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.912 254096 DEBUG nova.network.neutron [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Refreshing network info cache for port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.915 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start _get_guest_xml network_info=[{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.919 254096 WARNING nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.928 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.929 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.932 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.932 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:42:53 np0005535469 nova_compute[254092]: 2025-11-25 16:42:53.938 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31402073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.371 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.394 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.397 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:42:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049341593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.825 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.826 254096 DEBUG nova.virt.libvirt.vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1926067749',display_name='tempest-ServerAddressesNegativeTestJSON-server-1926067749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1926067749',id=71,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20d786999bc74073bae1fde6aede7fcd',ramdisk_id='',reservation_id='r-wet8rv2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-772969752',owner
_user_name='tempest-ServerAddressesNegativeTestJSON-772969752-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:48Z,user_data=None,user_id='e3c9e536e4984598a1b18e79b453cbde',uuid=ed4031e8-a918-4816-b9b8-b1134a086f8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.827 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converting VIF {"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.827 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.829 254096 DEBUG nova.objects.instance [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lazy-loading 'pci_devices' on Instance uuid ed4031e8-a918-4816-b9b8-b1134a086f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.845 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <uuid>ed4031e8-a918-4816-b9b8-b1134a086f8b</uuid>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <name>instance-00000047</name>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1926067749</nova:name>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:42:53</nova:creationTime>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:user uuid="e3c9e536e4984598a1b18e79b453cbde">tempest-ServerAddressesNegativeTestJSON-772969752-project-member</nova:user>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:project uuid="20d786999bc74073bae1fde6aede7fcd">tempest-ServerAddressesNegativeTestJSON-772969752</nova:project>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <nova:port uuid="b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <entry name="serial">ed4031e8-a918-4816-b9b8-b1134a086f8b</entry>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <entry name="uuid">ed4031e8-a918-4816-b9b8-b1134a086f8b</entry>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/ed4031e8-a918-4816-b9b8-b1134a086f8b_disk">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:2f:d4:24"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <target dev="tapb0cf5f9a-6e"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/console.log" append="off"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:42:54 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:42:54 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:42:54 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:42:54 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.848 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Preparing to wait for external event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.848 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.849 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.849 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.850 254096 DEBUG nova.virt.libvirt.vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1926067749',display_name='tempest-ServerAddressesNegativeTestJSON-server-1926067749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1926067749',id=71,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20d786999bc74073bae1fde6aede7fcd',ramdisk_id='',reservation_id='r-wet8rv2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-772969
752',owner_user_name='tempest-ServerAddressesNegativeTestJSON-772969752-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:48Z,user_data=None,user_id='e3c9e536e4984598a1b18e79b453cbde',uuid=ed4031e8-a918-4816-b9b8-b1134a086f8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.851 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converting VIF {"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.852 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.852 254096 DEBUG os_vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.854 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.855 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.860 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.860 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0cf5f9a-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.861 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0cf5f9a-6e, col_values=(('external_ids', {'iface-id': 'b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:d4:24', 'vm-uuid': 'ed4031e8-a918-4816-b9b8-b1134a086f8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:54 np0005535469 NetworkManager[48891]: <info>  [1764088974.8641] manager: (tapb0cf5f9a-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.872 254096 INFO os_vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e')#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.927 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.928 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.929 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] No VIF found with MAC fa:16:3e:2f:d4:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.929 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Using config drive#033[00m
Nov 25 11:42:54 np0005535469 nova_compute[254092]: 2025-11-25 16:42:54.950 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:42:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3345856000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:42:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:42:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3345856000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:42:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 11:42:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.019 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Creating config drive at /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.024 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzu2zenz_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.160 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzu2zenz_" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.188 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.193 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.350 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.351 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deleting local config drive /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:42:56 np0005535469 kernel: tapb0cf5f9a-6e: entered promiscuous mode
Nov 25 11:42:56 np0005535469 NetworkManager[48891]: <info>  [1764088976.4215] manager: (tapb0cf5f9a-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:56Z|00677|binding|INFO|Claiming lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for this chassis.
Nov 25 11:42:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:56Z|00678|binding|INFO|b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4: Claiming fa:16:3e:2f:d4:24 10.100.0.13
Nov 25 11:42:56 np0005535469 systemd-udevd[330334]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.474 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d4:24 10.100.0.13'], port_security=['fa:16:3e:2f:d4:24 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ed4031e8-a918-4816-b9b8-b1134a086f8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3f25be-8960-4c87-987c-97d65f879d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20d786999bc74073bae1fde6aede7fcd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4b33984-130a-4842-ad65-63b97381bcf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6108dd19-2654-4369-9d7f-d2bb4f54095b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.474 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 in datapath aa3f25be-8960-4c87-987c-97d65f879d23 bound to our chassis#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.475 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3f25be-8960-4c87-987c-97d65f879d23#033[00m
Nov 25 11:42:56 np0005535469 NetworkManager[48891]: <info>  [1764088976.4787] device (tapb0cf5f9a-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:42:56 np0005535469 NetworkManager[48891]: <info>  [1764088976.4799] device (tapb0cf5f9a-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.486 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24999901-1243-4aa3-96be-a9242511930f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.487 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa3f25be-81 in ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.489 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa3f25be-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.489 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f166ecc8-6dc7-459a-ac89-d58e1562b1dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[636b3cc7-35be-4311-8aff-42c368d5f420]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 systemd-machined[216343]: New machine qemu-84-instance-00000047.
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.503 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[32ff4d74-ccd0-4ff8-898d-2dddef241c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.530 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45491d41-cf66-45a6-bff6-5ec57c013ee1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 systemd[1]: Started Virtual Machine qemu-84-instance-00000047.
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:56Z|00679|binding|INFO|Setting lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 ovn-installed in OVS
Nov 25 11:42:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:56Z|00680|binding|INFO|Setting lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 up in Southbound
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.558 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc565b68-0919-4fe5-b789-484340b90ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.564 254096 DEBUG nova.network.neutron [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updated VIF entry in instance network info cache for port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.565 254096 DEBUG nova.network.neutron [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updating instance_info_cache with network_info: [{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:42:56 np0005535469 systemd-udevd[330338]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:42:56 np0005535469 NetworkManager[48891]: <info>  [1764088976.5659] manager: (tapaa3f25be-80): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.565 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fbf455-6c02-40d3-a89a-2da97523edd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.578 254096 DEBUG oslo_concurrency.lockutils [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.601 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a2f4b9-49ec-4622-a990-e7dcf63f91f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.604 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[136838a3-2d13-4a42-b9fe-e01a12564a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 NetworkManager[48891]: <info>  [1764088976.6291] device (tapaa3f25be-80): carrier: link connected
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.637 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0f852db2-c5da-4ea1-823e-41f81f0d1dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.658 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[61118e7f-d2d9-4a94-8445-96002efc1c84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3f25be-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:0e:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543421, 'reachable_time': 25326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330370, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.675 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aba46940-681d-4545-a406-c644410d6a11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:e31'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543421, 'tstamp': 543421}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330371, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b374673c-e1c9-4329-8328-a9f35cee6bdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3f25be-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:0e:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543421, 'reachable_time': 25326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330372, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0d9a7c-97b5-4176-b302-ca93211d093e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a808d87-8d27-4508-8b55-70d2f10787ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.799 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3f25be-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.799 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.799 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3f25be-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:56 np0005535469 NetworkManager[48891]: <info>  [1764088976.8025] manager: (tapaa3f25be-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Nov 25 11:42:56 np0005535469 kernel: tapaa3f25be-80: entered promiscuous mode
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.805 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3f25be-80, col_values=(('external_ids', {'iface-id': 'a8379971-0cc4-450a-a8ab-bf056efebfda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:42:56Z|00681|binding|INFO|Releasing lport a8379971-0cc4-450a-a8ab-bf056efebfda from this chassis (sb_readonly=0)
Nov 25 11:42:56 np0005535469 nova_compute[254092]: 2025-11-25 16:42:56.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.823 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa3f25be-8960-4c87-987c-97d65f879d23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa3f25be-8960-4c87-987c-97d65f879d23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[321a7e56-b67c-4416-a13d-287a199f8f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.825 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-aa3f25be-8960-4c87-987c-97d65f879d23
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/aa3f25be-8960-4c87-987c-97d65f879d23.pid.haproxy
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID aa3f25be-8960-4c87-987c-97d65f879d23
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:42:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.826 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'env', 'PROCESS_TAG=haproxy-aa3f25be-8960-4c87-987c-97d65f879d23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa3f25be-8960-4c87-987c-97d65f879d23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.102 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088977.1023457, ed4031e8-a918-4816-b9b8-b1134a086f8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.104 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Started (Lifecycle Event)#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.120 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.123 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088977.1025941, ed4031e8-a918-4816-b9b8-b1134a086f8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.123 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.137 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.141 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:42:57 np0005535469 nova_compute[254092]: 2025-11-25 16:42:57.155 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:42:57 np0005535469 podman[330444]: 2025-11-25 16:42:57.223936657 +0000 UTC m=+0.052797504 container create 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:42:57 np0005535469 systemd[1]: Started libpod-conmon-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2.scope.
Nov 25 11:42:57 np0005535469 podman[330444]: 2025-11-25 16:42:57.193262204 +0000 UTC m=+0.022123081 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:42:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:42:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f606a2dbbfae9d634065c5f557617efb065ff4a0c40004d3347bac53165b01a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:42:57 np0005535469 podman[330444]: 2025-11-25 16:42:57.311884742 +0000 UTC m=+0.140745619 container init 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:42:57 np0005535469 podman[330444]: 2025-11-25 16:42:57.317255708 +0000 UTC m=+0.146116555 container start 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:42:57 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : New worker (330465) forked
Nov 25 11:42:57 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : Loading success.
Nov 25 11:42:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.141 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.142 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.162 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.236 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.237 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.249 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.249 254096 INFO nova.compute.claims [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.388 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:42:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:42:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293241420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.806 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.812 254096 DEBUG nova.compute.provider_tree [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.833 254096 DEBUG nova.scheduler.client.report [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.853 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.854 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.914 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.915 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.942 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:42:58 np0005535469 nova_compute[254092]: 2025-11-25 16:42:58.964 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.088 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.089 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.090 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Creating image(s)
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.112 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.145 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.175 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.180 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.282 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.283 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.284 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.285 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.312 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.316 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.591 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088964.5906055, e887a377-e792-462d-8bcd-002a93dac12d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.592 254096 INFO nova.compute.manager [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Stopped (Lifecycle Event)
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.614 254096 DEBUG nova.compute.manager [None req-76090146-6253-4170-9f7b-a62524b6de36 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.624 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:42:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.699 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] resizing rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.741 254096 DEBUG nova.policy [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aea2d8cf3bb54cdbbc72e41805fb1f90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d15f5aabd3491da5314b126a20225a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.801 254096 DEBUG nova.objects.instance [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'migration_context' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.816 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.817 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Ensure instance console log exists: /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.817 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.818 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.818 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:42:59 np0005535469 nova_compute[254092]: 2025-11-25 16:42:59.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:00 np0005535469 nova_compute[254092]: 2025-11-25 16:43:00.450 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Successfully created port: 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:43:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.055 254096 DEBUG nova.compute.manager [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.055 254096 DEBUG oslo_concurrency.lockutils [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.056 254096 DEBUG oslo_concurrency.lockutils [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.056 254096 DEBUG oslo_concurrency.lockutils [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.056 254096 DEBUG nova.compute.manager [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Processing event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.057 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.060 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088981.0603907, ed4031e8-a918-4816-b9b8-b1134a086f8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.060 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Resumed (Lifecycle Event)
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.063 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.067 254096 INFO nova.virt.libvirt.driver [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance spawned successfully.
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.067 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.098 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.102 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.102 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.103 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.103 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.103 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.104 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.108 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.140 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.220 254096 INFO nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 12.51 seconds to spawn the instance on the hypervisor.
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.220 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.296 254096 INFO nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 13.43 seconds to build instance.
Nov 25 11:43:01 np0005535469 nova_compute[254092]: 2025-11-25 16:43:01.313 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 134 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Nov 25 11:43:02 np0005535469 nova_compute[254092]: 2025-11-25 16:43:02.255 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Successfully updated port: 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:43:02 np0005535469 nova_compute[254092]: 2025-11-25 16:43:02.291 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:43:02 np0005535469 nova_compute[254092]: 2025-11-25 16:43:02.292 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:43:02 np0005535469 nova_compute[254092]: 2025-11-25 16:43:02.292 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:43:02 np0005535469 nova_compute[254092]: 2025-11-25 16:43:02.459 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.147 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.152 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.153 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.153 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.153 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] No waiting events found dispatching network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 WARNING nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received unexpected event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for instance with vm_state active and task_state None.
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing instance network info cache due to event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.155 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.176 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.177 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance network_info: |[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.177 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.178 254096 DEBUG nova.network.neutron [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.181 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.187 254096 WARNING nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.191 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.192 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.199 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.199 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.200 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.200 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.201 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.201 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.201 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.203 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.203 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.203 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.206 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017691378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 134 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.644 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.663 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:03 np0005535469 nova_compute[254092]: 2025-11-25 16:43:03.666 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159350667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.103 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.105 254096 DEBUG nova.virt.libvirt.vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.105 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.106 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.107 254096 DEBUG nova.objects.instance [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.126 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <name>instance-00000048</name>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:43:03</nova:creationTime>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <target dev="tap4fe8c3a9-70"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:43:04 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:43:04 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:43:04 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:43:04 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Preparing to wait for external event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.129 254096 DEBUG nova.virt.libvirt.vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.129 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.130 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.130 254096 DEBUG os_vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.134 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.134 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.134 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:04 np0005535469 NetworkManager[48891]: <info>  [1764088984.1367] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.143 254096 INFO os_vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.211 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.211 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.212 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No VIF found with MAC fa:16:3e:ff:a0:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.212 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Using config drive#033[00m
Nov 25 11:43:04 np0005535469 nova_compute[254092]: 2025-11-25 16:43:04.233 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.170 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Creating config drive at /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.176 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zmineeo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.285 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.288 254096 INFO nova.compute.manager [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Terminating instance#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.289 254096 DEBUG nova.compute.manager [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.315 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zmineeo" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:05 np0005535469 kernel: tapb0cf5f9a-6e (unregistering): left promiscuous mode
Nov 25 11:43:05 np0005535469 NetworkManager[48891]: <info>  [1764088985.3350] device (tapb0cf5f9a-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.340 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.343 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00682|binding|INFO|Releasing lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 from this chassis (sb_readonly=0)
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00683|binding|INFO|Setting lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 down in Southbound
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00684|binding|INFO|Removing iface tapb0cf5f9a-6e ovn-installed in OVS
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.363 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d4:24 10.100.0.13'], port_security=['fa:16:3e:2f:d4:24 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ed4031e8-a918-4816-b9b8-b1134a086f8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3f25be-8960-4c87-987c-97d65f879d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20d786999bc74073bae1fde6aede7fcd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4b33984-130a-4842-ad65-63b97381bcf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6108dd19-2654-4369-9d7f-d2bb4f54095b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.364 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 in datapath aa3f25be-8960-4c87-987c-97d65f879d23 unbound from our chassis#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.365 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa3f25be-8960-4c87-987c-97d65f879d23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.367 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f82b436-97ac-4bc4-bd37-afba0c15009f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.367 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 namespace which is not needed anymore#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 25 11:43:05 np0005535469 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000047.scope: Consumed 4.913s CPU time.
Nov 25 11:43:05 np0005535469 systemd-machined[216343]: Machine qemu-84-instance-00000047 terminated.
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:05 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : haproxy version is 2.8.14-c23fe91
Nov 25 11:43:05 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : path to executable is /usr/sbin/haproxy
Nov 25 11:43:05 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [WARNING]  (330463) : Exiting Master process...
Nov 25 11:43:05 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [WARNING]  (330463) : Exiting Master process...
Nov 25 11:43:05 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [ALERT]    (330463) : Current worker (330465) exited with code 143 (Terminated)
Nov 25 11:43:05 np0005535469 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [WARNING]  (330463) : All workers exited. Exiting... (0)
Nov 25 11:43:05 np0005535469 systemd[1]: libpod-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2.scope: Deactivated successfully.
Nov 25 11:43:05 np0005535469 podman[330810]: 2025-11-25 16:43:05.508323687 +0000 UTC m=+0.053878343 container died 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.526 254096 INFO nova.virt.libvirt.driver [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance destroyed successfully.#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.527 254096 DEBUG nova.objects.instance [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lazy-loading 'resources' on Instance uuid ed4031e8-a918-4816-b9b8-b1134a086f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.529 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.529 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deleting local config drive /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config because it was imported into RBD.#033[00m
Nov 25 11:43:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2-userdata-shm.mount: Deactivated successfully.
Nov 25 11:43:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f606a2dbbfae9d634065c5f557617efb065ff4a0c40004d3347bac53165b01a7-merged.mount: Deactivated successfully.
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.546 254096 DEBUG nova.virt.libvirt.vif [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1926067749',display_name='tempest-ServerAddressesNegativeTestJSON-server-1926067749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1926067749',id=71,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20d786999bc74073bae1fde6aede7fcd',ramdisk_id='',reservation_id='r-wet8rv2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-772969752',owner_user_name='tempest-ServerAddressesNegativeTestJSON-772969752-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:01Z,user_data=None,user_id='e3c9e536e4984598a1b18e79b453cbde',uuid=ed4031e8-a918-4816-b9b8-b1134a086f8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.548 254096 DEBUG nova.network.os_vif_util [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converting VIF {"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.549 254096 DEBUG nova.network.os_vif_util [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.549 254096 DEBUG os_vif [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.553 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0cf5f9a-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:05 np0005535469 podman[330810]: 2025-11-25 16:43:05.559119195 +0000 UTC m=+0.104673861 container cleanup 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.561 254096 INFO os_vif [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e')#033[00m
Nov 25 11:43:05 np0005535469 systemd[1]: libpod-conmon-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2.scope: Deactivated successfully.
Nov 25 11:43:05 np0005535469 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 11:43:05 np0005535469 NetworkManager[48891]: <info>  [1764088985.5971] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 25 11:43:05 np0005535469 systemd-udevd[330772]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00685|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00686|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 NetworkManager[48891]: <info>  [1764088985.6119] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:43:05 np0005535469 NetworkManager[48891]: <info>  [1764088985.6129] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.616 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 134 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 11:43:05 np0005535469 systemd-machined[216343]: New machine qemu-85-instance-00000048.
Nov 25 11:43:05 np0005535469 podman[330867]: 2025-11-25 16:43:05.6466827 +0000 UTC m=+0.059529306 container remove 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.654 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d94a960c-9875-4f79-b312-af39dcf9ff72]: (4, ('Tue Nov 25 04:43:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 (9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2)\n9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2\nTue Nov 25 04:43:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 (9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2)\n9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.655 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eaae98ac-794a-49fd-ad9e-cc70869c9d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.656 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3f25be-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 systemd[1]: Started Virtual Machine qemu-85-instance-00000048.
Nov 25 11:43:05 np0005535469 kernel: tapaa3f25be-80: left promiscuous mode
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da1e742c-62fa-4066-8559-2bf24eda5135]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00687|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 11:43:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:05Z|00688|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d47df880-a1eb-4948-a86a-51b4dbc24489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.710 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2835f68b-a7ca-4246-8bba-5c08fb66797f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.711 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[371b99b1-4347-44cc-b3c6-36298267ebdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543414, 'reachable_time': 34653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330907, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 systemd[1]: run-netns-ovnmeta\x2daa3f25be\x2d8960\x2d4c87\x2d987c\x2d97d65f879d23.mount: Deactivated successfully.
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.728 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.728 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d13116-bf33-4626-871f-156b0453e58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.729 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.730 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bca06b3e-3685-4216-9d8e-25b435efa349]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.743 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.745 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16fd92f0-c505-4003-9f84-11fd69d11084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe6cec5-ec74-4cd1-8f59-ff492c7905ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.758 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b6101ce1-44ee-4af6-a9c9-b0d04e90c21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa529f2-944d-4bae-9d65-1a562ae1b365]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.829 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[130e064a-652b-4bbc-b77d-c77ff5f14f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.834 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8801791a-e4b1-46a0-b410-3165a77d871e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 NetworkManager[48891]: <info>  [1764088985.8363] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Nov 25 11:43:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.870 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fafea6-a74a-4cab-86aa-02a347982c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.873 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[45e4f5f3-a40f-457e-be2e-31ac9f110d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 NetworkManager[48891]: <info>  [1764088985.9024] device (tap50ea1716-90): carrier: link connected
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.906 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7fadef4a-6928-484b-b79a-aec87b059537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.909 254096 DEBUG nova.network.neutron [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updated VIF entry in instance network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.910 254096 DEBUG nova.network.neutron [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02280116-6204-4892-96f1-abd3d1ee4c85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544349, 'reachable_time': 38505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330934, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.925 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c753dd24-07e1-48d1-8f8f-012ca1070a10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544349, 'tstamp': 544349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330935, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.963 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ec3727-c40b-45cb-9c89-739bc9d34a67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544349, 'reachable_time': 38505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330936, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.992 254096 INFO nova.virt.libvirt.driver [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deleting instance files /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b_del#033[00m
Nov 25 11:43:05 np0005535469 nova_compute[254092]: 2025-11-25 16:43:05.993 254096 INFO nova.virt.libvirt.driver [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deletion of /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b_del complete#033[00m
Nov 25 11:43:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.998 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[267c55f4-cb56-46e7-8c79-f3400ebbcf73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6873eb81-730f-40ad-8390-3ea939d44257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.069 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.069 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.069 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:06 np0005535469 NetworkManager[48891]: <info>  [1764088986.0727] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 25 11:43:06 np0005535469 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.075 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:06Z|00689|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.095 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.096 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5832ad8-c1bf-4fa8-bd23-307e9c6c9edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.097 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:43:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.098 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.173 254096 INFO nova.compute.manager [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.174 254096 DEBUG oslo.service.loopingcall [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.174 254096 DEBUG nova.compute.manager [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.174 254096 DEBUG nova.network.neutron [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.445 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088986.4445972, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.446 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.467 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.471 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088986.4447162, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.472 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.488 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.496 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:06 np0005535469 podman[331007]: 2025-11-25 16:43:06.503783243 +0000 UTC m=+0.097617520 container create 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:43:06 np0005535469 nova_compute[254092]: 2025-11-25 16:43:06.514 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:06 np0005535469 podman[331007]: 2025-11-25 16:43:06.432758676 +0000 UTC m=+0.026592973 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:43:06 np0005535469 systemd[1]: Started libpod-conmon-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62.scope.
Nov 25 11:43:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c59fe121ea6f90e0e0f73a558d007bdcbf7807f621ad6454cfd7572f9bd582/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:06 np0005535469 podman[331007]: 2025-11-25 16:43:06.614281591 +0000 UTC m=+0.208115898 container init 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:43:06 np0005535469 podman[331007]: 2025-11-25 16:43:06.619993366 +0000 UTC m=+0.213827643 container start 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:43:06 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : New worker (331030) forked
Nov 25 11:43:06 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : Loading success.
Nov 25 11:43:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 117 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 11:43:07 np0005535469 nova_compute[254092]: 2025-11-25 16:43:07.909 254096 DEBUG nova.compute.manager [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-unplugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:07 np0005535469 nova_compute[254092]: 2025-11-25 16:43:07.909 254096 DEBUG oslo_concurrency.lockutils [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:07 np0005535469 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG oslo_concurrency.lockutils [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:07 np0005535469 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG oslo_concurrency.lockutils [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:07 np0005535469 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG nova.compute.manager [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] No waiting events found dispatching network-vif-unplugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:07 np0005535469 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG nova.compute.manager [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-unplugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.042 254096 DEBUG nova.network.neutron [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.059 254096 INFO nova.compute.manager [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 1.89 seconds to deallocate network for instance.#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.100 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.101 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.168 254096 DEBUG oslo_concurrency.processutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563667184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.608 254096 DEBUG oslo_concurrency.processutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.616 254096 DEBUG nova.compute.provider_tree [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.639 254096 DEBUG nova.scheduler.client.report [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.674 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.727 254096 INFO nova.scheduler.client.report [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Deleted allocations for instance ed4031e8-a918-4816-b9b8-b1134a086f8b#033[00m
Nov 25 11:43:08 np0005535469 nova_compute[254092]: 2025-11-25 16:43:08.793 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:09 np0005535469 nova_compute[254092]: 2025-11-25 16:43:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:09 np0005535469 nova_compute[254092]: 2025-11-25 16:43:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:09 np0005535469 nova_compute[254092]: 2025-11-25 16:43:09.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:43:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 117 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.016 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-deleted-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.016 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] No waiting events found dispatching network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 WARNING nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received unexpected event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Processing event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.020 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.020 254096 WARNING nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state building and task_state spawning.
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.021 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.026 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088990.0253155, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.026 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.028 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.032 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance spawned successfully.
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.032 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.045 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.052 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.055 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.055 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.056 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.056 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.057 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.057 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.079 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.113 254096 INFO nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 11.02 seconds to spawn the instance on the hypervisor.
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.113 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.175 254096 INFO nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 11.97 seconds to build instance.
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.187 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:10 np0005535469 nova_compute[254092]: 2025-11-25 16:43:10.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:10 np0005535469 podman[331063]: 2025-11-25 16:43:10.650659905 +0000 UTC m=+0.054178791 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 11:43:10 np0005535469 podman[331062]: 2025-11-25 16:43:10.65193893 +0000 UTC m=+0.059463604 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:43:10 np0005535469 podman[331064]: 2025-11-25 16:43:10.694947277 +0000 UTC m=+0.092788019 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:43:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978490605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.005 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.066 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.066 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.207 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.209 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4026MB free_disk=59.95417404174805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.209 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.209 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.275 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 01f96314-1fbe-4eee-a4ed-db7f448a5320 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.276 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.276 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.307 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Nov 25 11:43:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446431314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.712 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.718 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.746 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.774 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:43:11 np0005535469 nova_compute[254092]: 2025-11-25 16:43:11.774 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:12 np0005535469 nova_compute[254092]: 2025-11-25 16:43:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:43:12 np0005535469 nova_compute[254092]: 2025-11-25 16:43:12.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:43:12 np0005535469 NetworkManager[48891]: <info>  [1764088992.6452] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Nov 25 11:43:12 np0005535469 nova_compute[254092]: 2025-11-25 16:43:12.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:12 np0005535469 NetworkManager[48891]: <info>  [1764088992.6469] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 25 11:43:12 np0005535469 nova_compute[254092]: 2025-11-25 16:43:12.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:12 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:12Z|00690|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 11:43:12 np0005535469 nova_compute[254092]: 2025-11-25 16:43:12.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:12 np0005535469 nova_compute[254092]: 2025-11-25 16:43:12.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:13 np0005535469 nova_compute[254092]: 2025-11-25 16:43:13.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:13 np0005535469 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG nova.compute.manager [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:43:13 np0005535469 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG nova.compute.manager [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing instance network info cache due to event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:43:13 np0005535469 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG oslo_concurrency.lockutils [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:43:13 np0005535469 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG oslo_concurrency.lockutils [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:43:13 np0005535469 nova_compute[254092]: 2025-11-25 16:43:13.306 254096 DEBUG nova.network.neutron [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:43:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:13.621 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:13.621 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:13.622 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 103 op/s
Nov 25 11:43:14 np0005535469 nova_compute[254092]: 2025-11-25 16:43:14.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:14 np0005535469 nova_compute[254092]: 2025-11-25 16:43:14.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:14 np0005535469 nova_compute[254092]: 2025-11-25 16:43:14.505 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:43:14 np0005535469 nova_compute[254092]: 2025-11-25 16:43:14.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:43:14 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:14Z|00691|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 11:43:14 np0005535469 nova_compute[254092]: 2025-11-25 16:43:14.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.180 254096 DEBUG nova.network.neutron [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updated VIF entry in instance network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.181 254096 DEBUG nova.network.neutron [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.198 254096 DEBUG oslo_concurrency.lockutils [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.487 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.487 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.516 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.534 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.535 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.564 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.567 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.567 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.569 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.595 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.595 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.601 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.601 254096 INFO nova.compute.claims [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.642 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:43:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 16 KiB/s wr, 149 op/s
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.715 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.736 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:15 np0005535469 nova_compute[254092]: 2025-11-25 16:43:15.800 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406646157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.227 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.232 254096 DEBUG nova.compute.provider_tree [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.245 254096 DEBUG nova.scheduler.client.report [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.267 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.268 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.270 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.276 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.276 254096 INFO nova.compute.claims [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.365 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.366 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.403 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.424 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.465 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.538 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.558 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.560 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.560 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Creating image(s)#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.587 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.612 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.649 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.653 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.725 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.726 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.727 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.727 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.746 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.750 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ef028cf3-f8af-4112-9424-8a12fdda7690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.886 254096 DEBUG nova.policy [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a8a49326e3040eea57c8e1a61660f19', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:43:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685190118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.982 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:16 np0005535469 nova_compute[254092]: 2025-11-25 16:43:16.989 254096 DEBUG nova.compute.provider_tree [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.003 254096 DEBUG nova.scheduler.client.report [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.025 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.026 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.028 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.036 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.037 254096 INFO nova.compute.claims [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.068 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ef028cf3-f8af-4112-9424-8a12fdda7690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.130 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.130 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.139 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] resizing rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.174 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.201 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.252 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'migration_context' on Instance uuid ef028cf3-f8af-4112-9424-8a12fdda7690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.270 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.271 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Ensure instance console log exists: /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.271 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.272 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.272 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.297 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.299 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.299 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Creating image(s)
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.324 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.353 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.382 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.389 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.445 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.476 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.477 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.478 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.478 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.502 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.506 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f5a259d0-4460-4335-aa4a-f874f93a7e93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.560 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Successfully created port: 146f0586-22f7-43d7-9a96-06459ea85508 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.622 254096 DEBUG nova.policy [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a8a49326e3040eea57c8e1a61660f19', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:43:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 16 KiB/s wr, 122 op/s
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.817 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f5a259d0-4460-4335-aa4a-f874f93a7e93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.873 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] resizing rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:43:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280267706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.903 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.908 254096 DEBUG nova.compute.provider_tree [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.921 254096 DEBUG nova.scheduler.client.report [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.961 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.962 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.968 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'migration_context' on Instance uuid f5a259d0-4460-4335-aa4a-f874f93a7e93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.984 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.984 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Ensure instance console log exists: /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.985 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.985 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:17 np0005535469 nova_compute[254092]: 2025-11-25 16:43:17.985 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.007 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.007 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.023 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.038 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.124 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.126 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.126 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Creating image(s)
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.143 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.164 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.188 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.192 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.264 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.265 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.265 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.265 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.309 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.312 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.425 254096 DEBUG nova.policy [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a8a49326e3040eea57c8e1a61660f19', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.430 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Successfully created port: 2a3974bd-02ad-406e-9531-3844e5df4bfa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.536 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.615 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.681 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] resizing rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.766 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'migration_context' on Instance uuid 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.776 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.777 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Ensure instance console log exists: /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.777 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.777 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.778 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.791 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Successfully updated port: 146f0586-22f7-43d7-9a96-06459ea85508 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.806 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.807 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquired lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.807 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.930 254096 DEBUG nova.compute.manager [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-changed-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.931 254096 DEBUG nova.compute.manager [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Refreshing instance network info cache due to event network-changed-146f0586-22f7-43d7-9a96-06459ea85508. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.932 254096 DEBUG oslo_concurrency.lockutils [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:18 np0005535469 nova_compute[254092]: 2025-11-25 16:43:18.975 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:43:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 95 op/s
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.046 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Successfully created port: a297c9f1-753f-4f96-b8e4-38a42969484d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:43:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:20.219 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:20.220 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.231 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updating instance_info_cache with network_info: [{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.251 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Releasing lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.251 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance network_info: |[{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.252 254096 DEBUG oslo_concurrency.lockutils [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.252 254096 DEBUG nova.network.neutron [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Refreshing network info cache for port 146f0586-22f7-43d7-9a96-06459ea85508 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.255 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start _get_guest_xml network_info=[{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.259 254096 WARNING nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.265 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.265 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.272 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.272 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.273 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.273 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.273 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.274 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.274 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.274 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.276 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.276 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.278 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.353 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Successfully updated port: 2a3974bd-02ad-406e-9531-3844e5df4bfa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.375 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.376 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquired lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.376 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.525 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088985.524121, ed4031e8-a918-4816-b9b8-b1134a086f8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.526 254096 INFO nova.compute.manager [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.570 254096 DEBUG nova.compute.manager [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-changed-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.571 254096 DEBUG nova.compute.manager [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Refreshing instance network info cache due to event network-changed-2a3974bd-02ad-406e-9531-3844e5df4bfa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.571 254096 DEBUG oslo_concurrency.lockutils [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.574 254096 DEBUG nova.compute.manager [None req-b1129082-1b9a-4bd2-8e84-e9fdba443370 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652906314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.712 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.750 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.756 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:20 np0005535469 nova_compute[254092]: 2025-11-25 16:43:20.883 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:43:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647307016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.261 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.263 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-1',id=73,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tem
pest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:16Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=ef028cf3-f8af-4112-9424-8a12fdda7690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.264 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.264 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.265 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid ef028cf3-f8af-4112-9424-8a12fdda7690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.278 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <uuid>ef028cf3-f8af-4112-9424-8a12fdda7690</uuid>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <name>instance-00000049</name>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServersNegativeTestJSON-server-206851324-1</nova:name>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:43:20</nova:creationTime>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:user uuid="1a8a49326e3040eea57c8e1a61660f19">tempest-ListServersNegativeTestJSON-999655333-project-member</nova:user>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:project uuid="b96c962def8e44a98e659bf2a55a8dcc">tempest-ListServersNegativeTestJSON-999655333</nova:project>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <nova:port uuid="146f0586-22f7-43d7-9a96-06459ea85508">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <entry name="serial">ef028cf3-f8af-4112-9424-8a12fdda7690</entry>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <entry name="uuid">ef028cf3-f8af-4112-9424-8a12fdda7690</entry>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/ef028cf3-f8af-4112-9424-8a12fdda7690_disk">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:62:f4:12"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <target dev="tap146f0586-22"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/console.log" append="off"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:43:21 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:43:21 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:43:21 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:43:21 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.279 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Preparing to wait for external event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.279 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.280 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.280 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.281 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-1',id=73,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:16Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=ef028cf3-f8af-4112-9424-8a12fdda7690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.281 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.281 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.282 254096 DEBUG os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.283 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.283 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.286 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap146f0586-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.287 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap146f0586-22, col_values=(('external_ids', {'iface-id': '146f0586-22f7-43d7-9a96-06459ea85508', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:f4:12', 'vm-uuid': 'ef028cf3-f8af-4112-9424-8a12fdda7690'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:21 np0005535469 NetworkManager[48891]: <info>  [1764089001.2892] manager: (tap146f0586-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.294 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.295 254096 INFO os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22')#033[00m
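The two ovsdbapp transactions that completed above (`AddPortCommand` plus the `DbSetCommand` on the Interface's `external_ids`) correspond roughly to a single `ovs-vsctl` invocation. A sketch that only builds the argv, using the identifiers from this log; actually running it would require `ovs-vsctl` and a live OVS instance:

```python
# Build the ovs-vsctl argv roughly equivalent to the AddPortCommand +
# DbSetCommand transactions logged above. Values are taken from the log.
def ovs_plug_cmd(bridge, dev, iface_id, mac, vm_uuid):
    return [
        "ovs-vsctl", "--may-exist", "add-port", bridge, dev,  # may_exist=True
        "--", "set", "Interface", dev,
        f"external_ids:iface-id={iface_id}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={mac}",
        f"external_ids:vm-uuid={vm_uuid}",
    ]

cmd = ovs_plug_cmd("br-int", "tap146f0586-22",
                   "146f0586-22f7-43d7-9a96-06459ea85508",
                   "fa:16:3e:62:f4:12",
                   "ef028cf3-f8af-4112-9424-8a12fdda7690")
print(" ".join(cmd))
```

The `iface-id` external_id is what OVN's controller matches against the Neutron port UUID to bind the logical port; the NetworkManager line above it is just NM noticing the new OVS port appear.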
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.353 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.353 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.353 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No VIF found with MAC fa:16:3e:62:f4:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.354 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Using config drive#033[00m
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.376 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 227 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 177 op/s
Nov 25 11:43:21 np0005535469 nova_compute[254092]: 2025-11-25 16:43:21.872 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Successfully updated port: a297c9f1-753f-4f96-b8e4-38a42969484d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.036 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.036 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquired lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.036 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.079 254096 DEBUG nova.network.neutron [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updated VIF entry in instance network info cache for port 146f0586-22f7-43d7-9a96-06459ea85508. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.080 254096 DEBUG nova.network.neutron [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updating instance_info_cache with network_info: [{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.093 254096 DEBUG oslo_concurrency.lockutils [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.326 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Creating config drive at /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.331 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98qiu354 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.365 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.447 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updating instance_info_cache with network_info: [{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.470 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98qiu354" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.796 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.799 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.851 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Releasing lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.852 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance network_info: |[{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.853 254096 DEBUG nova.compute.manager [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-changed-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG nova.compute.manager [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Refreshing instance network info cache due to event network-changed-a297c9f1-753f-4f96-b8e4-38a42969484d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG oslo_concurrency.lockutils [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG oslo_concurrency.lockutils [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG nova.network.neutron [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Refreshing network info cache for port 2a3974bd-02ad-406e-9531-3844e5df4bfa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.857 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start _get_guest_xml network_info=[{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.861 254096 WARNING nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.865 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.865 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.867 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.867 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.868 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.868 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.868 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.870 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.870 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:43:22 np0005535469 nova_compute[254092]: 2025-11-25 16:43:22.873 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960710798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.372 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.404 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.410 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.600 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.601 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deleting local config drive /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config because it was imported into RBD.#033[00m
Nov 25 11:43:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 227 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 141 op/s
Nov 25 11:43:23 np0005535469 NetworkManager[48891]: <info>  [1764089003.6589] manager: (tap146f0586-22): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Nov 25 11:43:23 np0005535469 kernel: tap146f0586-22: entered promiscuous mode
Nov 25 11:43:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:23Z|00692|binding|INFO|Claiming lport 146f0586-22f7-43d7-9a96-06459ea85508 for this chassis.
Nov 25 11:43:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:23Z|00693|binding|INFO|146f0586-22f7-43d7-9a96-06459ea85508: Claiming fa:16:3e:62:f4:12 10.100.0.3
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.672 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f4:12 10.100.0.3'], port_security=['fa:16:3e:62:f4:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef028cf3-f8af-4112-9424-8a12fdda7690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=146f0586-22f7-43d7-9a96-06459ea85508) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.674 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 146f0586-22f7-43d7-9a96-06459ea85508 in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf bound to our chassis#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.675 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf#033[00m
Nov 25 11:43:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:23Z|00694|binding|INFO|Setting lport 146f0586-22f7-43d7-9a96-06459ea85508 ovn-installed in OVS
Nov 25 11:43:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:23Z|00695|binding|INFO|Setting lport 146f0586-22f7-43d7-9a96-06459ea85508 up in Southbound
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.689 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73583b6c-3c2c-45f0-88e3-0b09617611f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.690 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap126cf01f-b1 in ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.692 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap126cf01f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.693 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efffe1de-9d6d-4cb5-9239-5c187a7c52aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.694 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6951a5cf-7641-4c50-81fc-19577db58080]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 systemd-udevd[331928]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:43:23 np0005535469 systemd-machined[216343]: New machine qemu-86-instance-00000049.
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.709 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1c568d-e24f-4184-a8c2-e5997c3b37e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 NetworkManager[48891]: <info>  [1764089003.7140] device (tap146f0586-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:43:23 np0005535469 NetworkManager[48891]: <info>  [1764089003.7151] device (tap146f0586-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:43:23 np0005535469 systemd[1]: Started Virtual Machine qemu-86-instance-00000049.
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63ca00-5fbf-4267-bce9-09ac15a5f266]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.768 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d79cf7e4-ba66-43ae-81f2-a852d16c9a91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 NetworkManager[48891]: <info>  [1764089003.7769] manager: (tap126cf01f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/302)
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a96471e2-a128-4c82-8dfc-863afa4a7d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 systemd-udevd[331932]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.810 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2becb42e-257d-4a0c-9643-065923b7ecc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.814 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[422e9328-7fec-41ba-bf27-d32cf2793568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 NetworkManager[48891]: <info>  [1764089003.8439] device (tap126cf01f-b0): carrier: link connected
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.850 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffa82a2-3e7a-4b02-8252-fb105a6da427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.868 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfdd176-989d-4e3a-bc78-b8f4b40e634e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331963, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.886 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb35b7b-bdd4-47d9-bd2a-7530d5bf1945]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546143, 'tstamp': 546143}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331964, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971467510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.909 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3105e9-c068-4f39-a38e-0fd948f194e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331965, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.938 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.940 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-2',id=74,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tem
pest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:17Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=f5a259d0-4460-4335-aa4a-f874f93a7e93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.940 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.941 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.943 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid f5a259d0-4460-4335-aa4a-f874f93a7e93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.949 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb10ac2-4283-44c4-90f1-7bf553299ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.961 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <uuid>f5a259d0-4460-4335-aa4a-f874f93a7e93</uuid>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <name>instance-0000004a</name>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServersNegativeTestJSON-server-206851324-2</nova:name>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:43:22</nova:creationTime>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:user uuid="1a8a49326e3040eea57c8e1a61660f19">tempest-ListServersNegativeTestJSON-999655333-project-member</nova:user>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:project uuid="b96c962def8e44a98e659bf2a55a8dcc">tempest-ListServersNegativeTestJSON-999655333</nova:project>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <nova:port uuid="2a3974bd-02ad-406e-9531-3844e5df4bfa">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <entry name="serial">f5a259d0-4460-4335-aa4a-f874f93a7e93</entry>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <entry name="uuid">f5a259d0-4460-4335-aa4a-f874f93a7e93</entry>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f5a259d0-4460-4335-aa4a-f874f93a7e93_disk">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:02:21:9c"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <target dev="tap2a3974bd-02"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/console.log" append="off"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:43:23 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:43:23 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:43:23 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:43:23 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.967 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Preparing to wait for external event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.967 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.967 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.968 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.969 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-2',id=74,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user
_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:17Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=f5a259d0-4460-4335-aa4a-f874f93a7e93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.969 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.970 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.971 254096 DEBUG os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.972 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.972 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.978 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a3974bd-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.978 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a3974bd-02, col_values=(('external_ids', {'iface-id': '2a3974bd-02ad-406e-9531-3844e5df4bfa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:21:9c', 'vm-uuid': 'f5a259d0-4460-4335-aa4a-f874f93a7e93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 NetworkManager[48891]: <info>  [1764089003.9811] manager: (tap2a3974bd-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:23 np0005535469 nova_compute[254092]: 2025-11-25 16:43:23.986 254096 INFO os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02')#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.026 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d11e085-a483-440d-9b9f-fbd492e952c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.028 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.028 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.028 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:24 np0005535469 NetworkManager[48891]: <info>  [1764089004.0318] manager: (tap126cf01f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Nov 25 11:43:24 np0005535469 kernel: tap126cf01f-b0: entered promiscuous mode
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.038 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.039 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:24 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:24Z|00696|binding|INFO|Releasing lport 41886c6c-e968-4c0b-b7f6-75887ad8a7ea from this chassis (sb_readonly=0)
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.062 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.062 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.063 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No VIF found with MAC fa:16:3e:02:21:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.063 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Using config drive#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.066 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/126cf01f-b6da-4bbc-847b-2d16936986cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/126cf01f-b6da-4bbc-847b-2d16936986cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2a817f-91ab-4f88-9b66-ede888906737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.069 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/126cf01f-b6da-4bbc-847b-2d16936986cf.pid.haproxy
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.071 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'env', 'PROCESS_TAG=haproxy-126cf01f-b6da-4bbc-847b-2d16936986cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/126cf01f-b6da-4bbc-847b-2d16936986cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.094 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.158 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updating instance_info_cache with network_info: [{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.177 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Releasing lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.178 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance network_info: |[{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.179 254096 DEBUG oslo_concurrency.lockutils [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.179 254096 DEBUG nova.network.neutron [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Refreshing network info cache for port a297c9f1-753f-4f96-b8e4-38a42969484d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.184 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start _get_guest_xml network_info=[{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.190 254096 WARNING nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.197 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.198 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.201 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.201 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.202 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.203 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.203 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.203 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.204 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.206 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.206 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.206 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.209 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.225 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:24 np0005535469 podman[332038]: 2025-11-25 16:43:24.435098431 +0000 UTC m=+0.025093442 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.561 254096 DEBUG nova.compute.manager [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.563 254096 DEBUG oslo_concurrency.lockutils [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.564 254096 DEBUG oslo_concurrency.lockutils [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.564 254096 DEBUG oslo_concurrency.lockutils [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.565 254096 DEBUG nova.compute.manager [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Processing event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.619 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Creating config drive at /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.629 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpdmsrhy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444421611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.689 254096 DEBUG nova.network.neutron [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updated VIF entry in instance network info cache for port 2a3974bd-02ad-406e-9531-3844e5df4bfa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.690 254096 DEBUG nova.network.neutron [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updating instance_info_cache with network_info: [{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.695 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:24 np0005535469 podman[332038]: 2025-11-25 16:43:24.726011493 +0000 UTC m=+0.316006494 container create b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.729 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.738 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.771 254096 DEBUG oslo_concurrency.lockutils [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:24 np0005535469 systemd[1]: Started libpod-conmon-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da.scope.
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.797 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpdmsrhy" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.830 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965eee395a059bda45c6a5b38023a9548ac21b0145fb7c81bf8446d934d09752/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.837 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.868 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089004.8052185, ef028cf3-f8af-4112-9424-8a12fdda7690 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.869 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Started (Lifecycle Event)#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.872 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:43:24 np0005535469 podman[332038]: 2025-11-25 16:43:24.877351919 +0000 UTC m=+0.467346950 container init b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.883 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:43:24 np0005535469 podman[332038]: 2025-11-25 16:43:24.886632171 +0000 UTC m=+0.476627172 container start b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.891 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.907 254096 INFO nova.virt.libvirt.driver [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance spawned successfully.#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.908 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:43:24 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : New worker (332180) forked
Nov 25 11:43:24 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : Loading success.
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.925 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.930 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.930 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.931 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.931 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.931 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.932 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.961 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.962 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089004.8053908, ef028cf3-f8af-4112-9424-8a12fdda7690 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.962 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.991 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.994 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089004.8753698, ef028cf3-f8af-4112-9424-8a12fdda7690 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:24 np0005535469 nova_compute[254092]: 2025-11-25 16:43:24.994 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.004 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 8.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.004 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.016 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.020 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.040 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.041 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deleting local config drive /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config because it was imported into RBD.#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.062 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.099 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 9.53 seconds to build instance.#033[00m
Nov 25 11:43:25 np0005535469 kernel: tap2a3974bd-02: entered promiscuous mode
Nov 25 11:43:25 np0005535469 NetworkManager[48891]: <info>  [1764089005.1089] manager: (tap2a3974bd-02): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Nov 25 11:43:25 np0005535469 systemd-udevd[331953]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:25Z|00697|binding|INFO|Claiming lport 2a3974bd-02ad-406e-9531-3844e5df4bfa for this chassis.
Nov 25 11:43:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:25Z|00698|binding|INFO|2a3974bd-02ad-406e-9531-3844e5df4bfa: Claiming fa:16:3e:02:21:9c 10.100.0.9
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.120 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:21:9c 10.100.0.9'], port_security=['fa:16:3e:02:21:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a259d0-4460-4335-aa4a-f874f93a7e93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a3974bd-02ad-406e-9531-3844e5df4bfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.123 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a3974bd-02ad-406e-9531-3844e5df4bfa in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf bound to our chassis#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.126 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf#033[00m
Nov 25 11:43:25 np0005535469 NetworkManager[48891]: <info>  [1764089005.1299] device (tap2a3974bd-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:43:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:25Z|00699|binding|INFO|Setting lport 2a3974bd-02ad-406e-9531-3844e5df4bfa ovn-installed in OVS
Nov 25 11:43:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:25Z|00700|binding|INFO|Setting lport 2a3974bd-02ad-406e-9531-3844e5df4bfa up in Southbound
Nov 25 11:43:25 np0005535469 NetworkManager[48891]: <info>  [1764089005.1312] device (tap2a3974bd-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.135 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.154 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1f3124-74ce-41d8-9407-14cd9413077a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:25 np0005535469 systemd-machined[216343]: New machine qemu-87-instance-0000004a.
Nov 25 11:43:25 np0005535469 systemd[1]: Started Virtual Machine qemu-87-instance-0000004a.
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.199 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abf57aed-7cf9-45d7-bd01-a3bb8bbcc85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.204 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9994441e-71a0-4f8e-9498-87862607895e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719824996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.254 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f48f4ac1-f63e-49b6-8bc1-381565a2adbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.266 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.268 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-3',id=75,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:18Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=69b3cbbb-9713-4f49-9e67-1f33a3ae2642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.269 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.269 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.271 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.283 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <uuid>69b3cbbb-9713-4f49-9e67-1f33a3ae2642</uuid>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <name>instance-0000004b</name>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:name>tempest-ListServersNegativeTestJSON-server-206851324-3</nova:name>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:43:24</nova:creationTime>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:user uuid="1a8a49326e3040eea57c8e1a61660f19">tempest-ListServersNegativeTestJSON-999655333-project-member</nova:user>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:project uuid="b96c962def8e44a98e659bf2a55a8dcc">tempest-ListServersNegativeTestJSON-999655333</nova:project>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <nova:port uuid="a297c9f1-753f-4f96-b8e4-38a42969484d">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <entry name="serial">69b3cbbb-9713-4f49-9e67-1f33a3ae2642</entry>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <entry name="uuid">69b3cbbb-9713-4f49-9e67-1f33a3ae2642</entry>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:1e:28:6d"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <target dev="tapa297c9f1-75"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/console.log" append="off"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:43:25 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:43:25 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:43:25 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:43:25 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.284 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Preparing to wait for external event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.284 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.285 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.285 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.286 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-3',id=75,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user
_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:18Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=69b3cbbb-9713-4f49-9e67-1f33a3ae2642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.287 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.288 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.288 254096 DEBUG os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.289 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.289 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.291 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa297c9f1-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.292 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa297c9f1-75, col_values=(('external_ids', {'iface-id': 'a297c9f1-753f-4f96-b8e4-38a42969484d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:28:6d', 'vm-uuid': '69b3cbbb-9713-4f49-9e67-1f33a3ae2642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:25Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 11:43:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:25Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 11:43:25 np0005535469 NetworkManager[48891]: <info>  [1764089005.2941] manager: (tapa297c9f1-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.295 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2df270-e5a5-460c-8cc7-256cd7d39935]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332219, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.295 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.298 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.299 254096 INFO os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75')#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.320 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e54bbc82-d69a-4b7d-bcae-a43beebcf57b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332221, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332221, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.322 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.329 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.329 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No VIF found with MAC fa:16:3e:1e:28:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Using config drive#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.367 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.627 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089005.6266854, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.627 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Started (Lifecycle Event)#033[00m
Nov 25 11:43:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 241 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 179 op/s
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.651 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.655 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089005.629937, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.655 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.674 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:25 np0005535469 nova_compute[254092]: 2025-11-25 16:43:25.687 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.207 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Creating config drive at /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.217 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_vktite execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.368 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_vktite" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.408 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.415 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.465 254096 DEBUG nova.compute.manager [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.466 254096 DEBUG oslo_concurrency.lockutils [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.467 254096 DEBUG oslo_concurrency.lockutils [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.467 254096 DEBUG oslo_concurrency.lockutils [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.467 254096 DEBUG nova.compute.manager [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Processing event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.468 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.471 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089006.4703562, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.471 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.474 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.477 254096 INFO nova.virt.libvirt.driver [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance spawned successfully.#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.477 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.505 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.513 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.517 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.518 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.518 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.519 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.519 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.519 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.543 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.572 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.572 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deleting local config drive /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config because it was imported into RBD.#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.582 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 9.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.583 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:26 np0005535469 NetworkManager[48891]: <info>  [1764089006.6177] manager: (tapa297c9f1-75): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Nov 25 11:43:26 np0005535469 kernel: tapa297c9f1-75: entered promiscuous mode
Nov 25 11:43:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:26Z|00701|binding|INFO|Claiming lport a297c9f1-753f-4f96-b8e4-38a42969484d for this chassis.
Nov 25 11:43:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:26Z|00702|binding|INFO|a297c9f1-753f-4f96-b8e4-38a42969484d: Claiming fa:16:3e:1e:28:6d 10.100.0.8
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.626 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:28:6d 10.100.0.8'], port_security=['fa:16:3e:1e:28:6d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69b3cbbb-9713-4f49-9e67-1f33a3ae2642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a297c9f1-753f-4f96-b8e4-38a42969484d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.627 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a297c9f1-753f-4f96-b8e4-38a42969484d in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf bound to our chassis#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.629 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf#033[00m
Nov 25 11:43:26 np0005535469 NetworkManager[48891]: <info>  [1764089006.6326] device (tapa297c9f1-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:43:26 np0005535469 NetworkManager[48891]: <info>  [1764089006.6335] device (tapa297c9f1-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:43:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:26Z|00703|binding|INFO|Setting lport a297c9f1-753f-4f96-b8e4-38a42969484d ovn-installed in OVS
Nov 25 11:43:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:26Z|00704|binding|INFO|Setting lport a297c9f1-753f-4f96-b8e4-38a42969484d up in Southbound
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.649 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 10.99 seconds to build instance.#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.652 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[121d79d3-5c8f-4735-8faa-40fa47731404]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:26 np0005535469 systemd-machined[216343]: New machine qemu-88-instance-0000004b.
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.668 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.670 254096 DEBUG nova.compute.manager [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG oslo_concurrency.lockutils [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG oslo_concurrency.lockutils [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG oslo_concurrency.lockutils [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG nova.compute.manager [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] No waiting events found dispatching network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.672 254096 WARNING nova.compute.manager [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received unexpected event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:43:26 np0005535469 systemd[1]: Started Virtual Machine qemu-88-instance-0000004b.
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.678 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bf56c87d-58cb-4827-813e-3b28e4e6060c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.682 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[00478d77-50f7-46ba-bd50-4ad681ad08ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.709 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1db08130-585f-4996-85e1-f05f0095dd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba44a919-af9e-4fea-bdef-671c0f4b6940]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332346, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.743 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[958b6abd-0145-4af8-92e9-da8171425f63]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332351, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332351, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.744 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:26 np0005535469 nova_compute[254092]: 2025-11-25 16:43:26.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.049 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089007.0487735, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.049 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Started (Lifecycle Event)#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.073 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.077 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089007.0537202, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.077 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.092 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.095 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.112 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.117 254096 DEBUG nova.network.neutron [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updated VIF entry in instance network info cache for port a297c9f1-753f-4f96-b8e4-38a42969484d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.118 254096 DEBUG nova.network.neutron [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updating instance_info_cache with network_info: [{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.129 254096 DEBUG oslo_concurrency.lockutils [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:27 np0005535469 nova_compute[254092]: 2025-11-25 16:43:27.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 248 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 983 KiB/s rd, 7.4 MiB/s wr, 163 op/s
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.571 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] No waiting events found dispatching network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 WARNING nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received unexpected event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa for instance with vm_state active and task_state None.#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Processing event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] No waiting events found dispatching network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.575 254096 WARNING nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received unexpected event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.576 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.639 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089008.6331325, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.640 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.656 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.657 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.660 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.663 254096 INFO nova.virt.libvirt.driver [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance spawned successfully.#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.663 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.681 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.690 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.690 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.691 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.691 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.691 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.692 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.762 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 10.64 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.762 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.820 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 13.10 seconds to build instance.#033[00m
Nov 25 11:43:28 np0005535469 nova_compute[254092]: 2025-11-25 16:43:28.837 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 248 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 7.4 MiB/s wr, 149 op/s
Nov 25 11:43:30 np0005535469 nova_compute[254092]: 2025-11-25 16:43:30.294 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 260 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 7.5 MiB/s wr, 362 op/s
Nov 25 11:43:33 np0005535469 nova_compute[254092]: 2025-11-25 16:43:33.079 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:33 np0005535469 nova_compute[254092]: 2025-11-25 16:43:33.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 260 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 281 op/s
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.266 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.267 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.267 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.267 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.268 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.269 254096 INFO nova.compute.manager [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Terminating instance#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.270 254096 DEBUG nova.compute.manager [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:43:34 np0005535469 kernel: tap146f0586-22 (unregistering): left promiscuous mode
Nov 25 11:43:34 np0005535469 NetworkManager[48891]: <info>  [1764089014.3205] device (tap146f0586-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:43:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:34Z|00705|binding|INFO|Releasing lport 146f0586-22f7-43d7-9a96-06459ea85508 from this chassis (sb_readonly=0)
Nov 25 11:43:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:34Z|00706|binding|INFO|Setting lport 146f0586-22f7-43d7-9a96-06459ea85508 down in Southbound
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:34Z|00707|binding|INFO|Removing iface tap146f0586-22 ovn-installed in OVS
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.338 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f4:12 10.100.0.3'], port_security=['fa:16:3e:62:f4:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef028cf3-f8af-4112-9424-8a12fdda7690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=146f0586-22f7-43d7-9a96-06459ea85508) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.341 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 146f0586-22f7-43d7-9a96-06459ea85508 in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf unbound from our chassis#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.343 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.377 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d0375b-1895-46ba-aa15-2927ffa11d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:34 np0005535469 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 25 11:43:34 np0005535469 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000049.scope: Consumed 10.337s CPU time.
Nov 25 11:43:34 np0005535469 systemd-machined[216343]: Machine qemu-86-instance-00000049 terminated.
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.409 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[071f5a97-e864-4b16-978c-53d42ab2246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.412 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fd888d-dde8-49b1-8a5f-7b1c262eff6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.442 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[37e1d7a2-b31d-461b-b658-151f8ed7d1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.465 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e7565498-6d28-470f-af23-68b38298dbfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332407, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.489 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[608d67ea-4a3a-4300-9d21-52386a115ed8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332408, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332408, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.506 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.507 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.507 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.508 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.510 254096 INFO nova.virt.libvirt.driver [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance destroyed successfully.#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.511 254096 DEBUG nova.objects.instance [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'resources' on Instance uuid ef028cf3-f8af-4112-9424-8a12fdda7690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.525 254096 DEBUG nova.virt.libvirt.vif [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-1',id=73,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:25Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=ef028cf3-f8af-4112-9424-8a12fdda7690,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.525 254096 DEBUG nova.network.os_vif_util [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.526 254096 DEBUG nova.network.os_vif_util [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.527 254096 DEBUG os_vif [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap146f0586-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.534 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.537 254096 INFO os_vif [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22')#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.892 254096 INFO nova.virt.libvirt.driver [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deleting instance files /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690_del#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.893 254096 INFO nova.virt.libvirt.driver [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deletion of /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690_del complete#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.949 254096 INFO nova.compute.manager [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.951 254096 DEBUG oslo.service.loopingcall [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.951 254096 DEBUG nova.compute.manager [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:43:34 np0005535469 nova_compute[254092]: 2025-11-25 16:43:34.951 254096 DEBUG nova.network.neutron [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:43:35 np0005535469 nova_compute[254092]: 2025-11-25 16:43:35.650 254096 DEBUG nova.network.neutron [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 229 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 303 op/s
Nov 25 11:43:35 np0005535469 nova_compute[254092]: 2025-11-25 16:43:35.674 254096 INFO nova.compute.manager [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 0.72 seconds to deallocate network for instance.#033[00m
Nov 25 11:43:35 np0005535469 nova_compute[254092]: 2025-11-25 16:43:35.723 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:35 np0005535469 nova_compute[254092]: 2025-11-25 16:43:35.723 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:35 np0005535469 nova_compute[254092]: 2025-11-25 16:43:35.738 254096 DEBUG nova.compute.manager [req-c2014b31-b142-49e2-acc6-3e8ebafbe71f req-ba8b0a06-4b57-4222-b89c-8bc886fdae86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-vif-deleted-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:35 np0005535469 nova_compute[254092]: 2025-11-25 16:43:35.836 254096 DEBUG oslo_concurrency.processutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:35 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 25 11:43:35 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:35.918096) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:43:35 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 25 11:43:35 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089015918197, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 979, "num_deletes": 250, "total_data_size": 1281279, "memory_usage": 1301552, "flush_reason": "Manual Compaction"}
Nov 25 11:43:35 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016025083, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 794860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35405, "largest_seqno": 36383, "table_properties": {"data_size": 791049, "index_size": 1463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10622, "raw_average_key_size": 20, "raw_value_size": 782686, "raw_average_value_size": 1537, "num_data_blocks": 66, "num_entries": 509, "num_filter_entries": 509, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088931, "oldest_key_time": 1764088931, "file_creation_time": 1764089015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 106976 microseconds, and 3160 cpu microseconds.
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.025124) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 794860 bytes OK
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.025144) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.034470) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.034514) EVENT_LOG_v1 {"time_micros": 1764089016034501, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.034544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1276576, prev total WAL file size 1303112, number of live WAL files 2.
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.035591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323537' seq:72057594037927935, type:22 .. '6D6772737461740031353038' seq:0, type:0; will stop at (end)
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(776KB)], [77(9859KB)]
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016035700, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 10890833, "oldest_snapshot_seqno": -1}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6087 keys, 8057783 bytes, temperature: kUnknown
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016138601, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8057783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8018197, "index_size": 23284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154306, "raw_average_key_size": 25, "raw_value_size": 7910021, "raw_average_value_size": 1299, "num_data_blocks": 944, "num_entries": 6087, "num_filter_entries": 6087, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089016, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.138879) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8057783 bytes
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.151590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.7 rd, 78.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(23.8) write-amplify(10.1) OK, records in: 6563, records dropped: 476 output_compression: NoCompression
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.151628) EVENT_LOG_v1 {"time_micros": 1764089016151616, "job": 44, "event": "compaction_finished", "compaction_time_micros": 103008, "compaction_time_cpu_micros": 24637, "output_level": 6, "num_output_files": 1, "total_output_size": 8057783, "num_input_records": 6563, "num_output_records": 6087, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016152010, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016153563, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.035434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190893710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.378 254096 DEBUG oslo_concurrency.processutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.384 254096 DEBUG nova.compute.provider_tree [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.404 254096 DEBUG nova.scheduler.client.report [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.435 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.477 254096 INFO nova.scheduler.client.report [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Deleted allocations for instance ef028cf3-f8af-4112-9424-8a12fdda7690#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.570 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.579 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.579 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.613 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.685 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.686 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.691 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.691 254096 INFO nova.compute.claims [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:43:36 np0005535469 nova_compute[254092]: 2025-11-25 16:43:36.869 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458475742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.441 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.447 254096 DEBUG nova.compute.provider_tree [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.461 254096 DEBUG nova.scheduler.client.report [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.490 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.491 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.537 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.537 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.564 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.585 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:43:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 214 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 914 KiB/s wr, 269 op/s
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.701 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.703 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.703 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Creating image(s)#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.727 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.751 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.779 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.783 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.854 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.855 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.856 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.856 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.880 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.884 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 44f3c94a-060c-4650-bfe7-a214c6a10207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:37 np0005535469 nova_compute[254092]: 2025-11-25 16:43:37.921 254096 DEBUG nova.policy [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e0e4ce0eeda4a79ab738e1f8dc0f725', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e0a920505c8240228ed836913ffcdbe4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.640 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 44f3c94a-060c-4650-bfe7-a214c6a10207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.711 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] resizing rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.857 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Successfully created port: b5793671-5020-4692-92f1-65a87bcdf38e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.864 254096 DEBUG nova.objects.instance [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f3c94a-060c-4650-bfe7-a214c6a10207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.876 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.877 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Ensure instance console log exists: /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.877 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.878 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:38 np0005535469 nova_compute[254092]: 2025-11-25 16:43:38.878 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:39 np0005535469 nova_compute[254092]: 2025-11-25 16:43:39.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 214 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 122 KiB/s wr, 239 op/s
Nov 25 11:43:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:39Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:21:9c 10.100.0.9
Nov 25 11:43:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:39Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:21:9c 10.100.0.9
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:43:40
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:43:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:43:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 23K writes, 96K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 7556 syncs, 3.09 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8508 writes, 34K keys, 8508 commit groups, 1.0 writes per commit group, ingest: 33.82 MB, 0.06 MB/s#012Interval WAL: 8507 writes, 3171 syncs, 2.68 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:43:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:41 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.019 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.020 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.020 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.021 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.021 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.023 254096 INFO nova.compute.manager [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Terminating instance#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.025 254096 DEBUG nova.compute.manager [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:43:41 np0005535469 kernel: tap2a3974bd-02 (unregistering): left promiscuous mode
Nov 25 11:43:41 np0005535469 NetworkManager[48891]: <info>  [1764089021.1987] device (tap2a3974bd-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:41Z|00708|binding|INFO|Releasing lport 2a3974bd-02ad-406e-9531-3844e5df4bfa from this chassis (sb_readonly=0)
Nov 25 11:43:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:41Z|00709|binding|INFO|Setting lport 2a3974bd-02ad-406e-9531-3844e5df4bfa down in Southbound
Nov 25 11:43:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:41Z|00710|binding|INFO|Removing iface tap2a3974bd-02 ovn-installed in OVS
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.211 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Nov 25 11:43:41 np0005535469 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004a.scope: Consumed 13.729s CPU time.
Nov 25 11:43:41 np0005535469 systemd-machined[216343]: Machine qemu-87-instance-0000004a terminated.
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.237 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:21:9c 10.100.0.9'], port_security=['fa:16:3e:02:21:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a259d0-4460-4335-aa4a-f874f93a7e93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a3974bd-02ad-406e-9531-3844e5df4bfa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.239 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a3974bd-02ad-406e-9531-3844e5df4bfa in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf unbound from our chassis#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.240 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.249 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.249 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.250 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.250 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.250 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.252 254096 INFO nova.compute.manager [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Terminating instance#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.253 254096 DEBUG nova.compute.manager [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.255 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[485ccb36-c684-45c2-9cd7-5f10eb928e28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.286 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[86b25006-5b8e-4a1f-9415-0579ef7f69fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.288 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da8595e1-949d-486a-9fe0-9137ba09eb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.317 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e306e1e2-6cbe-42c6-8b41-a633757c98d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 podman[332652]: 2025-11-25 16:43:41.322134297 +0000 UTC m=+0.086922749 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 11:43:41 np0005535469 podman[332653]: 2025-11-25 16:43:41.333166096 +0000 UTC m=+0.097867036 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da4001dd-e9e4-4143-a5db-256c0c8cd53b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332718, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.351 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a01d4d36-0e7b-4184-805a-59693fc61394]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332722, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332722, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 podman[332654]: 2025-11-25 16:43:41.3528426 +0000 UTC m=+0.116990215 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.353 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.359 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.360 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.360 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.468 254096 INFO nova.virt.libvirt.driver [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance destroyed successfully.#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.468 254096 DEBUG nova.objects.instance [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'resources' on Instance uuid f5a259d0-4460-4335-aa4a-f874f93a7e93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.483 254096 DEBUG nova.virt.libvirt.vif [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-2',id=74,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T16:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:26Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=f5a259d0-4460-4335-aa4a-f874f93a7e93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.483 254096 DEBUG nova.network.os_vif_util [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.484 254096 DEBUG nova.network.os_vif_util [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.484 254096 DEBUG os_vif [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.487 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a3974bd-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.492 254096 INFO os_vif [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02')#033[00m
Nov 25 11:43:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 288 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.0 MiB/s wr, 315 op/s
Nov 25 11:43:41 np0005535469 kernel: tapa297c9f1-75 (unregistering): left promiscuous mode
Nov 25 11:43:41 np0005535469 NetworkManager[48891]: <info>  [1764089021.8008] device (tapa297c9f1-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:41Z|00711|binding|INFO|Releasing lport a297c9f1-753f-4f96-b8e4-38a42969484d from this chassis (sb_readonly=0)
Nov 25 11:43:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:41Z|00712|binding|INFO|Setting lport a297c9f1-753f-4f96-b8e4-38a42969484d down in Southbound
Nov 25 11:43:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:41Z|00713|binding|INFO|Removing iface tapa297c9f1-75 ovn-installed in OVS
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.826 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:28:6d 10.100.0.8'], port_security=['fa:16:3e:1e:28:6d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69b3cbbb-9713-4f49-9e67-1f33a3ae2642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a297c9f1-753f-4f96-b8e4-38a42969484d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a297c9f1-753f-4f96-b8e4-38a42969484d in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf unbound from our chassis#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.828 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 126cf01f-b6da-4bbc-847b-2d16936986cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86e99a9e-a964-40c9-b562-ef888da73b83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.830 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf namespace which is not needed anymore#033[00m
Nov 25 11:43:41 np0005535469 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 25 11:43:41 np0005535469 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004b.scope: Consumed 11.860s CPU time.
Nov 25 11:43:41 np0005535469 systemd-machined[216343]: Machine qemu-88-instance-0000004b terminated.
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.919 254096 DEBUG nova.compute.manager [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-unplugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.920 254096 DEBUG oslo_concurrency.lockutils [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.920 254096 DEBUG oslo_concurrency.lockutils [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.920 254096 DEBUG oslo_concurrency.lockutils [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.921 254096 DEBUG nova.compute.manager [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] No waiting events found dispatching network-vif-unplugged-2a3974bd-02ad-406e-9531-3844e5df4bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:41 np0005535469 nova_compute[254092]: 2025-11-25 16:43:41.921 254096 DEBUG nova.compute.manager [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-unplugged-2a3974bd-02ad-406e-9531-3844e5df4bfa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:43:41 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : haproxy version is 2.8.14-c23fe91
Nov 25 11:43:41 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : path to executable is /usr/sbin/haproxy
Nov 25 11:43:41 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [WARNING]  (332155) : Exiting Master process...
Nov 25 11:43:41 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [WARNING]  (332155) : Exiting Master process...
Nov 25 11:43:41 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [ALERT]    (332155) : Current worker (332180) exited with code 143 (Terminated)
Nov 25 11:43:41 np0005535469 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [WARNING]  (332155) : All workers exited. Exiting... (0)
Nov 25 11:43:41 np0005535469 systemd[1]: libpod-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da.scope: Deactivated successfully.
Nov 25 11:43:41 np0005535469 podman[332774]: 2025-11-25 16:43:41.993903962 +0000 UTC m=+0.070427652 container died b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.092 254096 INFO nova.virt.libvirt.driver [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance destroyed successfully.#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.092 254096 DEBUG nova.objects.instance [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'resources' on Instance uuid 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.103 254096 DEBUG nova.virt.libvirt.vif [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-3',id=75,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-25T16:43:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:28Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=69b3cbbb-9713-4f49-9e67-1f33a3ae2642,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.104 254096 DEBUG nova.network.os_vif_util [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.105 254096 DEBUG nova.network.os_vif_util [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.105 254096 DEBUG os_vif [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.107 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa297c9f1-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.111 254096 INFO os_vif [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75')#033[00m
Nov 25 11:43:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da-userdata-shm.mount: Deactivated successfully.
Nov 25 11:43:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-965eee395a059bda45c6a5b38023a9548ac21b0145fb7c81bf8446d934d09752-merged.mount: Deactivated successfully.
Nov 25 11:43:42 np0005535469 podman[332774]: 2025-11-25 16:43:42.274112414 +0000 UTC m=+0.350636104 container cleanup b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:43:42 np0005535469 systemd[1]: libpod-conmon-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da.scope: Deactivated successfully.
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.281 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Successfully updated port: b5793671-5020-4692-92f1-65a87bcdf38e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.296 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.296 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquired lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.296 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:43:42 np0005535469 podman[332834]: 2025-11-25 16:43:42.387375696 +0000 UTC m=+0.090294240 container remove b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.393 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[876a1a05-d9c2-4a89-b4e8-8d99dd77aa7d]: (4, ('Tue Nov 25 04:43:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf (b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da)\nb39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da\nTue Nov 25 04:43:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf (b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da)\nb39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.395 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eee44d6b-e85f-4963-aa76-1ecd8ebe546c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.396 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:42 np0005535469 kernel: tap126cf01f-b0: left promiscuous mode
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.452 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:43:42 np0005535469 nova_compute[254092]: 2025-11-25 16:43:42.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.469 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da7f4e55-b4ec-4142-b340-1d508122db0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8abf357-437d-419a-b3cb-53670a993438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.491 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f152c1ba-251a-411e-811a-38e7f21f2ed9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf713ecd-424b-4620-8be0-e8c07fc87bee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546135, 'reachable_time': 34050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332849, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.511 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:43:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.511 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9f90325e-ede1-493a-bac0-2d4bb9478968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:42 np0005535469 systemd[1]: run-netns-ovnmeta\x2d126cf01f\x2db6da\x2d4bbc\x2d847b\x2d2d16936986cf.mount: Deactivated successfully.
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.530 254096 INFO nova.virt.libvirt.driver [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deleting instance files /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93_del#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.530 254096 INFO nova.virt.libvirt.driver [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deletion of /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93_del complete#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.583 254096 INFO nova.compute.manager [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 2.56 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.584 254096 DEBUG oslo.service.loopingcall [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.584 254096 DEBUG nova.compute.manager [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.584 254096 DEBUG nova.network.neutron [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:43:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 288 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.658 254096 INFO nova.virt.libvirt.driver [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deleting instance files /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_del#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.659 254096 INFO nova.virt.libvirt.driver [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deletion of /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_del complete#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.720 254096 INFO nova.compute.manager [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 2.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.721 254096 DEBUG oslo.service.loopingcall [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.721 254096 DEBUG nova.compute.manager [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:43:43 np0005535469 nova_compute[254092]: 2025-11-25 16:43:43.722 254096 DEBUG nova.network.neutron [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.024 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-changed-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.025 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Refreshing instance network info cache due to event network-changed-b5793671-5020-4692-92f1-65a87bcdf38e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.025 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.571 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updating instance_info_cache with network_info: [{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.737 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Releasing lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.737 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance network_info: |[{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.738 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.738 254096 DEBUG nova.network.neutron [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Refreshing network info cache for port b5793671-5020-4692-92f1-65a87bcdf38e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.741 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start _get_guest_xml network_info=[{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.749 254096 WARNING nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.758 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.759 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.765 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.766 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.767 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.767 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.767 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.770 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.776 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.888 254096 DEBUG nova.network.neutron [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.943 254096 INFO nova.compute.manager [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.944 254096 DEBUG nova.network.neutron [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:44 np0005535469 nova_compute[254092]: 2025-11-25 16:43:44.998 254096 INFO nova.compute.manager [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.059 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.060 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.100 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.218 254096 DEBUG oslo_concurrency.processutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397520148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.302 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.339 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.344 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 200 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 5.3 MiB/s wr, 161 op/s
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773892915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.696 254096 DEBUG oslo_concurrency.processutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.704 254096 DEBUG nova.compute.provider_tree [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.725 254096 DEBUG nova.scheduler.client.report [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237226620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.805 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.806 254096 DEBUG nova.virt.libvirt.vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-816918544',display_name='tempest-ServerMetadataNegativeTestJSON-server-816918544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-816918544',id=76,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0a920505c8240228ed836913ffcdbe4',ramdisk_id='',reservation_id='r-0p559jg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1696100389',owner_user_
name='tempest-ServerMetadataNegativeTestJSON-1696100389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:37Z,user_data=None,user_id='5e0e4ce0eeda4a79ab738e1f8dc0f725',uuid=44f3c94a-060c-4650-bfe7-a214c6a10207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.807 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converting VIF {"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.807 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.808 254096 DEBUG nova.objects.instance [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f3c94a-060c-4650-bfe7-a214c6a10207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.819 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <uuid>44f3c94a-060c-4650-bfe7-a214c6a10207</uuid>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <name>instance-0000004c</name>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-816918544</nova:name>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:43:44</nova:creationTime>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:user uuid="5e0e4ce0eeda4a79ab738e1f8dc0f725">tempest-ServerMetadataNegativeTestJSON-1696100389-project-member</nova:user>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:project uuid="e0a920505c8240228ed836913ffcdbe4">tempest-ServerMetadataNegativeTestJSON-1696100389</nova:project>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <nova:port uuid="b5793671-5020-4692-92f1-65a87bcdf38e">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <entry name="serial">44f3c94a-060c-4650-bfe7-a214c6a10207</entry>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <entry name="uuid">44f3c94a-060c-4650-bfe7-a214c6a10207</entry>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/44f3c94a-060c-4650-bfe7-a214c6a10207_disk">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e1:5b:98"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <target dev="tapb5793671-50"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/console.log" append="off"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:43:45 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:43:45 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:43:45 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:43:45 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.821 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Preparing to wait for external event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.821 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.821 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.822 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.822 254096 DEBUG nova.virt.libvirt.vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-816918544',display_name='tempest-ServerMetadataNegativeTestJSON-server-816918544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-816918544',id=76,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0a920505c8240228ed836913ffcdbe4',ramdisk_id='',reservation_id='r-0p559jg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1696100389',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1696100389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:37Z,user_data=None,user_id='5e0e4ce0eeda4a79ab738e1f8dc0f725',uuid=44f3c94a-060c-4650-bfe7-a214c6a10207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.823 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converting VIF {"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.823 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.823 254096 DEBUG os_vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5793671-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5793671-50, col_values=(('external_ids', {'iface-id': 'b5793671-5020-4692-92f1-65a87bcdf38e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:5b:98', 'vm-uuid': '44f3c94a-060c-4650-bfe7-a214c6a10207'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:45 np0005535469 NetworkManager[48891]: <info>  [1764089025.8300] manager: (tapb5793671-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.835 254096 INFO os_vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50')#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.855 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.857 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.899 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.899 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.899 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] No VIF found with MAC fa:16:3e:e1:5b:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.900 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Using config drive#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.965 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:45 np0005535469 nova_compute[254092]: 2025-11-25 16:43:45.972 254096 INFO nova.scheduler.client.report [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Deleted allocations for instance f5a259d0-4460-4335-aa4a-f874f93a7e93#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.055 254096 DEBUG oslo_concurrency.processutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.116 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.309 254096 DEBUG nova.compute.manager [req-95c9df47-c4c4-4bf2-95cf-75aed0f8b948 req-21291017-af0c-4f65-aee2-e5fa11588642 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-deleted-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.310 254096 DEBUG nova.compute.manager [req-95c9df47-c4c4-4bf2-95cf-75aed0f8b948 req-21291017-af0c-4f65-aee2-e5fa11588642 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-deleted-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.469 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Creating config drive at /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.473 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqochm2wc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029103014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.590 254096 DEBUG oslo_concurrency.processutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.596 254096 DEBUG nova.compute.provider_tree [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.608 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqochm2wc" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.629 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.633 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.663 254096 DEBUG nova.scheduler.client.report [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.688 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.733 254096 INFO nova.scheduler.client.report [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Deleted allocations for instance 69b3cbbb-9713-4f49-9e67-1f33a3ae2642#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.735 254096 DEBUG nova.network.neutron [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updated VIF entry in instance network info cache for port b5793671-5020-4692-92f1-65a87bcdf38e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.736 254096 DEBUG nova.network.neutron [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updating instance_info_cache with network_info: [{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.753 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.754 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.754 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] No waiting events found dispatching network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 WARNING nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received unexpected event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-unplugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.757 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] No waiting events found dispatching network-vif-unplugged-a297c9f1-753f-4f96-b8e4-38a42969484d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.757 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-unplugged-a297c9f1-753f-4f96-b8e4-38a42969484d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.757 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.758 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.758 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.758 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.759 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] No waiting events found dispatching network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.759 254096 WARNING nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received unexpected event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:43:46 np0005535469 nova_compute[254092]: 2025-11-25 16:43:46.837 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.014 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.015 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deleting local config drive /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config because it was imported into RBD.#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.024 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.025 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.025 254096 INFO nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Rebooting instance#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.043 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.043 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.044 254096 DEBUG nova.network.neutron [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:43:47 np0005535469 kernel: tapb5793671-50: entered promiscuous mode
Nov 25 11:43:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:47Z|00714|binding|INFO|Claiming lport b5793671-5020-4692-92f1-65a87bcdf38e for this chassis.
Nov 25 11:43:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:47Z|00715|binding|INFO|b5793671-5020-4692-92f1-65a87bcdf38e: Claiming fa:16:3e:e1:5b:98 10.100.0.7
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:47 np0005535469 NetworkManager[48891]: <info>  [1764089027.0755] manager: (tapb5793671-50): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.084 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5b:98 10.100.0.7'], port_security=['fa:16:3e:e1:5b:98 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44f3c94a-060c-4650-bfe7-a214c6a10207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b60c60-81b1-400f-ad99-152388e55616', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0a920505c8240228ed836913ffcdbe4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fab53028-087c-4e81-a981-98f5af5e037e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=268fa885-b3f3-471a-9d08-3e6f7bd64b52, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b5793671-5020-4692-92f1-65a87bcdf38e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.087 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b5793671-5020-4692-92f1-65a87bcdf38e in datapath 41b60c60-81b1-400f-ad99-152388e55616 bound to our chassis#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.089 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b60c60-81b1-400f-ad99-152388e55616#033[00m
Nov 25 11:43:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:47Z|00716|binding|INFO|Setting lport b5793671-5020-4692-92f1-65a87bcdf38e ovn-installed in OVS
Nov 25 11:43:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:47Z|00717|binding|INFO|Setting lport b5793671-5020-4692-92f1-65a87bcdf38e up in Southbound
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.102 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2a5c41-9162-4b55-87b2-39f6a719888a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.103 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41b60c60-81 in ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.105 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41b60c60-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.105 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4f692d-fe12-4dc6-9c3b-75c2a0c3ef62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.106 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ab641-0d5f-42a8-b885-3d8d85a1f824]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 systemd-udevd[333033]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:43:47 np0005535469 systemd-machined[216343]: New machine qemu-89-instance-0000004c.
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.119 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a7eb0cec-e9af-455e-a5cf-722f7a11e738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 NetworkManager[48891]: <info>  [1764089027.1242] device (tapb5793671-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:43:47 np0005535469 systemd[1]: Started Virtual Machine qemu-89-instance-0000004c.
Nov 25 11:43:47 np0005535469 NetworkManager[48891]: <info>  [1764089027.1264] device (tapb5793671-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[493eced0-fa9c-4cb0-a5f4-38d81c0b3fd6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.179 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1846c71f-1e35-414a-9f81-27ea0ee74749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.184 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a31824fd-ab93-47a4-9723-f10d42fb5787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 NetworkManager[48891]: <info>  [1764089027.1855] manager: (tap41b60c60-80): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.220 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0a946-b05d-4876-9770-4c315862d12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.223 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[23bcb1a6-0696-4f24-a0e5-71cfb8b0c465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 NetworkManager[48891]: <info>  [1764089027.2586] device (tap41b60c60-80): carrier: link connected
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.265 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cebbc111-ed1c-4b43-91a2-32766727a56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.286 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66a49ce2-f17f-46d7-a8b2-a2d428005b8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b60c60-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:21:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548484, 'reachable_time': 16232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333065, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f84f807-83d7-4650-a240-534955b6e425]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:219a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548484, 'tstamp': 548484}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333066, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24718d09-ab56-499f-a6f7-d45937264681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b60c60-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:21:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548484, 'reachable_time': 16232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333067, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.358 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe0cb6f-a133-4137-9d35-97b9b2cdb86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c6d02a-8522-461a-87ed-fbd8552ebcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.529 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b60c60-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.529 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.530 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b60c60-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:47 np0005535469 NetworkManager[48891]: <info>  [1764089027.5323] manager: (tap41b60c60-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Nov 25 11:43:47 np0005535469 kernel: tap41b60c60-80: entered promiscuous mode
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.534 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b60c60-80, col_values=(('external_ids', {'iface-id': 'd615f472-f4a0-4cb5-a4b8-8b9aa4b9f756'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:47Z|00718|binding|INFO|Releasing lport d615f472-f4a0-4cb5-a4b8-8b9aa4b9f756 from this chassis (sb_readonly=0)
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.553 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41b60c60-81b1-400f-ad99-152388e55616.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41b60c60-81b1-400f-ad99-152388e55616.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67111b8a-396a-4b30-b775-cc557d0c1311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.556 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-41b60c60-81b1-400f-ad99-152388e55616
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/41b60c60-81b1-400f-ad99-152388e55616.pid.haproxy
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 41b60c60-81b1-400f-ad99-152388e55616
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:43:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.557 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'env', 'PROCESS_TAG=haproxy-41b60c60-81b1-400f-ad99-152388e55616', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41b60c60-81b1-400f-ad99-152388e55616.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:43:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 5.8 MiB/s wr, 160 op/s
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.790 254096 DEBUG nova.compute.manager [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.791 254096 DEBUG oslo_concurrency.lockutils [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.793 254096 DEBUG oslo_concurrency.lockutils [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.793 254096 DEBUG oslo_concurrency.lockutils [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.793 254096 DEBUG nova.compute.manager [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Processing event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.894 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089027.893465, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.895 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Started (Lifecycle Event)#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.897 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.902 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.905 254096 INFO nova.virt.libvirt.driver [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance spawned successfully.#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.906 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.911 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.914 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.930 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.931 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.931 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.932 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.932 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.933 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.937 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.938 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089027.8937201, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.938 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.972 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.976 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089027.900881, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:47 np0005535469 nova_compute[254092]: 2025-11-25 16:43:47.977 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:43:48 np0005535469 podman[333141]: 2025-11-25 16:43:47.927112967 +0000 UTC m=+0.042617747 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.036 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.039 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.058 254096 INFO nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 10.36 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.058 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.059 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:43:48 np0005535469 podman[333141]: 2025-11-25 16:43:48.126692431 +0000 UTC m=+0.242197201 container create b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.158 254096 INFO nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 11.49 seconds to build instance.#033[00m
Nov 25 11:43:48 np0005535469 systemd[1]: Started libpod-conmon-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038.scope.
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.175 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c02ee99f386b93fdd51ea0e33033064775b4969dd78ae64bfa17f9e14b0747ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:48 np0005535469 podman[333141]: 2025-11-25 16:43:48.224810323 +0000 UTC m=+0.340315113 container init b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:43:48 np0005535469 podman[333141]: 2025-11-25 16:43:48.232104961 +0000 UTC m=+0.347609731 container start b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:43:48 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : New worker (333162) forked
Nov 25 11:43:48 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : Loading success.
Nov 25 11:43:48 np0005535469 nova_compute[254092]: 2025-11-25 16:43:48.312 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.018 254096 DEBUG nova.network.neutron [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.036 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.037 254096 DEBUG nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:49 np0005535469 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 11:43:49 np0005535469 NetworkManager[48891]: <info>  [1764089029.2393] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:49Z|00719|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 11:43:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:49Z|00720|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 11:43:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:49Z|00721|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.254 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.255 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.257 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a067d70-9843-40f0-8059-ed8f69ae6404]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.258 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 11:43:49 np0005535469 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000048.scope: Consumed 15.256s CPU time.
Nov 25 11:43:49 np0005535469 systemd-machined[216343]: Machine qemu-85-instance-00000048 terminated.
Nov 25 11:43:49 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : haproxy version is 2.8.14-c23fe91
Nov 25 11:43:49 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : path to executable is /usr/sbin/haproxy
Nov 25 11:43:49 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [WARNING]  (331028) : Exiting Master process...
Nov 25 11:43:49 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [ALERT]    (331028) : Current worker (331030) exited with code 143 (Terminated)
Nov 25 11:43:49 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [WARNING]  (331028) : All workers exited. Exiting... (0)
Nov 25 11:43:49 np0005535469 systemd[1]: libpod-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62.scope: Deactivated successfully.
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 podman[333192]: 2025-11-25 16:43:49.404390754 +0000 UTC m=+0.049526274 container died 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.423 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.423 254096 DEBUG nova.objects.instance [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.435 254096 DEBUG nova.virt.libvirt.vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.436 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.436 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.437 254096 DEBUG os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:43:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62-userdata-shm.mount: Deactivated successfully.
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.440 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-17c59fe121ea6f90e0e0f73a558d007bdcbf7807f621ad6454cfd7572f9bd582-merged.mount: Deactivated successfully.
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.449 254096 INFO os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')#033[00m
Nov 25 11:43:49 np0005535469 podman[333192]: 2025-11-25 16:43:49.451464821 +0000 UTC m=+0.096600341 container cleanup 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.458 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:43:49 np0005535469 systemd[1]: libpod-conmon-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62.scope: Deactivated successfully.
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.462 254096 WARNING nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.467 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.467 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.470 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.objects.instance [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.484 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:49 np0005535469 podman[333231]: 2025-11-25 16:43:49.520288949 +0000 UTC m=+0.042535186 container remove 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.523 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089014.5038822, ef028cf3-f8af-4112-9424-8a12fdda7690 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.525 254096 INFO nova.compute.manager [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5eab71d1-f244-417a-8338-d65924e7fee2]: (4, ('Tue Nov 25 04:43:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62)\n687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62\nTue Nov 25 04:43:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62)\n687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.530 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35ae4606-831a-4b8c-a011-09bde389de18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.532 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.546 254096 DEBUG nova.compute.manager [None req-0a8a06a5-9d0f-4783-bff6-1fe6e9110e30 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.549 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88f85e59-5cd0-4269-9fed-fbdcc3403e40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f42e547c-1641-4025-8046-08d76d8badb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc53ddb5-08e8-420d-ae31-e5bd4202e040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18bb5d93-f5a6-43f8-a013-d08476d1e729]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544341, 'reachable_time': 25878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333246, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.597 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:43:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.597 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b44e8f0e-4678-42f5-bc03-220710b653a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 5.7 MiB/s wr, 155 op/s
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.878 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.878 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.879 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.879 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.879 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] No waiting events found dispatching network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 WARNING nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received unexpected event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e for instance with vm_state active and task_state None.#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.881 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.881 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.881 254096 WARNING nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 25 11:43:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219119069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.955 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:49 np0005535469 nova_compute[254092]: 2025-11-25 16:43:49.985 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:43:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3001.2 total, 600.0 interval#012Cumulative writes: 25K writes, 98K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8494 syncs, 3.01 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8833 writes, 32K keys, 8833 commit groups, 1.0 writes per commit group, ingest: 32.32 MB, 0.05 MB/s#012Interval WAL: 8833 writes, 3452 syncs, 2.56 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:43:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:43:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169005987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.455 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.457 254096 DEBUG nova.virt.libvirt.vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.457 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.458 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.460 254096 DEBUG nova.objects.instance [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.480 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <name>instance-00000048</name>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:43:49</nova:creationTime>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <target dev="tap4fe8c3a9-70"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:43:50 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:43:50 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:43:50 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:43:50 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.481 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.481 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.481 254096 DEBUG nova.virt.libvirt.vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.482 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.482 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.482 254096 DEBUG os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.483 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.486 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.486 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.487 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.4887] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.495 254096 INFO os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')#033[00m
Nov 25 11:43:50 np0005535469 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 11:43:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:50Z|00722|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 11:43:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:50Z|00723|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.5757] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.585 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.586 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525#033[00m
Nov 25 11:43:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:50Z|00724|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 11:43:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:50Z|00725|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 11:43:50 np0005535469 systemd-udevd[333321]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4904131-8bb6-4230-8a26-90fcd947b878]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.600 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.602 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb68b11-444b-4a77-beff-64d95c4530f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d46b36c-7011-4184-aed9-d700519ee3b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.6110] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.6120] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.616 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5538d68e-611d-461b-96e4-1b9911b2e972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 systemd-machined[216343]: New machine qemu-90-instance-00000048.
Nov 25 11:43:50 np0005535469 systemd[1]: Started Virtual Machine qemu-90-instance-00000048.
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e44a5997-0fa1-46b3-90d9-2cb21795daa8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.665 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf145470-f41b-40f5-99d9-2c0fab65148b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.6741] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/314)
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.673 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[96560310-264f-448b-87dc-7a79941251b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.707 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd25ac3-3465-429b-8cf7-4242aea69783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.710 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[01b46ca3-dfd4-4260-8d08-ed3e8b9afa88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.7389] device (tap50ea1716-90): carrier: link connected
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.742 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f23f5cf-0f22-4d67-8eaf-20c8663b22c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2a0b17-08f5-4252-8bc0-8129fd820fdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548832, 'reachable_time': 28935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333355, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af56328a-b15f-46a2-98e0-1687ad570228]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548832, 'tstamp': 548832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333356, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.791 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b64e2b8-1c70-4e5e-a266-492d2f46314e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548832, 'reachable_time': 28935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333357, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efe255be-cb18-44ca-9a44-d724742e1a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.889 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c6be5e-487c-4e2d-94e2-8dfe13c560bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.891 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.891 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:43:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.892 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 NetworkManager[48891]: <info>  [1764089030.8983] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 25 11:43:50 np0005535469 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:50Z|00726|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.908 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 nova_compute[254092]: 2025-11-25 16:43:50.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.920 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.921 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b403d4f-5dda-4ad0-93e5-ce548bbc2fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.923 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:43:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.924 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.091 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.092 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089031.0913043, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.092 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.097 254096 DEBUG nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.100 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance rebooted successfully.#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.101 254096 DEBUG nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.112 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.139 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.139 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089031.0942383, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.139 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.157 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.160 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.196 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.198 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105651636875692 of space, bias 1.0, pg target 0.33169549106270757 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:43:51 np0005535469 podman[333529]: 2025-11-25 16:43:51.378130041 +0000 UTC m=+0.060193024 container create 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 11:43:51 np0005535469 systemd[1]: Started libpod-conmon-9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a.scope.
Nov 25 11:43:51 np0005535469 podman[333529]: 2025-11-25 16:43:51.343608804 +0000 UTC m=+0.025671817 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:43:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0644dc8bc60d336343ae20e3273c06bdd19306c75a44733d23fab24fe3d7c49b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:51 np0005535469 podman[333529]: 2025-11-25 16:43:51.479994864 +0000 UTC m=+0.162057847 container init 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:43:51 np0005535469 podman[333529]: 2025-11-25 16:43:51.485444702 +0000 UTC m=+0.167507685 container start 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 11:43:51 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [NOTICE]   (333559) : New worker (333561) forked
Nov 25 11:43:51 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [NOTICE]   (333559) : Loading success.
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 232 op/s
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dd228caf-a806-47f0-ae1a-6816370d9963 does not exist
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 82fd6c2a-ef87-4c00-a018-7834258fc4c1 does not exist
Nov 25 11:43:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8ee99efa-6dfd-4573-b90d-79d1b13db401 does not exist
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.946 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.947 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 WARNING nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 WARNING nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 WARNING nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.951 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.953 254096 INFO nova.compute.manager [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Terminating instance#033[00m
Nov 25 11:43:51 np0005535469 nova_compute[254092]: 2025-11-25 16:43:51.954 254096 DEBUG nova.compute.manager [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:43:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:43:52 np0005535469 kernel: tapb5793671-50 (unregistering): left promiscuous mode
Nov 25 11:43:52 np0005535469 NetworkManager[48891]: <info>  [1764089032.0425] device (tapb5793671-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:43:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:52Z|00727|binding|INFO|Releasing lport b5793671-5020-4692-92f1-65a87bcdf38e from this chassis (sb_readonly=0)
Nov 25 11:43:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:52Z|00728|binding|INFO|Setting lport b5793671-5020-4692-92f1-65a87bcdf38e down in Southbound
Nov 25 11:43:52 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:52Z|00729|binding|INFO|Removing iface tapb5793671-50 ovn-installed in OVS
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.051 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.069 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5b:98 10.100.0.7'], port_security=['fa:16:3e:e1:5b:98 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44f3c94a-060c-4650-bfe7-a214c6a10207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b60c60-81b1-400f-ad99-152388e55616', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0a920505c8240228ed836913ffcdbe4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fab53028-087c-4e81-a981-98f5af5e037e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=268fa885-b3f3-471a-9d08-3e6f7bd64b52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b5793671-5020-4692-92f1-65a87bcdf38e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:43:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.071 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b5793671-5020-4692-92f1-65a87bcdf38e in datapath 41b60c60-81b1-400f-ad99-152388e55616 unbound from our chassis#033[00m
Nov 25 11:43:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.072 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41b60c60-81b1-400f-ad99-152388e55616, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.074 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bdc61d-a8bb-452e-8cfe-55f68ae44633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.075 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 namespace which is not needed anymore#033[00m
Nov 25 11:43:52 np0005535469 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Nov 25 11:43:52 np0005535469 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Consumed 4.691s CPU time.
Nov 25 11:43:52 np0005535469 systemd-machined[216343]: Machine qemu-89-instance-0000004c terminated.
Nov 25 11:43:52 np0005535469 NetworkManager[48891]: <info>  [1764089032.1732] manager: (tapb5793671-50): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.189 254096 INFO nova.virt.libvirt.driver [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance destroyed successfully.#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.190 254096 DEBUG nova.objects.instance [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lazy-loading 'resources' on Instance uuid 44f3c94a-060c-4650-bfe7-a214c6a10207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.208 254096 DEBUG nova.virt.libvirt.vif [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-816918544',display_name='tempest-ServerMetadataNegativeTestJSON-server-816918544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-816918544',id=76,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0a920505c8240228ed836913ffcdbe4',ramdisk_id='',reservation_id='r-0p559jg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1696100389',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1696100389-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:48Z,user_data=None,user_id='5e0e4ce0eeda4a79ab738e1f8dc0f725',uuid=44f3c94a-060c-4650-bfe7-a214c6a10207,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.208 254096 DEBUG nova.network.os_vif_util [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converting VIF {"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.209 254096 DEBUG nova.network.os_vif_util [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.209 254096 DEBUG os_vif [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.211 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5793671-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.219 254096 INFO os_vif [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50')#033[00m
Nov 25 11:43:52 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : haproxy version is 2.8.14-c23fe91
Nov 25 11:43:52 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : path to executable is /usr/sbin/haproxy
Nov 25 11:43:52 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [WARNING]  (333160) : Exiting Master process...
Nov 25 11:43:52 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [WARNING]  (333160) : Exiting Master process...
Nov 25 11:43:52 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [ALERT]    (333160) : Current worker (333162) exited with code 143 (Terminated)
Nov 25 11:43:52 np0005535469 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [WARNING]  (333160) : All workers exited. Exiting... (0)
Nov 25 11:43:52 np0005535469 systemd[1]: libpod-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038.scope: Deactivated successfully.
Nov 25 11:43:52 np0005535469 podman[333705]: 2025-11-25 16:43:52.391835033 +0000 UTC m=+0.217819751 container died b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:43:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c02ee99f386b93fdd51ea0e33033064775b4969dd78ae64bfa17f9e14b0747ea-merged.mount: Deactivated successfully.
Nov 25 11:43:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038-userdata-shm.mount: Deactivated successfully.
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.866 254096 INFO nova.compute.manager [None req-c6203dad-193b-429b-9337-df54fabcd9db aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Get console output#033[00m
Nov 25 11:43:52 np0005535469 nova_compute[254092]: 2025-11-25 16:43:52.881 254096 INFO oslo.privsep.daemon [None req-c6203dad-193b-429b-9337-df54fabcd9db aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpxhc0cb_k/privsep.sock']#033[00m
Nov 25 11:43:52 np0005535469 podman[333705]: 2025-11-25 16:43:52.92586546 +0000 UTC m=+0.751850178 container cleanup b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:43:52 np0005535469 systemd[1]: libpod-conmon-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038.scope: Deactivated successfully.
Nov 25 11:43:53 np0005535469 podman[333791]: 2025-11-25 16:43:53.032358439 +0000 UTC m=+0.080234407 container remove b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.040 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba64593d-8232-4449-975b-5e03e6576e73]: (4, ('Tue Nov 25 04:43:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 (b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038)\nb12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038\nTue Nov 25 04:43:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 (b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038)\nb12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.042 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57b0f6f9-eb2d-4380-a321-6497a56d992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.043 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b60c60-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:43:53 np0005535469 kernel: tap41b60c60-80: left promiscuous mode
Nov 25 11:43:53 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:53 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.069 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d5012f-ce6c-4316-9d33-4f93686ad95c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f647b802-ac1d-410a-81ab-24496712afbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee49206c-6de2-4e30-859b-df6de6f3cdcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.115 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9e75fa-ba98-42f8-a7b4-2e7c93aed366]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548476, 'reachable_time': 21228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333823, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 systemd[1]: run-netns-ovnmeta\x2d41b60c60\x2d81b1\x2d400f\x2dad99\x2d152388e55616.mount: Deactivated successfully.
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.122 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:43:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.122 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a21ee7-d4c3-46f9-8656-ac5d6d5a84b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.147530944 +0000 UTC m=+0.059640099 container create bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.111893427 +0000 UTC m=+0.024002502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:43:53 np0005535469 systemd[1]: Started libpod-conmon-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope.
Nov 25 11:43:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.311202144 +0000 UTC m=+0.223311209 container init bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:43:53 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.320550338 +0000 UTC m=+0.232659383 container start bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:43:53 np0005535469 friendly_bohr[333831]: 167 167
Nov 25 11:43:53 np0005535469 systemd[1]: libpod-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope: Deactivated successfully.
Nov 25 11:43:53 np0005535469 conmon[333831]: conmon bb7c4c75e632a45836a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope/container/memory.events
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.369809204 +0000 UTC m=+0.281918249 container attach bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.370274527 +0000 UTC m=+0.282383562 container died bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:43:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ecb915abef550dd25c802353cc02a98f9c0fe4d02ace417380da0cc868af7931-merged.mount: Deactivated successfully.
Nov 25 11:43:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Nov 25 11:43:53 np0005535469 podman[333814]: 2025-11-25 16:43:53.904707436 +0000 UTC m=+0.816816471 container remove bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:43:54 np0005535469 systemd[1]: libpod-conmon-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope: Deactivated successfully.
Nov 25 11:43:54 np0005535469 nova_compute[254092]: 2025-11-25 16:43:54.058 254096 INFO oslo.privsep.daemon [None req-c6203dad-193b-429b-9337-df54fabcd9db aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 11:43:54 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.907 333852 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 11:43:54 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.911 333852 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 11:43:54 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.913 333852 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 25 11:43:54 np0005535469 nova_compute[254092]: 2025-11-25 16:43:53.913 333852 INFO oslo.privsep.daemon [-] privsep daemon running as pid 333852#033[00m
Nov 25 11:43:54 np0005535469 nova_compute[254092]: 2025-11-25 16:43:54.171 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:43:54 np0005535469 podman[333860]: 2025-11-25 16:43:54.084717369 +0000 UTC m=+0.024144876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:43:54 np0005535469 podman[333860]: 2025-11-25 16:43:54.337342123 +0000 UTC m=+0.276769600 container create c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:43:54 np0005535469 systemd[1]: Started libpod-conmon-c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02.scope.
Nov 25 11:43:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:54 np0005535469 podman[333860]: 2025-11-25 16:43:54.76297264 +0000 UTC m=+0.702400137 container init c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:43:54 np0005535469 podman[333860]: 2025-11-25 16:43:54.769694352 +0000 UTC m=+0.709121839 container start c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:43:54 np0005535469 podman[333860]: 2025-11-25 16:43:54.851740699 +0000 UTC m=+0.791168186 container attach c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 11:43:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:43:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/427274679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:43:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:43:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/427274679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:43:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 128 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.9 MiB/s wr, 225 op/s
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.789 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-unplugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] No waiting events found dispatching network-vif-unplugged-b5793671-5020-4692-92f1-65a87bcdf38e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-unplugged-b5793671-5020-4692-92f1-65a87bcdf38e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] No waiting events found dispatching network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:43:55 np0005535469 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 WARNING nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received unexpected event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:43:55 np0005535469 sweet_herschel[333877]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:43:55 np0005535469 sweet_herschel[333877]: --> relative data size: 1.0
Nov 25 11:43:55 np0005535469 sweet_herschel[333877]: --> All data devices are unavailable
Nov 25 11:43:55 np0005535469 systemd[1]: libpod-c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02.scope: Deactivated successfully.
Nov 25 11:43:55 np0005535469 podman[333860]: 2025-11-25 16:43:55.830763929 +0000 UTC m=+1.770191466 container died c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:43:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:43:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66-merged.mount: Deactivated successfully.
Nov 25 11:43:56 np0005535469 podman[333860]: 2025-11-25 16:43:56.287468599 +0000 UTC m=+2.226896066 container remove c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 11:43:56 np0005535469 systemd[1]: libpod-conmon-c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02.scope: Deactivated successfully.
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.465 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089021.4648309, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.466 254096 INFO nova.compute.manager [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.486 254096 DEBUG nova.compute.manager [None req-2dc35482-4134-4a50-94e3-cc87e7901877 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.805 254096 INFO nova.virt.libvirt.driver [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deleting instance files /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207_del#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.805 254096 INFO nova.virt.libvirt.driver [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deletion of /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207_del complete#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.894 254096 INFO nova.compute.manager [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 4.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.894 254096 DEBUG oslo.service.loopingcall [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.895 254096 DEBUG nova.compute.manager [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:43:56 np0005535469 nova_compute[254092]: 2025-11-25 16:43:56.895 254096 DEBUG nova.network.neutron [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:43:56 np0005535469 podman[334059]: 2025-11-25 16:43:56.921416448 +0000 UTC m=+0.052056203 container create 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:43:56 np0005535469 systemd[1]: Started libpod-conmon-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope.
Nov 25 11:43:56 np0005535469 podman[334059]: 2025-11-25 16:43:56.898411283 +0000 UTC m=+0.029051038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:43:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:57 np0005535469 podman[334059]: 2025-11-25 16:43:57.01920459 +0000 UTC m=+0.149844335 container init 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:43:57 np0005535469 podman[334059]: 2025-11-25 16:43:57.025706167 +0000 UTC m=+0.156345922 container start 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:43:57 np0005535469 systemd[1]: libpod-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope: Deactivated successfully.
Nov 25 11:43:57 np0005535469 amazing_feynman[334074]: 167 167
Nov 25 11:43:57 np0005535469 conmon[334074]: conmon 87001c545f35b5f163ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope/container/memory.events
Nov 25 11:43:57 np0005535469 podman[334059]: 2025-11-25 16:43:57.062536856 +0000 UTC m=+0.193176631 container attach 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:43:57 np0005535469 podman[334059]: 2025-11-25 16:43:57.063017819 +0000 UTC m=+0.193657574 container died 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:43:57 np0005535469 nova_compute[254092]: 2025-11-25 16:43:57.091 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089022.0897741, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:43:57 np0005535469 nova_compute[254092]: 2025-11-25 16:43:57.092 254096 INFO nova.compute.manager [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:43:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c8fca0df505a6272f4dfeafd5006482cda961329ae10c5308395bd080f232575-merged.mount: Deactivated successfully.
Nov 25 11:43:57 np0005535469 podman[334059]: 2025-11-25 16:43:57.207555241 +0000 UTC m=+0.338194996 container remove 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:43:57 np0005535469 nova_compute[254092]: 2025-11-25 16:43:57.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:57 np0005535469 nova_compute[254092]: 2025-11-25 16:43:57.246 254096 DEBUG nova.compute.manager [None req-ef07e316-ff35-48af-947f-f9ed5a65919a - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:43:57 np0005535469 systemd[1]: libpod-conmon-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope: Deactivated successfully.
Nov 25 11:43:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:43:57Z|00730|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 11:43:57 np0005535469 nova_compute[254092]: 2025-11-25 16:43:57.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:57 np0005535469 podman[334100]: 2025-11-25 16:43:57.39109137 +0000 UTC m=+0.054518650 container create e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:43:57 np0005535469 podman[334100]: 2025-11-25 16:43:57.362071733 +0000 UTC m=+0.025499033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:43:57 np0005535469 systemd[1]: Started libpod-conmon-e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c.scope.
Nov 25 11:43:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:43:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 121 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 433 KiB/s wr, 185 op/s
Nov 25 11:43:57 np0005535469 podman[334100]: 2025-11-25 16:43:57.796125198 +0000 UTC m=+0.459552508 container init e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:43:57 np0005535469 podman[334100]: 2025-11-25 16:43:57.804785553 +0000 UTC m=+0.468212833 container start e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:43:57 np0005535469 podman[334100]: 2025-11-25 16:43:57.972116403 +0000 UTC m=+0.635543703 container attach e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:43:58 np0005535469 nova_compute[254092]: 2025-11-25 16:43:58.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:43:58 np0005535469 nova_compute[254092]: 2025-11-25 16:43:58.425 254096 DEBUG nova.network.neutron [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:43:58 np0005535469 nova_compute[254092]: 2025-11-25 16:43:58.470 254096 INFO nova.compute.manager [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 25 11:43:58 np0005535469 nova_compute[254092]: 2025-11-25 16:43:58.554 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:43:58 np0005535469 nova_compute[254092]: 2025-11-25 16:43:58.554 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:43:58 np0005535469 nova_compute[254092]: 2025-11-25 16:43:58.580 254096 DEBUG nova.compute.manager [req-4bc1b623-8588-494f-8bb7-7de3bce07d17 req-909a90a9-4f53-4ade-b8bc-68b264ff0e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-deleted-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]: {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:    "0": [
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:        {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "devices": [
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "/dev/loop3"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            ],
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_name": "ceph_lv0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_size": "21470642176",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "name": "ceph_lv0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "tags": {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cluster_name": "ceph",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.crush_device_class": "",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.encrypted": "0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osd_id": "0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.type": "block",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.vdo": "0"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            },
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "type": "block",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "vg_name": "ceph_vg0"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:        }
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:    ],
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:    "1": [
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:        {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "devices": [
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "/dev/loop4"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            ],
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_name": "ceph_lv1",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_size": "21470642176",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "name": "ceph_lv1",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "tags": {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cluster_name": "ceph",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.crush_device_class": "",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.encrypted": "0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osd_id": "1",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.type": "block",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.vdo": "0"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            },
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "type": "block",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "vg_name": "ceph_vg1"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:        }
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:    ],
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:    "2": [
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:        {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "devices": [
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "/dev/loop5"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            ],
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_name": "ceph_lv2",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_size": "21470642176",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "name": "ceph_lv2",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "tags": {
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.cluster_name": "ceph",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.crush_device_class": "",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.encrypted": "0",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osd_id": "2",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.type": "block",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:                "ceph.vdo": "0"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            },
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "type": "block",
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:            "vg_name": "ceph_vg2"
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:        }
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]:    ]
Nov 25 11:43:58 np0005535469 recursing_chaplygin[334116]: }
Nov 25 11:43:58 np0005535469 systemd[1]: libpod-e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c.scope: Deactivated successfully.
Nov 25 11:43:58 np0005535469 podman[334100]: 2025-11-25 16:43:58.609290149 +0000 UTC m=+1.272717429 container died e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:43:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c-merged.mount: Deactivated successfully.
Nov 25 11:43:59 np0005535469 podman[334100]: 2025-11-25 16:43:59.131383963 +0000 UTC m=+1.794811243 container remove e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:43:59 np0005535469 systemd[1]: libpod-conmon-e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c.scope: Deactivated successfully.
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.151 254096 DEBUG oslo_concurrency.processutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:43:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:43:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809511181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.644 254096 DEBUG oslo_concurrency.processutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.650 254096 DEBUG nova.compute.provider_tree [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.663 254096 DEBUG nova.scheduler.client.report [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:43:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 121 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 164 op/s
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.695 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.750 254096 INFO nova.scheduler.client.report [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Deleted allocations for instance 44f3c94a-060c-4650-bfe7-a214c6a10207#033[00m
Nov 25 11:43:59 np0005535469 podman[334302]: 2025-11-25 16:43:59.77927005 +0000 UTC m=+0.072769645 container create dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:43:59 np0005535469 podman[334302]: 2025-11-25 16:43:59.727086774 +0000 UTC m=+0.020586369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:43:59 np0005535469 nova_compute[254092]: 2025-11-25 16:43:59.839 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:43:59 np0005535469 systemd[1]: Started libpod-conmon-dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b.scope.
Nov 25 11:43:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:43:59 np0005535469 podman[334302]: 2025-11-25 16:43:59.994571601 +0000 UTC m=+0.288071196 container init dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:44:00 np0005535469 podman[334302]: 2025-11-25 16:44:00.001803357 +0000 UTC m=+0.295302932 container start dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:44:00 np0005535469 wonderful_lamport[334318]: 167 167
Nov 25 11:44:00 np0005535469 systemd[1]: libpod-dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b.scope: Deactivated successfully.
Nov 25 11:44:00 np0005535469 podman[334302]: 2025-11-25 16:44:00.088295854 +0000 UTC m=+0.381795449 container attach dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:44:00 np0005535469 podman[334302]: 2025-11-25 16:44:00.088723795 +0000 UTC m=+0.382223370 container died dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:44:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b53dc4531679885d5a0b623aec9717b0a510fe1c545d5a5468fd5fe13eb3fbf1-merged.mount: Deactivated successfully.
Nov 25 11:44:00 np0005535469 nova_compute[254092]: 2025-11-25 16:44:00.468 254096 DEBUG oslo_concurrency.lockutils [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:44:00 np0005535469 nova_compute[254092]: 2025-11-25 16:44:00.469 254096 DEBUG oslo_concurrency.lockutils [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:44:00 np0005535469 nova_compute[254092]: 2025-11-25 16:44:00.469 254096 DEBUG nova.compute.manager [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:44:00 np0005535469 nova_compute[254092]: 2025-11-25 16:44:00.474 254096 DEBUG nova.compute.manager [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 11:44:00 np0005535469 nova_compute[254092]: 2025-11-25 16:44:00.475 254096 DEBUG nova.objects.instance [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:44:00 np0005535469 nova_compute[254092]: 2025-11-25 16:44:00.493 254096 DEBUG nova.virt.libvirt.driver [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:44:00 np0005535469 podman[334302]: 2025-11-25 16:44:00.503678233 +0000 UTC m=+0.797177818 container remove dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:44:00 np0005535469 systemd[1]: libpod-conmon-dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b.scope: Deactivated successfully.
Nov 25 11:44:00 np0005535469 podman[334343]: 2025-11-25 16:44:00.680629823 +0000 UTC m=+0.025297857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:44:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:44:00 np0005535469 podman[334343]: 2025-11-25 16:44:00.953431965 +0000 UTC m=+0.298099959 container create b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:44:01 np0005535469 systemd[1]: Started libpod-conmon-b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2.scope.
Nov 25 11:44:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:44:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:44:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:44:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:44:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:44:01 np0005535469 podman[334343]: 2025-11-25 16:44:01.132671097 +0000 UTC m=+0.477339111 container init b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:44:01 np0005535469 podman[334343]: 2025-11-25 16:44:01.140020786 +0000 UTC m=+0.484688780 container start b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 11:44:01 np0005535469 podman[334343]: 2025-11-25 16:44:01.185033068 +0000 UTC m=+0.529701082 container attach b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:44:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 171 op/s
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]: {
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "osd_id": 1,
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "type": "bluestore"
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:    },
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "osd_id": 2,
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "type": "bluestore"
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:    },
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "osd_id": 0,
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:        "type": "bluestore"
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]:    }
Nov 25 11:44:02 np0005535469 heuristic_matsumoto[334360]: }
Nov 25 11:46:22 np0005535469 podman[342042]: 2025-11-25 16:46:22.815594766 +0000 UTC m=+0.060694260 container create 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:46:22 np0005535469 systemd[1]: Started libpod-conmon-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815.scope.
Nov 25 11:46:22 np0005535469 podman[342042]: 2025-11-25 16:46:22.783900605 +0000 UTC m=+0.029000109 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:46:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:46:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1774b9fa250e4e770fd47cef979c1f2ebcd875e71157ccd0e21761837005a179/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:46:22 np0005535469 podman[342042]: 2025-11-25 16:46:22.919331465 +0000 UTC m=+0.164430979 container init 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:46:22 np0005535469 podman[342042]: 2025-11-25 16:46:22.927574939 +0000 UTC m=+0.172674423 container start 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:46:22 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : New worker (342063) forked
Nov 25 11:46:22 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : Loading success.
Nov 25 11:46:23 np0005535469 rsyslogd[1006]: imjournal: 5793 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 349 KiB/s wr, 53 op/s
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.810 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 WARNING nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Processing event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.814 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] No waiting events found dispatching network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.814 254096 WARNING nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received unexpected event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.814 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.819 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089183.8196986, 8b20d119-17cb-4742-9223-90e5020f93a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.820 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.823 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.826 254096 INFO nova.virt.libvirt.driver [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance spawned successfully.#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.826 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.839 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.846 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.850 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.851 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.851 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.852 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.852 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.853 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.878 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.906 254096 INFO nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 13.76 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.906 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.968 254096 INFO nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 14.62 seconds to build instance.#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.986 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089168.9848976, 6b676874-6857-4021-9d83-c3673f57cebb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:23 np0005535469 nova_compute[254092]: 2025-11-25 16:46:23.986 254096 INFO nova.compute.manager [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:46:24 np0005535469 nova_compute[254092]: 2025-11-25 16:46:24.018 254096 DEBUG nova.compute.manager [None req-a40a15e2-1c7a-4689-8da1-09eb58de538a - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:24 np0005535469 nova_compute[254092]: 2025-11-25 16:46:24.099 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:24.502 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:25 np0005535469 nova_compute[254092]: 2025-11-25 16:46:25.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 350 KiB/s wr, 124 op/s
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.142612) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186142708, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 654, "num_deletes": 257, "total_data_size": 678734, "memory_usage": 690824, "flush_reason": "Manual Compaction"}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186150338, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 671929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37738, "largest_seqno": 38391, "table_properties": {"data_size": 668513, "index_size": 1260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8073, "raw_average_key_size": 18, "raw_value_size": 661502, "raw_average_value_size": 1556, "num_data_blocks": 55, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089142, "oldest_key_time": 1764089142, "file_creation_time": 1764089186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7735 microseconds, and 2541 cpu microseconds.
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.150378) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 671929 bytes OK
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.150395) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.161695) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.161737) EVENT_LOG_v1 {"time_micros": 1764089186161728, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.161760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 675195, prev total WAL file size 675645, number of live WAL files 2.
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.162286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(656KB)], [83(8135KB)]
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186162351, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 9002915, "oldest_snapshot_seqno": -1}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6160 keys, 8881270 bytes, temperature: kUnknown
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186229750, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8881270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8840074, "index_size": 24702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 157578, "raw_average_key_size": 25, "raw_value_size": 8729546, "raw_average_value_size": 1417, "num_data_blocks": 996, "num_entries": 6160, "num_filter_entries": 6160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.230017) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8881270 bytes
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.233563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 131.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(26.6) write-amplify(13.2) OK, records in: 6686, records dropped: 526 output_compression: NoCompression
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.233588) EVENT_LOG_v1 {"time_micros": 1764089186233578, "job": 48, "event": "compaction_finished", "compaction_time_micros": 67498, "compaction_time_cpu_micros": 20585, "output_level": 6, "num_output_files": 1, "total_output_size": 8881270, "num_input_records": 6686, "num_output_records": 6160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186233867, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186235262, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.162159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:46:26 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.387 254096 DEBUG nova.objects.instance [None req-92cefdaf-c9ce-4055-8bc3-3bc13c332018 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.420 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089186.4197242, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.421 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.442 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.448 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 11:46:26 np0005535469 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 11:46:26 np0005535469 NetworkManager[48891]: <info>  [1764089186.9663] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:46:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:26Z|00808|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 11:46:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:26Z|00809|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 11:46:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:26Z|00810|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.975 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.976 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis#033[00m
Nov 25 11:46:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.977 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:46:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bcb617-2363-4e3d-bfc3-ec64102c962d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.979 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore#033[00m
Nov 25 11:46:26 np0005535469 nova_compute[254092]: 2025-11-25 16:46:26.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:27 np0005535469 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 11:46:27 np0005535469 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000048.scope: Consumed 6.224s CPU time.
Nov 25 11:46:27 np0005535469 systemd-machined[216343]: Machine qemu-102-instance-00000048 terminated.
Nov 25 11:46:27 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [NOTICE]   (341841) : haproxy version is 2.8.14-c23fe91
Nov 25 11:46:27 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [NOTICE]   (341841) : path to executable is /usr/sbin/haproxy
Nov 25 11:46:27 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [WARNING]  (341841) : Exiting Master process...
Nov 25 11:46:27 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [WARNING]  (341841) : Exiting Master process...
Nov 25 11:46:27 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [ALERT]    (341841) : Current worker (341843) exited with code 143 (Terminated)
Nov 25 11:46:27 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [WARNING]  (341841) : All workers exited. Exiting... (0)
Nov 25 11:46:27 np0005535469 systemd[1]: libpod-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab.scope: Deactivated successfully.
Nov 25 11:46:27 np0005535469 podman[342099]: 2025-11-25 16:46:27.124004028 +0000 UTC m=+0.046401462 container died ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.157 254096 DEBUG nova.compute.manager [None req-92cefdaf-c9ce-4055-8bc3-3bc13c332018 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab-userdata-shm.mount: Deactivated successfully.
Nov 25 11:46:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-88d10bf89bd6257734a4b32f3f12e85a3c8ca3a700feb7a137e3b1e15f2c8e91-merged.mount: Deactivated successfully.
Nov 25 11:46:27 np0005535469 podman[342099]: 2025-11-25 16:46:27.183216077 +0000 UTC m=+0.105613511 container cleanup ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:46:27 np0005535469 systemd[1]: libpod-conmon-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab.scope: Deactivated successfully.
Nov 25 11:46:27 np0005535469 podman[342139]: 2025-11-25 16:46:27.247766871 +0000 UTC m=+0.041167370 container remove ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.256 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9e32c2-4ab1-4fb4-882c-f98e1f9b0f3a]: (4, ('Tue Nov 25 04:46:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab)\ned28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab\nTue Nov 25 04:46:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab)\ned28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[865efda7-ed9f-4697-8dda-a1488fd2ccd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.258 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:27 np0005535469 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.285 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b27732e-b85c-4bea-ae99-b008f9aed439]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.304 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a79bb9d-1ea3-4926-ab77-052af997d160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.306 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ae47b-85b7-41d5-822d-b528e3dded23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.322 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c979bd2c-ff9a-4d17-8ec0-b9b120300070]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563807, 'reachable_time': 15350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342157, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.326 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:46:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.327 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6395d366-6365-405b-93cf-ec98906bd8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:27 np0005535469 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.384 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.385 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.402 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.505 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.505 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.510 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.511 254096 INFO nova.compute.claims [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:46:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 249 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 166 op/s
Nov 25 11:46:27 np0005535469 nova_compute[254092]: 2025-11-25 16:46:27.898 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854033698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.393 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.400 254096 DEBUG nova.compute.provider_tree [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.415 254096 DEBUG nova.scheduler.client.report [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.437 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.437 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.473 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.474 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.489 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.504 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.572 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.574 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.574 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Creating image(s)#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.606 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.639 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.670 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.676 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.757 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.763 254096 DEBUG nova.policy [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bd4800c25cd462b9365649e599d0a0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.768 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:28 np0005535469 nova_compute[254092]: 2025-11-25 16:46:28.769 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.366 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.388 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.391 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.427 254096 DEBUG nova.compute.manager [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG oslo_concurrency.lockutils [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG oslo_concurrency.lockutils [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG oslo_concurrency.lockutils [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG nova.compute.manager [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 WARNING nova.compute.manager [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.466 254096 INFO nova.compute.manager [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Resuming#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.467 254096 DEBUG nova.objects.instance [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.503 254096 DEBUG oslo_concurrency.lockutils [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.504 254096 DEBUG oslo_concurrency.lockutils [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:46:29 np0005535469 nova_compute[254092]: 2025-11-25 16:46:29.504 254096 DEBUG nova.network.neutron [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:46:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 249 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 166 op/s
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.326 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.381 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] resizing rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.512 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Successfully created port: 522caf03-4901-44aa-ba29-8d9b37be1158 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.714 254096 DEBUG nova.objects.instance [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'migration_context' on Instance uuid 7bf5d985-1a7f-41b6-8002-b801999e99f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.731 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.731 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Ensure instance console log exists: /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.732 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.732 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:30 np0005535469 nova_compute[254092]: 2025-11-25 16:46:30.732 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.458 254096 DEBUG nova.network.neutron [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.472 254096 DEBUG oslo_concurrency.lockutils [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.478 254096 DEBUG nova.virt.libvirt.vif [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.479 254096 DEBUG nova.network.os_vif_util [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.479 254096 DEBUG nova.network.os_vif_util [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.480 254096 DEBUG os_vif [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.485 254096 INFO os_vif [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.504 254096 DEBUG nova.objects.instance [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:31 np0005535469 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 11:46:31 np0005535469 NetworkManager[48891]: <info>  [1764089191.5687] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Nov 25 11:46:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:31Z|00811|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 11:46:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:31Z|00812|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.639 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:31Z|00813|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.640 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.641 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525#033[00m
Nov 25 11:46:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:31Z|00814|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 systemd-udevd[342359]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:46:31 np0005535469 systemd-machined[216343]: New machine qemu-104-instance-00000048.
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.657 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[568f1961-0ddf-40bb-8d8b-f2b665bc2ddd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.659 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:46:31 np0005535469 NetworkManager[48891]: <info>  [1764089191.6603] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:46:31 np0005535469 NetworkManager[48891]: <info>  [1764089191.6612] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.661 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:46:31 np0005535469 systemd[1]: Started Virtual Machine qemu-104-instance-00000048.
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.662 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fd37c1-34f0-4490-8857-5aded4859058]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.670 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[958b9ef4-4b61-4c7c-8503-25267a02cf40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.681 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[954432f1-289a-46be-9c01-c1ec336d2cec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1cf3ae-c724-4837-835d-d558b15c586b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.722 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[934d6a2f-e41a-463e-b8c7-7466c08a6ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 289 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 196 op/s
Nov 25 11:46:31 np0005535469 NetworkManager[48891]: <info>  [1764089191.7434] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[858e044f-7795-4713-b88f-66aebef74622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.779 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5982a5a6-d936-4512-9f17-a63399bca561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.782 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4e04b53a-2fa3-4c0b-9770-efe793ce240d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.798 254096 DEBUG nova.compute.manager [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.798 254096 DEBUG oslo_concurrency.lockutils [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 DEBUG oslo_concurrency.lockutils [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 DEBUG oslo_concurrency.lockutils [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 DEBUG nova.compute.manager [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 WARNING nova.compute.manager [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 11:46:31 np0005535469 NetworkManager[48891]: <info>  [1764089191.8043] device (tap50ea1716-90): carrier: link connected
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.811 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[be004aca-4b85-4074-80a6-f4da78ab514d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9dcd2f-b2b3-4a96-8e91-fb4903bae458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564939, 'reachable_time': 31744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342393, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.845 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e2cb6c-b3ae-4a0f-bfe2-6dc37d340b39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564939, 'tstamp': 564939}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342394, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.863 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2155ced8-7807-404d-929c-d90de76e2477]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564939, 'reachable_time': 31744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342395, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c77ceec7-2208-406d-be78-6bd12908ca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.960 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c3d218-289a-422e-b8f4-bfea4df48a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.962 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.962 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.963 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:31 np0005535469 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 11:46:31 np0005535469 NetworkManager[48891]: <info>  [1764089191.9668] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.970 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:31Z|00815|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.975 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.979 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a716e08-0e5c-4f63-bb79-f26148dd3665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.980 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:46:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.982 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:46:31 np0005535469 nova_compute[254092]: 2025-11-25 16:46:31.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:32 np0005535469 podman[342445]: 2025-11-25 16:46:32.345046201 +0000 UTC m=+0.023850289 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.477 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Successfully updated port: 522caf03-4901-44aa-ba29-8d9b37be1158 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.497 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.497 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquired lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.497 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:46:32 np0005535469 podman[342445]: 2025-11-25 16:46:32.511963757 +0000 UTC m=+0.190767815 container create 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:46:32 np0005535469 systemd[1]: Started libpod-conmon-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff.scope.
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.585 254096 DEBUG nova.compute.manager [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.585 254096 DEBUG nova.compute.manager [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing instance network info cache due to event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.585 254096 DEBUG oslo_concurrency.lockutils [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:46:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdd06ef15be99e9f8eefd5df555fdd7f637aaafdee50048abe0ba92363351e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:46:32 np0005535469 podman[342445]: 2025-11-25 16:46:32.613237939 +0000 UTC m=+0.292042017 container init 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:46:32 np0005535469 podman[342445]: 2025-11-25 16:46:32.621297078 +0000 UTC m=+0.300101136 container start 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:46:32 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : New worker (342490) forked
Nov 25 11:46:32 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : Loading success.
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.658 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.659 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.678 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.682 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.708 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.709 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089192.7084296, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.709 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.729 254096 DEBUG nova.compute.manager [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.729 254096 DEBUG nova.objects.instance [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.734 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.737 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.753 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance running successfully.#033[00m
Nov 25 11:46:32 np0005535469 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.758 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089192.713489, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.761 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.761 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.763 254096 DEBUG nova.virt.libvirt.guest [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.763 254096 DEBUG nova.compute.manager [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.774 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.774 254096 INFO nova.compute.claims [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.778 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.783 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.812 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 11:46:32 np0005535469 nova_compute[254092]: 2025-11-25 16:46:32.978 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:46:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587360265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.440 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.448 254096 DEBUG nova.compute.provider_tree [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.475 254096 DEBUG nova.scheduler.client.report [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.496 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.497 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.542 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.543 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.560 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.575 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.602 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.620 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Releasing lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.621 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance network_info: |[{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.621 254096 DEBUG oslo_concurrency.lockutils [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.621 254096 DEBUG nova.network.neutron [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.624 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start _get_guest_xml network_info=[{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.628 254096 WARNING nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.636 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.636 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.644 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.645 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.645 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.645 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.650 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.695 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.697 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.697 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Creating image(s)
Nov 25 11:46:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 289 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 168 op/s
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.729 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.758 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.786 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.790 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.865 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.866 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.866 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.867 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.887 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:46:33 np0005535469 nova_compute[254092]: 2025-11-25 16:46:33.890 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 076182c5-e049-4b78-b5a0-64489e37776b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.003 254096 DEBUG nova.policy [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:46:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:46:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625772133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.118 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.158 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.164 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.234 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 076182c5-e049-4b78-b5a0-64489e37776b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.286 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.368 254096 DEBUG nova.objects.instance [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 076182c5-e049-4b78-b5a0-64489e37776b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.380 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.381 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Ensure instance console log exists: /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.381 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.382 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.382 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:46:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972452691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.634 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.636 254096 DEBUG nova.virt.libvirt.vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1585503050',display_name='tempest-ServerActionsTestOtherA-server-1585503050',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1585503050',id=84,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2d3os2i1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:28Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=7bf5d985-1a7f-41b6-8002-b801999e99f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.636 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.638 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.640 254096 DEBUG nova.objects.instance [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bf5d985-1a7f-41b6-8002-b801999e99f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.656 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <uuid>7bf5d985-1a7f-41b6-8002-b801999e99f5</uuid>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <name>instance-00000054</name>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestOtherA-server-1585503050</nova:name>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:46:33</nova:creationTime>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:user uuid="7bd4800c25cd462b9365649e599d0a0e">tempest-ServerActionsTestOtherA-878981139-project-member</nova:user>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:project uuid="d4964e211a6d4699ab499f7cadee8a8d">tempest-ServerActionsTestOtherA-878981139</nova:project>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <nova:port uuid="522caf03-4901-44aa-ba29-8d9b37be1158">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <entry name="serial">7bf5d985-1a7f-41b6-8002-b801999e99f5</entry>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <entry name="uuid">7bf5d985-1a7f-41b6-8002-b801999e99f5</entry>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7bf5d985-1a7f-41b6-8002-b801999e99f5_disk">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fd:24:70"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <target dev="tap522caf03-49"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/console.log" append="off"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:46:34 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:46:34 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:46:34 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:46:34 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.659 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Preparing to wait for external event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.659 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.660 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.660 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.662 254096 DEBUG nova.virt.libvirt.vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1585503050',display_name='tempest-ServerActionsTestOtherA-server-1585503050',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1585503050',id=84,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2d3os2i1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:28Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=7bf5d985-1a7f-41b6-8002-b801999e99f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.663 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.664 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.665 254096 DEBUG os_vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.668 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.669 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.677 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap522caf03-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.679 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap522caf03-49, col_values=(('external_ids', {'iface-id': '522caf03-4901-44aa-ba29-8d9b37be1158', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:24:70', 'vm-uuid': '7bf5d985-1a7f-41b6-8002-b801999e99f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:34 np0005535469 NetworkManager[48891]: <info>  [1764089194.6832] manager: (tap522caf03-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.692 254096 INFO os_vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49')#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.754 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.755 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.755 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No VIF found with MAC fa:16:3e:fd:24:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.756 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Using config drive#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.777 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.827 254096 DEBUG nova.network.neutron [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updated VIF entry in instance network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.828 254096 DEBUG nova.network.neutron [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:34 np0005535469 nova_compute[254092]: 2025-11-25 16:46:34.843 254096 DEBUG oslo_concurrency.lockutils [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.027 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Successfully created port: 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.280 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Creating config drive at /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.286 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwes829p_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.425 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwes829p_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.451 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.455 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.504 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.505 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.505 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.505 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.506 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.508 254096 INFO nova.compute.manager [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Terminating instance#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.509 254096 DEBUG nova.compute.manager [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:46:35 np0005535469 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 11:46:35 np0005535469 NetworkManager[48891]: <info>  [1764089195.5565] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00816|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00817|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00818|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.577 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.582 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1dbd85d-1686-4de3-926a-9c3f49b6b0d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.585 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 11:46:35 np0005535469 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000048.scope: Consumed 3.519s CPU time.
Nov 25 11:46:35 np0005535469 systemd-machined[216343]: Machine qemu-104-instance-00000048 terminated.
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.649 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.650 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deleting local config drive /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config because it was imported into RBD.#033[00m
Nov 25 11:46:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 327 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 25 11:46:35 np0005535469 systemd-udevd[342815]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:46:35 np0005535469 kernel: tap522caf03-49: entered promiscuous mode
Nov 25 11:46:35 np0005535469 NetworkManager[48891]: <info>  [1764089195.7350] manager: (tap522caf03-49): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00819|binding|INFO|Claiming lport 522caf03-4901-44aa-ba29-8d9b37be1158 for this chassis.
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00820|binding|INFO|522caf03-4901-44aa-ba29-8d9b37be1158: Claiming fa:16:3e:fd:24:70 10.100.0.3
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.746 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:24:70 10.100.0.3'], port_security=['fa:16:3e:fd:24:70 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7bf5d985-1a7f-41b6-8002-b801999e99f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '168dc57b-72e8-4bf9-9fa6-0f910875b8fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=522caf03-4901-44aa-ba29-8d9b37be1158) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:35 np0005535469 NetworkManager[48891]: <info>  [1764089195.7562] device (tap522caf03-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:46:35 np0005535469 NetworkManager[48891]: <info>  [1764089195.7577] device (tap522caf03-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:46:35 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : haproxy version is 2.8.14-c23fe91
Nov 25 11:46:35 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : path to executable is /usr/sbin/haproxy
Nov 25 11:46:35 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [WARNING]  (342488) : Exiting Master process...
Nov 25 11:46:35 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [ALERT]    (342488) : Current worker (342490) exited with code 143 (Terminated)
Nov 25 11:46:35 np0005535469 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [WARNING]  (342488) : All workers exited. Exiting... (0)
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00821|binding|INFO|Setting lport 522caf03-4901-44aa-ba29-8d9b37be1158 ovn-installed in OVS
Nov 25 11:46:35 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:35Z|00822|binding|INFO|Setting lport 522caf03-4901-44aa-ba29-8d9b37be1158 up in Southbound
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 systemd[1]: libpod-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff.scope: Deactivated successfully.
Nov 25 11:46:35 np0005535469 podman[342838]: 2025-11-25 16:46:35.774285221 +0000 UTC m=+0.065220854 container died 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.792 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.793 254096 DEBUG nova.objects.instance [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:35 np0005535469 systemd-machined[216343]: New machine qemu-105-instance-00000054.
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.804 254096 DEBUG nova.virt.libvirt.vif [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.804 254096 DEBUG nova.network.os_vif_util [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.805 254096 DEBUG nova.network.os_vif_util [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.806 254096 DEBUG os_vif [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.808 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.808 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:35 np0005535469 systemd[1]: Started Virtual Machine qemu-105-instance-00000054.
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.814 254096 INFO os_vif [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')#033[00m
Nov 25 11:46:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff-userdata-shm.mount: Deactivated successfully.
Nov 25 11:46:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3cdd06ef15be99e9f8eefd5df555fdd7f637aaafdee50048abe0ba92363351e7-merged.mount: Deactivated successfully.
Nov 25 11:46:35 np0005535469 podman[342838]: 2025-11-25 16:46:35.841726054 +0000 UTC m=+0.132661687 container cleanup 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:46:35 np0005535469 systemd[1]: libpod-conmon-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff.scope: Deactivated successfully.
Nov 25 11:46:35 np0005535469 podman[342904]: 2025-11-25 16:46:35.954548089 +0000 UTC m=+0.084704702 container remove 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.961 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b330f57-a06d-4e09-b4ce-4783bc2021a5]: (4, ('Tue Nov 25 04:46:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff)\n0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff\nTue Nov 25 04:46:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff)\n0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.963 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[380846b1-f40c-48cd-8786-3a008c0117c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.965 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 11:46:35 np0005535469 nova_compute[254092]: 2025-11-25 16:46:35.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9f52ef-aa1e-4298-bfee-bc8ea25f8f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6c95f51c-3b5f-4a69-bb00-3dbb4a37f5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.005 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec5ad17-5abb-45ef-a81e-fd2af2f49315]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45433fe3-d820-421e-95ef-1a84c41e567f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564930, 'reachable_time': 32712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342927, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.027 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.027 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[73391ff5-d6dc-440d-b390-9aba910c3221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.028 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 522caf03-4901-44aa-ba29-8d9b37be1158 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.029 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.055 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c110132-193c-4f3a-a26e-5d1a0a53b6cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.094 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24f3ef63-7c60-453d-b325-0ebbb0ccba3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.098 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[66eb7d95-fa69-4212-b5f8-712a78c431ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.129 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd715b8-d575-4ba0-a433-ea6f93e37ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9694f9f8-eb70-4fb6-b02d-8dd68434a712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342934, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.165 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[768a9934-d47b-48f4-aca4-b2b8ec2b0781]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342935, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342935, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.168 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.168 254096 DEBUG nova.compute.manager [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.169 254096 DEBUG oslo_concurrency.lockutils [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.171 254096 DEBUG oslo_concurrency.lockutils [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.171 254096 DEBUG oslo_concurrency.lockutils [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.171 254096 DEBUG nova.compute.manager [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Processing event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.172 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.172 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.172 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.173 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.173 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.229 254096 DEBUG nova.compute.manager [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.229 254096 DEBUG oslo_concurrency.lockutils [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.229 254096 DEBUG oslo_concurrency.lockutils [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.230 254096 DEBUG oslo_concurrency.lockutils [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.230 254096 DEBUG nova.compute.manager [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.230 254096 WARNING nova.compute.manager [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.289 254096 INFO nova.virt.libvirt.driver [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deleting instance files /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320_del#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.291 254096 INFO nova.virt.libvirt.driver [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deletion of /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320_del complete#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.340 254096 INFO nova.compute.manager [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.341 254096 DEBUG oslo.service.loopingcall [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.341 254096 DEBUG nova.compute.manager [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.341 254096 DEBUG nova.network.neutron [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.370 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089196.3703105, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.371 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Started (Lifecycle Event)#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.373 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.376 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.379 254096 INFO nova.virt.libvirt.driver [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance spawned successfully.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.379 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.394 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.401 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.404 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.404 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.405 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.405 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.406 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.406 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.430 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.431 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089196.3704488, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.431 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.457 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.461 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089196.3756955, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.462 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.466 254096 INFO nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 7.89 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.466 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.475 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.511 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.546 254096 INFO nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 9.08 seconds to build instance.#033[00m
Nov 25 11:46:36 np0005535469 nova_compute[254092]: 2025-11-25 16:46:36.558 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:36Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:0f:a2 10.100.0.7
Nov 25 11:46:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:36Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:0f:a2 10.100.0.7
Nov 25 11:46:37 np0005535469 nova_compute[254092]: 2025-11-25 16:46:37.126 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Successfully updated port: 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:46:37 np0005535469 nova_compute[254092]: 2025-11-25 16:46:37.142 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:37 np0005535469 nova_compute[254092]: 2025-11-25 16:46:37.142 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:46:37 np0005535469 nova_compute[254092]: 2025-11-25 16:46:37.142 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:46:37 np0005535469 nova_compute[254092]: 2025-11-25 16:46:37.445 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:46:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 344 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 136 op/s
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.800 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.800 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] No waiting events found dispatching network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 WARNING nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received unexpected event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.802 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-changed-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.802 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Refreshing instance network info cache due to event network-changed-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.802 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.871 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.872 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.872 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 WARNING nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.874 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.874 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.874 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:38 np0005535469 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 WARNING nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.068 254096 DEBUG nova.network.neutron [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.090 254096 INFO nova.compute.manager [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 2.75 seconds to deallocate network for instance.#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.131 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.131 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.234 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updating instance_info_cache with network_info: [{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.248 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.248 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance network_info: |[{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.249 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.249 254096 DEBUG nova.network.neutron [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Refreshing network info cache for port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.255 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start _get_guest_xml network_info=[{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.264 254096 WARNING nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.273 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.274 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.276 254096 DEBUG oslo_concurrency.processutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.316 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.317 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.318 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.319 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.320 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.320 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.321 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.321 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.321 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.322 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.322 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.323 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.323 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.324 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.330 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246613540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 344 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.740 254096 DEBUG oslo_concurrency.processutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.746 254096 DEBUG nova.compute.provider_tree [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:46:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614024467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.762 254096 DEBUG nova.scheduler.client.report [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.773 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.796 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.800 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.833 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.890 254096 INFO nova.scheduler.client.report [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Deleted allocations for instance 01f96314-1fbe-4eee-a4ed-db7f448a5320#033[00m
Nov 25 11:46:39 np0005535469 nova_compute[254092]: 2025-11-25 16:46:39.939 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:46:40
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.mgr', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta']
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:46:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:46:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520203387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.242 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.243 254096 DEBUG nova.virt.libvirt.vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-765323280',display_name='tempest-ServersTestJSON-server-765323280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-765323280',id=85,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-s5gihthd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:33Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=076182c5-e049-4b78-b5a0-64489e37776b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.244 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.244 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.245 254096 DEBUG nova.objects.instance [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 076182c5-e049-4b78-b5a0-64489e37776b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.257 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <uuid>076182c5-e049-4b78-b5a0-64489e37776b</uuid>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <name>instance-00000055</name>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-765323280</nova:name>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:46:39</nova:creationTime>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <nova:port uuid="00bc7e19-5b5c-41aa-a9e9-4de7574a63eb">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <entry name="serial">076182c5-e049-4b78-b5a0-64489e37776b</entry>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <entry name="uuid">076182c5-e049-4b78-b5a0-64489e37776b</entry>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/076182c5-e049-4b78-b5a0-64489e37776b_disk">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/076182c5-e049-4b78-b5a0-64489e37776b_disk.config">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:35:b3:37"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <target dev="tap00bc7e19-5b"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/console.log" append="off"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:46:40 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:46:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:46:40 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:46:40 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Preparing to wait for external event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.259 254096 DEBUG nova.virt.libvirt.vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-765323280',display_name='tempest-ServersTestJSON-server-765323280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-765323280',id=85,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-s5gihthd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:33Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=076182c5-e049-4b78-b5a0-64489e37776b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.259 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.260 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.260 254096 DEBUG os_vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.261 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.261 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.264 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00bc7e19-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.265 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00bc7e19-5b, col_values=(('external_ids', {'iface-id': '00bc7e19-5b5c-41aa-a9e9-4de7574a63eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:b3:37', 'vm-uuid': '076182c5-e049-4b78-b5a0-64489e37776b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.266 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:40 np0005535469 NetworkManager[48891]: <info>  [1764089200.2672] manager: (tap00bc7e19-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.271 254096 INFO os_vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b')#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.316 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.316 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.316 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:35:b3:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.317 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Using config drive#033[00m
Nov 25 11:46:40 np0005535469 nova_compute[254092]: 2025-11-25 16:46:40.334 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:46:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.175 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Creating config drive at /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.180 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z5q14y3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.319 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z5q14y3" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.342 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.346 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config 076182c5-e049-4b78-b5a0-64489e37776b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.460 254096 DEBUG nova.compute.manager [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-deleted-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.461 254096 DEBUG nova.compute.manager [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.461 254096 DEBUG nova.compute.manager [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing instance network info cache due to event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.462 254096 DEBUG oslo_concurrency.lockutils [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.462 254096 DEBUG oslo_concurrency.lockutils [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.462 254096 DEBUG nova.network.neutron [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.500 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config 076182c5-e049-4b78-b5a0-64489e37776b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.500 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deleting local config drive /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.522 254096 DEBUG nova.network.neutron [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updated VIF entry in instance network info cache for port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.523 254096 DEBUG nova.network.neutron [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updating instance_info_cache with network_info: [{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:41 np0005535469 NetworkManager[48891]: <info>  [1764089201.5369] manager: (tap00bc7e19-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Nov 25 11:46:41 np0005535469 kernel: tap00bc7e19-5b: entered promiscuous mode
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.538 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00823|binding|INFO|Claiming lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb for this chassis.
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00824|binding|INFO|00bc7e19-5b5c-41aa-a9e9-4de7574a63eb: Claiming fa:16:3e:35:b3:37 10.100.0.13
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.547 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b3:37 10.100.0.13'], port_security=['fa:16:3e:35:b3:37 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '076182c5-e049-4b78-b5a0-64489e37776b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.548 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.550 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00825|binding|INFO|Setting lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb up in Southbound
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00826|binding|INFO|Setting lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb ovn-installed in OVS
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.564 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.566 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f29db172-a54c-4bf1-9c24-f3ce8958b9b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 systemd-machined[216343]: New machine qemu-106-instance-00000055.
Nov 25 11:46:41 np0005535469 systemd[1]: Started Virtual Machine qemu-106-instance-00000055.
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.593 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de71df63-33da-4f5b-9231-1f74c7fe5de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.600 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfa304f-c2e8-4c10-92ba-7deee08db6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 systemd-udevd[343139]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:46:41 np0005535469 NetworkManager[48891]: <info>  [1764089201.6157] device (tap00bc7e19-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:46:41 np0005535469 NetworkManager[48891]: <info>  [1764089201.6167] device (tap00bc7e19-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.627 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0251278b-e15a-48ab-8e34-4059912ddc2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.643 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1192ac8e-b76e-4170-95d9-d136036c5b44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343149, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.657 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdba07ce-da4a-4680-8e2a-fd6ed17e533a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343150, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343150, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.659 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.662 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.662 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.662 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.663 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.674 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.676 254096 INFO nova.compute.manager [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Terminating instance#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.677 254096 DEBUG nova.compute.manager [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:46:41 np0005535469 kernel: tap522caf03-49 (unregistering): left promiscuous mode
Nov 25 11:46:41 np0005535469 NetworkManager[48891]: <info>  [1764089201.7257] device (tap522caf03-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:46:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 293 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 228 op/s
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00827|binding|INFO|Releasing lport 522caf03-4901-44aa-ba29-8d9b37be1158 from this chassis (sb_readonly=0)
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00828|binding|INFO|Setting lport 522caf03-4901-44aa-ba29-8d9b37be1158 down in Southbound
Nov 25 11:46:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:41Z|00829|binding|INFO|Removing iface tap522caf03-49 ovn-installed in OVS
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.786 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:24:70 10.100.0.3'], port_security=['fa:16:3e:fd:24:70 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7bf5d985-1a7f-41b6-8002-b801999e99f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=522caf03-4901-44aa-ba29-8d9b37be1158) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.788 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 522caf03-4901-44aa-ba29-8d9b37be1158 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.790 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.805 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c606ce5e-8757-464a-a65a-9b5cd25f14c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 25 11:46:41 np0005535469 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000054.scope: Consumed 5.884s CPU time.
Nov 25 11:46:41 np0005535469 systemd-machined[216343]: Machine qemu-105-instance-00000054 terminated.
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.834 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccc4bb0-56ab-4586-bb38-73297933ee2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.839 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[189e38e5-e1d1-4756-bf46-3128ce3a5bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.867 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8332a522-6bcb-45ba-bce1-4d816d8f5130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 NetworkManager[48891]: <info>  [1764089201.8923] manager: (tap522caf03-49): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.892 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcb314d-d2db-48db-9708-8b0dc2c37857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343201, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.908 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f60afe-35fd-4cc9-a313-bae463f073b7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343210, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343210, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.909 254096 INFO nova.virt.libvirt.driver [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance destroyed successfully.#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.910 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.910 254096 DEBUG nova.objects.instance [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'resources' on Instance uuid 7bf5d985-1a7f-41b6-8002-b801999e99f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.921 254096 DEBUG nova.virt.libvirt.vif [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1585503050',display_name='tempest-ServerActionsTestOtherA-server-1585503050',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1585503050',id=84,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2d3os2i1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:36Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=7bf5d985-1a7f-41b6-8002-b801999e99f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.921 254096 DEBUG nova.network.os_vif_util [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.922 254096 DEBUG nova.network.os_vif_util [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.922 254096 DEBUG os_vif [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.924 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap522caf03-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.930 254096 INFO os_vif [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49')#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.964 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089201.9639485, 076182c5-e049-4b78-b5a0-64489e37776b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Started (Lifecycle Event)#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.982 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.986 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089201.9644976, 076182c5-e049-4b78-b5a0-64489e37776b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:41 np0005535469 nova_compute[254092]: 2025-11-25 16:46:41.986 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.000 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.004 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.019 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.302 254096 INFO nova.virt.libvirt.driver [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deleting instance files /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5_del#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.303 254096 INFO nova.virt.libvirt.driver [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deletion of /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5_del complete#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.344 254096 INFO nova.compute.manager [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.344 254096 DEBUG oslo.service.loopingcall [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.344 254096 DEBUG nova.compute.manager [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:46:42 np0005535469 nova_compute[254092]: 2025-11-25 16:46:42.345 254096 DEBUG nova.network.neutron [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.615 254096 DEBUG nova.compute.manager [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.616 254096 DEBUG oslo_concurrency.lockutils [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.617 254096 DEBUG oslo_concurrency.lockutils [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.617 254096 DEBUG oslo_concurrency.lockutils [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.617 254096 DEBUG nova.compute.manager [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Processing event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.618 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.622 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089203.6219947, 076182c5-e049-4b78-b5a0-64489e37776b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.622 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.627 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.631 254096 INFO nova.virt.libvirt.driver [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance spawned successfully.#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.631 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:46:43 np0005535469 podman[343238]: 2025-11-25 16:46:43.649367278 +0000 UTC m=+0.059213721 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.652 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.660 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:46:43 np0005535469 podman[343237]: 2025-11-25 16:46:43.661796365 +0000 UTC m=+0.072952644 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.664 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.664 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.665 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.665 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.666 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.666 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:46:43 np0005535469 podman[343239]: 2025-11-25 16:46:43.694692359 +0000 UTC m=+0.100274756 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.705 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:46:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 293 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 198 op/s
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.740 254096 INFO nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 10.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.741 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.824 254096 INFO nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 11.08 seconds to build instance.#033[00m
Nov 25 11:46:43 np0005535469 nova_compute[254092]: 2025-11-25 16:46:43.841 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.106 254096 DEBUG nova.network.neutron [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updated VIF entry in instance network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.108 254096 DEBUG nova.network.neutron [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.131 254096 DEBUG oslo_concurrency.lockutils [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.280 254096 DEBUG nova.network.neutron [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.309 254096 INFO nova.compute.manager [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 1.96 seconds to deallocate network for instance.#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.373 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.374 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.476 254096 DEBUG oslo_concurrency.processutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980439527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.951 254096 DEBUG oslo_concurrency.processutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.957 254096 DEBUG nova.compute.provider_tree [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.970 254096 DEBUG nova.scheduler.client.report [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:44 np0005535469 nova_compute[254092]: 2025-11-25 16:46:44.993 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.017 254096 INFO nova.scheduler.client.report [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Deleted allocations for instance 7bf5d985-1a7f-41b6-8002-b801999e99f5#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.068 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 256 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 266 op/s
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.743 254096 DEBUG nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.744 254096 DEBUG oslo_concurrency.lockutils [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.745 254096 DEBUG oslo_concurrency.lockutils [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.745 254096 DEBUG oslo_concurrency.lockutils [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.745 254096 DEBUG nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] No waiting events found dispatching network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.746 254096 WARNING nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received unexpected event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb for instance with vm_state active and task_state None.#033[00m
Nov 25 11:46:45 np0005535469 nova_compute[254092]: 2025-11-25 16:46:45.746 254096 DEBUG nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-vif-deleted-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.123 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.124 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.125 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.125 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.125 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.127 254096 INFO nova.compute.manager [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Terminating instance#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.128 254096 DEBUG nova.compute.manager [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:46:46 np0005535469 kernel: tap41fd5f5b-44 (unregistering): left promiscuous mode
Nov 25 11:46:46 np0005535469 NetworkManager[48891]: <info>  [1764089206.1715] device (tap41fd5f5b-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:46:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:46Z|00830|binding|INFO|Releasing lport 41fd5f5b-445b-4eed-adf5-045ddb262021 from this chassis (sb_readonly=0)
Nov 25 11:46:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:46Z|00831|binding|INFO|Setting lport 41fd5f5b-445b-4eed-adf5-045ddb262021 down in Southbound
Nov 25 11:46:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:46Z|00832|binding|INFO|Removing iface tap41fd5f5b-44 ovn-installed in OVS
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.193 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:dc:a2 10.100.0.6'], port_security=['fa:16:3e:a2:dc:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '98410ff5-26ab-4406-8d1b-063d9e114cf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c92cd9ca-5dd9-48df-bed9-cecbc09aacca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=41fd5f5b-445b-4eed-adf5-045ddb262021) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.194 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 41fd5f5b-445b-4eed-adf5-045ddb262021 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.196 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 290484fa-908f-44de-87e4-4f5bc85c5679, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.198 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13d7d6f1-4811-4d73-96ec-1d08d4483028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.198 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 namespace which is not needed anymore#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 25 11:46:46 np0005535469 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004e.scope: Consumed 17.199s CPU time.
Nov 25 11:46:46 np0005535469 systemd-machined[216343]: Machine qemu-93-instance-0000004e terminated.
Nov 25 11:46:46 np0005535469 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [NOTICE]   (335841) : haproxy version is 2.8.14-c23fe91
Nov 25 11:46:46 np0005535469 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [NOTICE]   (335841) : path to executable is /usr/sbin/haproxy
Nov 25 11:46:46 np0005535469 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [WARNING]  (335841) : Exiting Master process...
Nov 25 11:46:46 np0005535469 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [ALERT]    (335841) : Current worker (335843) exited with code 143 (Terminated)
Nov 25 11:46:46 np0005535469 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [WARNING]  (335841) : All workers exited. Exiting... (0)
Nov 25 11:46:46 np0005535469 systemd[1]: libpod-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db.scope: Deactivated successfully.
Nov 25 11:46:46 np0005535469 podman[343344]: 2025-11-25 16:46:46.368420978 +0000 UTC m=+0.053269629 container died 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.377 254096 INFO nova.virt.libvirt.driver [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Instance destroyed successfully.#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.377 254096 DEBUG nova.objects.instance [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'resources' on Instance uuid 98410ff5-26ab-4406-8d1b-063d9e114cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.390 254096 DEBUG nova.virt.libvirt.vif [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2135129959',display_name='tempest-ServerActionsTestOtherA-server-2135129959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2135129959',id=78,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0CfOXgpdL9TA9v80eVPgWMFMAd3kyDMITWZbq91VqT30SkdY0BSiRtiMf/N/PxHYN1QDKdbRV0yenlOn8E69+KpPA991BPfs7OG9A96fwH3GKazl2NNuFOCSFE4XMmXQ==',key_name='tempest-keypair-1463003804',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:44:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-z971v96r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=98410ff5-26ab-4406-8d1b-063d9e114cf8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.390 254096 DEBUG nova.network.os_vif_util [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.393 254096 DEBUG nova.network.os_vif_util [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.394 254096 DEBUG os_vif [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.397 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41fd5f5b-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.404 254096 INFO os_vif [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44')#033[00m
Nov 25 11:46:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db-userdata-shm.mount: Deactivated successfully.
Nov 25 11:46:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8a8c31be6c702279b234bc478f162b1997c18dd87616887343918d4c9ac2c2c7-merged.mount: Deactivated successfully.
Nov 25 11:46:46 np0005535469 podman[343344]: 2025-11-25 16:46:46.430907806 +0000 UTC m=+0.115756457 container cleanup 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:46:46 np0005535469 systemd[1]: libpod-conmon-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db.scope: Deactivated successfully.
Nov 25 11:46:46 np0005535469 podman[343396]: 2025-11-25 16:46:46.500026484 +0000 UTC m=+0.045104027 container remove 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f633cf-528e-4016-9c03-a6c5ea591aa6]: (4, ('Tue Nov 25 04:46:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 (78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db)\n78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db\nTue Nov 25 04:46:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 (78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db)\n78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.512 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be44eac9-1bd7-4586-9b87-f396febd439c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.513 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 kernel: tap290484fa-90: left promiscuous mode
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.574 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cff601ca-66d6-420d-8153-6e6178efd6d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.589 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edb46e56-f793-4b14-8bbe-56be2efc4b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ade95c7-72db-432f-8be3-edad491ede04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.611 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab258389-0773-447b-9ea1-654115d9ebbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554052, 'reachable_time': 34703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343413, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 systemd[1]: run-netns-ovnmeta\x2d290484fa\x2d908f\x2d44de\x2d87e4\x2d4f5bc85c5679.mount: Deactivated successfully.
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.613 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.614 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[af1fcfb4-4443-4da6-a99f-8fba94eb16a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.738 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.740 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.740 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.740 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.741 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.743 254096 INFO nova.compute.manager [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Terminating instance#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.744 254096 DEBUG nova.compute.manager [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:46:46 np0005535469 kernel: tap00bc7e19-5b (unregistering): left promiscuous mode
Nov 25 11:46:46 np0005535469 NetworkManager[48891]: <info>  [1764089206.7902] device (tap00bc7e19-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:46:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:46Z|00833|binding|INFO|Releasing lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb from this chassis (sb_readonly=0)
Nov 25 11:46:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:46Z|00834|binding|INFO|Setting lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb down in Southbound
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:46Z|00835|binding|INFO|Removing iface tap00bc7e19-5b ovn-installed in OVS
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.803 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b3:37 10.100.0.13'], port_security=['fa:16:3e:35:b3:37 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '076182c5-e049-4b78-b5a0-64489e37776b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.804 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.806 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc332c3-edb4-402b-8663-20721d66b367]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.838 254096 INFO nova.virt.libvirt.driver [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deleting instance files /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8_del#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.839 254096 INFO nova.virt.libvirt.driver [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deletion of /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8_del complete#033[00m
Nov 25 11:46:46 np0005535469 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 25 11:46:46 np0005535469 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Consumed 3.531s CPU time.
Nov 25 11:46:46 np0005535469 systemd-machined[216343]: Machine qemu-106-instance-00000055 terminated.
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.872 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4152dab6-8b6f-460a-b4ca-4bf0f484aecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.875 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6b4c9b-544f-4887-87e0-9694de42928e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.886 254096 INFO nova.compute.manager [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.887 254096 DEBUG oslo.service.loopingcall [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.887 254096 DEBUG nova.compute.manager [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.887 254096 DEBUG nova.network.neutron [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5e83322b-0cbe-489e-b5e8-15eb254066fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.935 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9473de71-1336-4c01-bfac-ecba3bcc858b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343424, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4309d918-0bac-4491-8446-6b0d54c408b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343425, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343425, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.965 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.982 254096 INFO nova.virt.libvirt.driver [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance destroyed successfully.#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.982 254096 DEBUG nova.objects.instance [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 076182c5-e049-4b78-b5a0-64489e37776b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.995 254096 DEBUG nova.virt.libvirt.vif [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-765323280',display_name='tempest-ServersTestJSON-server-765323280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-765323280',id=85,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-s5gihthd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:43Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=076182c5-e049-4b78-b5a0-64489e37776b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.995 254096 DEBUG nova.network.os_vif_util [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.996 254096 DEBUG nova.network.os_vif_util [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.996 254096 DEBUG os_vif [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:46 np0005535469 nova_compute[254092]: 2025-11-25 16:46:46.998 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bc7e19-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.006 254096 INFO os_vif [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b')#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.451 254096 INFO nova.virt.libvirt.driver [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deleting instance files /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b_del#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.453 254096 INFO nova.virt.libvirt.driver [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deletion of /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b_del complete#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.531 254096 INFO nova.compute.manager [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.532 254096 DEBUG oslo.service.loopingcall [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.533 254096 DEBUG nova.compute.manager [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:46:47 np0005535469 nova_compute[254092]: 2025-11-25 16:46:47.533 254096 DEBUG nova.network.neutron [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:46:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 246 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.4 MiB/s wr, 277 op/s
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.339 254096 DEBUG nova.network.neutron [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.373 254096 INFO nova.compute.manager [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Took 1.49 seconds to deallocate network for instance.#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.423 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.423 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.429 254096 DEBUG nova.compute.manager [req-672e817b-fd05-41ab-ae23-317556fdfa52 req-9d04f5e7-c2d6-48da-903a-efe0f5863b34 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received event network-vif-deleted-41fd5f5b-445b-4eed-adf5-045ddb262021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.511 254096 DEBUG oslo_concurrency.processutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914153769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.950 254096 DEBUG oslo_concurrency.processutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.956 254096 DEBUG nova.compute.provider_tree [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.972 254096 DEBUG nova.scheduler.client.report [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:48 np0005535469 nova_compute[254092]: 2025-11-25 16:46:48.993 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.023 254096 INFO nova.scheduler.client.report [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Deleted allocations for instance 98410ff5-26ab-4406-8d1b-063d9e114cf8#033[00m
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.087 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.627 254096 DEBUG nova.network.neutron [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.647 254096 INFO nova.compute.manager [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 2.11 seconds to deallocate network for instance.#033[00m
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 246 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Nov 25 11:46:49 np0005535469 nova_compute[254092]: 2025-11-25 16:46:49.757 254096 DEBUG oslo_concurrency.processutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426369930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.214 254096 DEBUG oslo_concurrency.processutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.221 254096 DEBUG nova.compute.provider_tree [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.235 254096 DEBUG nova.scheduler.client.report [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.255 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.309 254096 INFO nova.scheduler.client.report [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 076182c5-e049-4b78-b5a0-64489e37776b#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.372 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:50Z|00836|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.574 254096 DEBUG nova.compute.manager [req-efea3501-4b11-493b-9311-06cf736a6075 req-7b3e4679-8541-422c-aa23-9155cb7c15e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-vif-deleted-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:46:50Z|00837|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.785 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089195.7837284, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.786 254096 INFO nova.compute.manager [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:46:50 np0005535469 nova_compute[254092]: 2025-11-25 16:46:50.808 254096 DEBUG nova.compute.manager [None req-eb5e3e5f-3082-4750-bef3-558a8e564c2d - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018666416373640685 of space, bias 1.0, pg target 0.5599924912092206 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:46:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 313 op/s
Nov 25 11:46:52 np0005535469 nova_compute[254092]: 2025-11-25 16:46:52.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:53 np0005535469 nova_compute[254092]: 2025-11-25 16:46:53.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 154 op/s
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.265 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.265 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.283 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.344 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.345 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.351 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.351 254096 INFO nova.compute.claims [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.462 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081260075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.938 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.946 254096 DEBUG nova.compute.provider_tree [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.962 254096 DEBUG nova.scheduler.client.report [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.990 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:54 np0005535469 nova_compute[254092]: 2025-11-25 16:46:54.990 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.035 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.035 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.066 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.083 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.186 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.188 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.188 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Creating image(s)#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.207 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.229 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:46:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187319379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:46:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:46:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187319379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.261 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.266 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.346 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.347 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.348 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.348 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.375 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.379 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.677 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 154 op/s
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.744 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.853 254096 DEBUG nova.objects.instance [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.864 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.865 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Ensure instance console log exists: /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.865 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.866 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.866 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:55 np0005535469 nova_compute[254092]: 2025-11-25 16:46:55.956 254096 DEBUG nova.policy [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:46:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:46:56 np0005535469 nova_compute[254092]: 2025-11-25 16:46:56.909 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089201.906971, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:46:56 np0005535469 nova_compute[254092]: 2025-11-25 16:46:56.909 254096 INFO nova.compute.manager [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:46:56 np0005535469 nova_compute[254092]: 2025-11-25 16:46:56.928 254096 DEBUG nova.compute.manager [None req-0b4b3b4f-f469-4c72-a6ee-dd0a9cd180a8 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.351 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.351 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.371 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.440 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.441 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.452 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.452 254096 INFO nova.compute.claims [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.598 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Successfully created port: f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:46:57 np0005535469 nova_compute[254092]: 2025-11-25 16:46:57.617 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 131 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 512 KiB/s wr, 98 op/s
Nov 25 11:46:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:46:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614283455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.097 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.107 254096 DEBUG nova.compute.provider_tree [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.122 254096 DEBUG nova.scheduler.client.report [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.145 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.147 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.189 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.213 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.235 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.337 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.339 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.340 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Creating image(s)#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.372 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.408 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.436 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.441 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.526 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.527 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.528 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.528 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.552 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.557 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.631 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Successfully updated port: f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.648 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.649 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.649 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.819 254096 DEBUG nova.compute.manager [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-changed-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.820 254096 DEBUG nova.compute.manager [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Refreshing instance network info cache due to event network-changed-f0f27b65-aab4-4ab1-ade1-b58eb7124f88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.820 254096 DEBUG oslo_concurrency.lockutils [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.872 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.922 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] resizing rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:46:58 np0005535469 nova_compute[254092]: 2025-11-25 16:46:58.960 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.007 254096 DEBUG nova.objects.instance [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'migration_context' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.022 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.022 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Ensure instance console log exists: /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.023 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.023 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.024 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.025 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.030 254096 WARNING nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.034 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.034 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.037 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.037 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.037 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.042 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:46:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400692947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.461 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.479 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.483 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:46:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 131 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 511 KiB/s wr, 66 op/s
Nov 25 11:46:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:46:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006274789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.915 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.916 254096 DEBUG nova.objects.instance [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:46:59 np0005535469 nova_compute[254092]: 2025-11-25 16:46:59.956 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <uuid>b4cc1fd8-a1ed-40f8-8373-33b1d1260300</uuid>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <name>instance-00000057</name>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersAaction247Test-server-681512672</nova:name>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:46:59</nova:creationTime>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:user uuid="845a69c3091245f2a563f43567bf4a2f">tempest-ServersAaction247Test-680551157-project-member</nova:user>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <nova:project uuid="f23a436fcc3d46efba4e231d5103a5d5">tempest-ServersAaction247Test-680551157</nova:project>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <entry name="serial">b4cc1fd8-a1ed-40f8-8373-33b1d1260300</entry>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <entry name="uuid">b4cc1fd8-a1ed-40f8-8373-33b1d1260300</entry>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/console.log" append="off"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:46:59 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:46:59 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:46:59 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:46:59 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:47:00 np0005535469 nova_compute[254092]: 2025-11-25 16:47:00.004 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:00 np0005535469 nova_compute[254092]: 2025-11-25 16:47:00.005 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:00 np0005535469 nova_compute[254092]: 2025-11-25 16:47:00.005 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Using config drive#033[00m
Nov 25 11:47:00 np0005535469 nova_compute[254092]: 2025-11-25 16:47:00.025 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:00 np0005535469 nova_compute[254092]: 2025-11-25 16:47:00.986 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Creating config drive at /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config#033[00m
Nov 25 11:47:00 np0005535469 nova_compute[254092]: 2025-11-25 16:47:00.990 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpke9spmyg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.049 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updating instance_info_cache with network_info: [{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.076 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.076 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance network_info: |[{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.077 254096 DEBUG oslo_concurrency.lockutils [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.077 254096 DEBUG nova.network.neutron [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Refreshing network info cache for port f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.080 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start _get_guest_xml network_info=[{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.084 254096 WARNING nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.092 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.093 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.096 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.096 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.096 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.099 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.102 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.141 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpke9spmyg" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.165 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.169 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.339 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.340 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deleting local config drive /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config because it was imported into RBD.#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.375 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089206.3740659, 98410ff5-26ab-4406-8d1b-063d9e114cf8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.376 254096 INFO nova.compute.manager [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.392 254096 DEBUG nova.compute.manager [None req-0e7548c5-1325-441d-b384-034dabbf5c63 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:01 np0005535469 systemd-machined[216343]: New machine qemu-107-instance-00000057.
Nov 25 11:47:01 np0005535469 systemd[1]: Started Virtual Machine qemu-107-instance-00000057.
Nov 25 11:47:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544329129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.531 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.552 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.555 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 MiB/s wr, 108 op/s
Nov 25 11:47:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623298628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.962 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
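Annotator's note: the two records above show nova shelling out to `ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf` via oslo_concurrency.processutils, which logs the command, its return code, and the elapsed time ("returned: 0 in 0.406s"). A minimal sketch of that run-and-time pattern, plus pulling monitor addresses out of the mon dump JSON (the `mons`/`addr` field names follow Ceph's JSON output format; treat them as an assumption, not a guarantee for every Ceph release):

```python
import json
import shlex
import subprocess
import time


def timed_execute(cmd: str):
    """Run *cmd* and return (returncode, stdout, elapsed_seconds).

    Simplified stand-in for the oslo_concurrency.processutils pattern
    logged above ('CMD "..." returned: 0 in 0.429s').
    """
    start = time.monotonic()
    proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    return proc.returncode, proc.stdout, time.monotonic() - start


def mon_addresses(mon_dump_json: str):
    """Extract monitor addresses from `ceph mon dump --format=json` output
    (assumes the usual {"mons": [{"addr": ...}, ...]} layout)."""
    dump = json.loads(mon_dump_json)
    return [m["addr"] for m in dump.get("mons", [])]
```

On this host the real invocation would be `timed_execute("ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf")`; nova repeats it per spawn because the RBD backend resolves the monitor list when building the disk definition.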
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.964 254096 DEBUG nova.virt.libvirt.vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-262344867',display_name='tempest-ServersTestJSON-server-262344867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-262344867',id=86,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQJGP+WBaWzhABP2DN5HUriBkruJv6UMUXVsLc1/QAALeyb1ZBY8xYIWQD6vo5KB897SaecCjpvmyzj4CBX1phIjZTvTDr6O/Dm/ASyK8mTfIM6RZCRLmfnDkwJw8Gnzw==',key_name='tempest-key-735549316',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-6ok0603s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:55Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.964 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.965 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
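Annotator's note: the "Converting VIF {...} / Converted object VIFOpenVSwitch(...)" pair above is nova.network.os_vif_util turning the Neutron port-binding dict into an os-vif object. A hedged sketch of that field mapping, using a hypothetical dataclass in place of the real `os_vif.objects.vif.VIFOpenVSwitch` (illustrative only, not nova's actual code):

```python
from dataclasses import dataclass


@dataclass
class VIFOpenVSwitch:
    # Minimal stand-in for os_vif.objects.vif.VIFOpenVSwitch.
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool


def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
    """Pick out the fields os-vif needs from a Neutron-style vif dict,
    mirroring the conversion logged above (sketch, not the real mapper)."""
    details = vif.get("details", {})
    return VIFOpenVSwitch(
        id=vif["id"],
        address=vif["address"],
        # Binding details win over the network-level bridge name.
        bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
        vif_name=vif["devname"],
        has_traffic_filtering=details.get("port_filter", False),
        active=vif.get("active", False),
    )
```

Note how `port_filter: true` in the binding details becomes `has_traffic_filtering=True` in the converted object, which is why nova does not install its own firewall rules for this port (OVN handles filtering).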
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.967 254096 DEBUG nova.objects.instance [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.978 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089206.9777875, 076182c5-e049-4b78-b5a0-64489e37776b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.978 254096 INFO nova.compute.manager [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.990 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <uuid>d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</uuid>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <name>instance-00000056</name>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-262344867</nova:name>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:47:01</nova:creationTime>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <nova:port uuid="f0f27b65-aab4-4ab1-ade1-b58eb7124f88">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <entry name="serial">d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</entry>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <entry name="uuid">d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</entry>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:1f:ac:77"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <target dev="tapf0f27b65-aa"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/console.log" append="off"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:47:01 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:47:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:47:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:47:01 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
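Annotator's note: the `_get_guest_xml` dump above is the complete libvirt domain definition for instance-00000056 (RBD-backed root disk, RBD config-drive CD-ROM, one tap interface on br-int). When triaging logs like this it helps to reduce the XML to the identifiers that recur elsewhere in the journal. A sketch using the standard library, over a trimmed copy of the logged domain:

```python
import xml.etree.ElementTree as ET

# Trimmed-down copy of the domain XML logged above (root disk + interface only).
DOMAIN_XML = """
<domain type="kvm">
  <uuid>d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</uuid>
  <name>instance-00000056</name>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:1f:ac:77"/>
      <target dev="tapf0f27b65-aa"/>
    </interface>
  </devices>
</domain>
"""


def summarize_domain(xml_text: str) -> dict:
    """Collect the instance UUID, RBD image names, and tap device names."""
    root = ET.fromstring(xml_text)
    return {
        "uuid": root.findtext("uuid"),
        "disks": [d.get("name") for d in root.findall("./devices/disk/source")],
        "taps": [t.get("dev") for t in root.findall("./devices/interface/target")],
    }
```

The tap name (`tapf0f27b65-aa`) and RBD image name (`vms/<uuid>_disk`) extracted this way are exactly the strings to grep for in the NetworkManager, ovsdbapp, and ceph records that follow.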
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.996 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Preparing to wait for external event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.996 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.997 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.997 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.998 254096 DEBUG nova.virt.libvirt.vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-262344867',display_name='tempest-ServersTestJSON-server-262344867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-262344867',id=86,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQJGP+WBaWzhABP2DN5HUriBkruJv6UMUXVsLc1/QAALeyb1ZBY8xYIWQD6vo5KB897SaecCjpvmyzj4CBX1phIjZTvTDr6O/Dm/ASyK8mTfIM6RZCRLmfnDkwJw8Gnzw==',key_name='tempest-key-735549316',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-6ok0603s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:55Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.998 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.999 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:01 np0005535469 nova_compute[254092]: 2025-11-25 16:47:01.999 254096 DEBUG os_vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.001 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.001 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.002 254096 DEBUG nova.compute.manager [None req-00bcd083-1173-4d98-91c8-5756a6265137 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.005 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f27b65-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.005 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0f27b65-aa, col_values=(('external_ids', {'iface-id': 'f0f27b65-aab4-4ab1-ade1-b58eb7124f88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:ac:77', 'vm-uuid': 'd60cbcdd-ce8c-49cc-a4ef-f2a324be18bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:02 np0005535469 NetworkManager[48891]: <info>  [1764089222.0078] manager: (tapf0f27b65-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.012 254096 INFO os_vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa')#033[00m
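Annotator's note: the ovsdbapp transaction above (AddPortCommand with may_exist=True, then DbSetCommand writing `external_ids` on the Interface row) is what makes OVN bind the port — ovn-controller matches `iface-id` against the logical switch port UUID. The same transaction is roughly equivalent to two `ovs-vsctl` invocations; a sketch that builds those argv lists from the values in the log (command construction only, nothing is executed here):

```python
def plug_port_commands(bridge: str, port: str, iface_id: str,
                       mac: str, vm_uuid: str):
    """Build ovs-vsctl equivalents of the logged ovsdbapp transaction:
    an idempotent add-port plus external_ids on the Interface row."""
    add_port = ["ovs-vsctl", "--may-exist", "add-port", bridge, port]
    set_ids = [
        "ovs-vsctl", "set", "Interface", port,
        f"external_ids:iface-id={iface_id}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={mac}",
        f"external_ids:vm-uuid={vm_uuid}",
    ]
    return [add_port, set_ids]
```

Once ovn-controller sees the matching `iface-id`, Neutron sends the `network-vif-plugged-f0f27b65-...` event that the compute manager registered for a few records earlier.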
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.064 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.064 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.064 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:1f:ac:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.065 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Using config drive#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.086 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.272 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089222.2716823, b4cc1fd8-a1ed-40f8-8373-33b1d1260300 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.273 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.275 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.276 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.280 254096 INFO nova.virt.libvirt.driver [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance spawned successfully.#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.280 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.295 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.300 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.304 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.305 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.305 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.305 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.306 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.306 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.338 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.339 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089222.274979, b4cc1fd8-a1ed-40f8-8373-33b1d1260300 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.339 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.368 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.374 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.383 254096 INFO nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 4.05 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.383 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.438 254096 INFO nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 5.02 seconds to build instance.#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.453 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.777 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Creating config drive at /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.783 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2h2tgisx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.924 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2h2tgisx" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.964 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:02 np0005535469 nova_compute[254092]: 2025-11-25 16:47:02.968 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.118 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.119 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deleting local config drive /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config because it was imported into RBD.#033[00m
Nov 25 11:47:03 np0005535469 kernel: tapf0f27b65-aa: entered promiscuous mode
Nov 25 11:47:03 np0005535469 systemd-udevd[344136]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:47:03 np0005535469 NetworkManager[48891]: <info>  [1764089223.1946] manager: (tapf0f27b65-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 25 11:47:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:03Z|00838|binding|INFO|Claiming lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for this chassis.
Nov 25 11:47:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:03Z|00839|binding|INFO|f0f27b65-aab4-4ab1-ade1-b58eb7124f88: Claiming fa:16:3e:1f:ac:77 10.100.0.4
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:03 np0005535469 NetworkManager[48891]: <info>  [1764089223.2102] device (tapf0f27b65-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:47:03 np0005535469 NetworkManager[48891]: <info>  [1764089223.2134] device (tapf0f27b65-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.215 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ac:77 10.100.0.4'], port_security=['fa:16:3e:1f:ac:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60cbcdd-ce8c-49cc-a4ef-f2a324be18bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f0f27b65-aab4-4ab1-ade1-b58eb7124f88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.217 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f0f27b65-aab4-4ab1-ade1-b58eb7124f88 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.218 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:47:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:03Z|00840|binding|INFO|Setting lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 ovn-installed in OVS
Nov 25 11:47:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:03Z|00841|binding|INFO|Setting lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 up in Southbound
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.238 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb8021-efc2-4926-a1c0-420571667ae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:03 np0005535469 systemd-machined[216343]: New machine qemu-108-instance-00000056.
Nov 25 11:47:03 np0005535469 systemd[1]: Started Virtual Machine qemu-108-instance-00000056.
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.280 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[46bd69b1-f2d7-4266-bfdd-58b6c6fd8f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.284 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28fb8eb9-d52a-4ae2-8587-14719fdcb2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.322 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9db90a2f-0233-4dd4-ad27-b25cf57cae3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.343 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d31af113-ede8-4fdd-ab62-4513857c1fc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344204, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15042de2-a0b3-4c37-98be-b9e011511841]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344206, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344206, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.365 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.365 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.366 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.366 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.654 254096 DEBUG nova.network.neutron [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updated VIF entry in instance network info cache for port f0f27b65-aab4-4ab1-ade1-b58eb7124f88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.654 254096 DEBUG nova.network.neutron [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updating instance_info_cache with network_info: [{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.669 254096 DEBUG oslo_concurrency.lockutils [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.676 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089223.6759393, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.676 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.694 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.698 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089223.6760862, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.698 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.713 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.716 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.731 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.737 254096 DEBUG nova.compute.manager [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG oslo_concurrency.lockutils [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG oslo_concurrency.lockutils [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG oslo_concurrency.lockutils [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG nova.compute.manager [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Processing event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.739 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.743 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089223.7432911, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.744 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.747 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.751 254096 INFO nova.virt.libvirt.driver [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance spawned successfully.#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.752 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.762 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.767 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.776 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.776 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.776 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.777 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.777 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.777 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.782 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.828 254096 INFO nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 8.64 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.829 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.885 254096 INFO nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 9.55 seconds to build instance.#033[00m
Nov 25 11:47:03 np0005535469 nova_compute[254092]: 2025-11-25 16:47:03.903 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.216 254096 DEBUG nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.254 254096 INFO nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] instance snapshotting#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.256 254096 DEBUG nova.objects.instance [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'flavor' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.425 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.426 254096 INFO nova.compute.manager [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Terminating instance#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.427 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "refresh_cache-b4cc1fd8-a1ed-40f8-8373-33b1d1260300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.427 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquired lock "refresh_cache-b4cc1fd8-a1ed-40f8-8373-33b1d1260300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.427 254096 DEBUG nova.network.neutron [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.574 254096 INFO nova.virt.libvirt.driver [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Beginning live snapshot process#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.629 254096 DEBUG nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.637 254096 DEBUG nova.network.neutron [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.970 254096 DEBUG nova.network.neutron [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.984 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Releasing lock "refresh_cache-b4cc1fd8-a1ed-40f8-8373-33b1d1260300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:04 np0005535469 nova_compute[254092]: 2025-11-25 16:47:04.985 254096 DEBUG nova.compute.manager [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:47:05 np0005535469 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 25 11:47:05 np0005535469 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Consumed 3.530s CPU time.
Nov 25 11:47:05 np0005535469 systemd-machined[216343]: Machine qemu-107-instance-00000057 terminated.
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.156 254096 DEBUG nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.207 254096 INFO nova.virt.libvirt.driver [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance destroyed successfully.#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.207 254096 DEBUG nova.objects.instance [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'resources' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.536 254096 INFO nova.virt.libvirt.driver [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deleting instance files /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_del#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.537 254096 INFO nova.virt.libvirt.driver [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deletion of /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_del complete#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.584 254096 INFO nova.compute.manager [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.585 254096 DEBUG oslo.service.loopingcall [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.585 254096 DEBUG nova.compute.manager [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.585 254096 DEBUG nova.network.neutron [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.672 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.673 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.674 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.674 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.676 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.678 254096 INFO nova.compute.manager [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Terminating instance#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.679 254096 DEBUG nova.compute.manager [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:47:05 np0005535469 kernel: tapf0f27b65-aa (unregistering): left promiscuous mode
Nov 25 11:47:05 np0005535469 NetworkManager[48891]: <info>  [1764089225.7222] device (tapf0f27b65-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.726 254096 DEBUG nova.network.neutron [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:05Z|00842|binding|INFO|Releasing lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 from this chassis (sb_readonly=0)
Nov 25 11:47:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:05Z|00843|binding|INFO|Setting lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 down in Southbound
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:05Z|00844|binding|INFO|Removing iface tapf0f27b65-aa ovn-installed in OVS
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.740 254096 DEBUG nova.network.neutron [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.742 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ac:77 10.100.0.4'], port_security=['fa:16:3e:1f:ac:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60cbcdd-ce8c-49cc-a4ef-f2a324be18bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f0f27b65-aab4-4ab1-ade1-b58eb7124f88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.743 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f0f27b65-aab4-4ab1-ade1-b58eb7124f88 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.744 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.754 254096 INFO nova.compute.manager [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 0.17 seconds to deallocate network for instance.#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[749baa4b-a3f2-4708-a8e0-cde450cb5093]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:05 np0005535469 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 25 11:47:05 np0005535469 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000056.scope: Consumed 2.428s CPU time.
Nov 25 11:47:05 np0005535469 systemd-machined[216343]: Machine qemu-108-instance-00000056 terminated.
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.803 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.803 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9899f574-975f-4772-95ed-b5af2c8cb9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.804 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.806 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec27c61-ba3f-431b-b889-41a25bdc3a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.833 254096 DEBUG nova.compute.manager [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.834 254096 DEBUG oslo_concurrency.lockutils [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.834 254096 DEBUG oslo_concurrency.lockutils [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.834 254096 DEBUG oslo_concurrency.lockutils [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.835 254096 DEBUG nova.compute.manager [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] No waiting events found dispatching network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.835 254096 WARNING nova.compute.manager [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received unexpected event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.838 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3e387705-90c3-4cb7-9d26-5490a4bc1707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.860 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4244b066-760e-4535-8191-c7813515f6d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344282, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd129e8b-d74a-49dd-95eb-75bb8f3fad7c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344283, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344283, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.886 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.897 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.898 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.898 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.898 254096 DEBUG oslo_concurrency.processutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.899 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.937 254096 INFO nova.virt.libvirt.driver [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance destroyed successfully.#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.938 254096 DEBUG nova.objects.instance [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.949 254096 DEBUG nova.virt.libvirt.vif [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-262344867',display_name='tempest-ServersTestJSON-server-262344867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-262344867',id=86,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQJGP+WBaWzhABP2DN5HUriBkruJv6UMUXVsLc1/QAALeyb1ZBY8xYIWQD6vo5KB897SaecCjpvmyzj4CBX1phIjZTvTDr6O/Dm/ASyK8mTfIM6RZCRLmfnDkwJw8Gnzw==',key_name='tempest-key-735549316',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-6ok0603s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:03Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.950 254096 DEBUG nova.network.os_vif_util [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.951 254096 DEBUG nova.network.os_vif_util [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.951 254096 DEBUG os_vif [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.954 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f27b65-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:05 np0005535469 nova_compute[254092]: 2025-11-25 16:47:05.958 254096 INFO os_vif [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa')#033[00m
Nov 25 11:47:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.315 254096 INFO nova.virt.libvirt.driver [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deleting instance files /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_del#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.315 254096 INFO nova.virt.libvirt.driver [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deletion of /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_del complete#033[00m
Nov 25 11:47:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455516645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.334 254096 DEBUG oslo_concurrency.processutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.339 254096 DEBUG nova.compute.provider_tree [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.354 254096 DEBUG nova.scheduler.client.report [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.361 254096 INFO nova.compute.manager [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.361 254096 DEBUG oslo.service.loopingcall [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.361 254096 DEBUG nova.compute.manager [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.362 254096 DEBUG nova.network.neutron [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.371 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.422 254096 INFO nova.scheduler.client.report [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Deleted allocations for instance b4cc1fd8-a1ed-40f8-8373-33b1d1260300#033[00m
Nov 25 11:47:06 np0005535469 nova_compute[254092]: 2025-11-25 16:47:06.473 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 194 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.909 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-unplugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.910 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.910 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.910 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] No waiting events found dispatching network-vif-unplugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-unplugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] No waiting events found dispatching network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:47:07 np0005535469 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 WARNING nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received unexpected event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for instance with vm_state active and task_state deleting.
Nov 25 11:47:08 np0005535469 nova_compute[254092]: 2025-11-25 16:47:08.428 254096 DEBUG nova.network.neutron [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:47:08 np0005535469 nova_compute[254092]: 2025-11-25 16:47:08.453 254096 INFO nova.compute.manager [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 2.09 seconds to deallocate network for instance.
Nov 25 11:47:08 np0005535469 nova_compute[254092]: 2025-11-25 16:47:08.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:08 np0005535469 nova_compute[254092]: 2025-11-25 16:47:08.503 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:08 np0005535469 nova_compute[254092]: 2025-11-25 16:47:08.503 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:08 np0005535469 nova_compute[254092]: 2025-11-25 16:47:08.569 254096 DEBUG oslo_concurrency.processutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197991953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.021 254096 DEBUG oslo_concurrency.processutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.028 254096 DEBUG nova.compute.provider_tree [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.046 254096 DEBUG nova.scheduler.client.report [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.072 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.101 254096 INFO nova.scheduler.client.report [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.166 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:47:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 194 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 208 op/s
Nov 25 11:47:09 np0005535469 nova_compute[254092]: 2025-11-25 16:47:09.987 254096 DEBUG nova.compute.manager [req-b65a6aa9-89f1-4edd-863f-b2c70de65693 req-2a3eead5-b526-4dd9-b93c-fe4d2ed49d62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-deleted-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:47:10 np0005535469 nova_compute[254092]: 2025-11-25 16:47:10.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:11 np0005535469 nova_compute[254092]: 2025-11-25 16:47:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:47:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 121 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 242 op/s
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.178 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.178 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.211 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.292 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.293 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.298 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.298 254096 INFO nova.compute.claims [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.422 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242272465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.860 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.866 254096 DEBUG nova.compute.provider_tree [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.887 254096 DEBUG nova.scheduler.client.report [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.913 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.913 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.975 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.975 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:47:12 np0005535469 nova_compute[254092]: 2025-11-25 16:47:12.998 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.014 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.106 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.107 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.108 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Creating image(s)
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.130 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.159 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.186 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.191 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.293 254096 DEBUG nova.policy [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.298 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.299 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.300 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.300 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.326 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.331 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6affd696-c15d-4401-8512-2aabbf55fd4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:13.624 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:13.624 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.683 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6affd696-c15d-4401-8512-2aabbf55fd4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.738 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:47:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 121 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 200 op/s
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.816 254096 DEBUG nova.objects.instance [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 6affd696-c15d-4401-8512-2aabbf55fd4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.823 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.824 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.831 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.832 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Ensure instance console log exists: /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.832 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.832 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.833 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.858 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.927 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.928 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.936 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:47:13 np0005535469 nova_compute[254092]: 2025-11-25 16:47:13.937 254096 INFO nova.compute.claims [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.116 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.567 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Successfully created port: be2a1b3b-f8a0-4a67-9582-54b753171490 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:47:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999882169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.593 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.600 254096 DEBUG nova.compute.provider_tree [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.613 254096 DEBUG nova.scheduler.client.report [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.634 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.634 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:47:14 np0005535469 podman[344570]: 2025-11-25 16:47:14.658250278 +0000 UTC m=+0.062848080 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:47:14 np0005535469 podman[344567]: 2025-11-25 16:47:14.663829959 +0000 UTC m=+0.068196543 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.684 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.684 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.703 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:47:14 np0005535469 podman[344571]: 2025-11-25 16:47:14.703843517 +0000 UTC m=+0.101072948 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.718 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.794 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.795 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.796 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating image(s)#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.818 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.845 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.870 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.874 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.907 254096 DEBUG nova.policy [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23f6db77558a477bbd8b8b46cb4107d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.942 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.943 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.944 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.944 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.969 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:14 np0005535469 nova_compute[254092]: 2025-11-25 16:47:14.972 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73301044-3bad-4401-9e30-f009d417f662_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.272 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73301044-3bad-4401-9e30-f009d417f662_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.325 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] resizing rbd image 73301044-3bad-4401-9e30-f009d417f662_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.392 254096 DEBUG nova.objects.instance [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.403 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.404 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Ensure instance console log exists: /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.418 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.418 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.418 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 157 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 983 KiB/s wr, 227 op/s
Nov 25 11:47:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619001051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:15 np0005535469 nova_compute[254092]: 2025-11-25 16:47:15.963 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.041 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.042 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:47:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.245 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.942779541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.247 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.247 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.296 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Successfully updated port: be2a1b3b-f8a0-4a67-9582-54b753171490 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.313 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.314 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.314 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8b20d119-17cb-4742-9223-90e5020f93a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6affd696-c15d-4401-8512-2aabbf55fd4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73301044-3bad-4401-9e30-f009d417f662 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.387 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.428 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Successfully created port: 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.502 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.609 254096 DEBUG nova.compute.manager [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-changed-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.609 254096 DEBUG nova.compute.manager [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Refreshing instance network info cache due to event network-changed-be2a1b3b-f8a0-4a67-9582-54b753171490. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.610 254096 DEBUG oslo_concurrency.lockutils [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046386077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.807 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.814 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.826 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:47:16 np0005535469 nova_compute[254092]: 2025-11-25 16:47:16.844 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 189 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Nov 25 11:47:17 np0005535469 nova_compute[254092]: 2025-11-25 16:47:17.845 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:47:17 np0005535469 nova_compute[254092]: 2025-11-25 16:47:17.846 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:47:17 np0005535469 nova_compute[254092]: 2025-11-25 16:47:17.875 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:47:17 np0005535469 nova_compute[254092]: 2025-11-25 16:47:17.875 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.118 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updating instance_info_cache with network_info: [{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.136 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.137 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance network_info: |[{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.137 254096 DEBUG oslo_concurrency.lockutils [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.138 254096 DEBUG nova.network.neutron [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Refreshing network info cache for port be2a1b3b-f8a0-4a67-9582-54b753171490 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.140 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start _get_guest_xml network_info=[{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.145 254096 WARNING nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.155 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.156 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.160 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.161 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.161 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.162 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.162 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.162 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.167 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.323 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.323 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.336 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.403 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.404 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.411 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.411 254096 INFO nova.compute.claims [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.525 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.604 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207592377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.652 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.675 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.679 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.796 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Successfully updated port: 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.809 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.809 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.809 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.890 254096 DEBUG nova.compute.manager [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.891 254096 DEBUG nova.compute.manager [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing instance network info cache due to event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.891 254096 DEBUG oslo_concurrency.lockutils [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:18 np0005535469 nova_compute[254092]: 2025-11-25 16:47:18.996 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820097639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.029 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.035 254096 DEBUG nova.compute.provider_tree [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.049 254096 DEBUG nova.scheduler.client.report [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.068 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.069 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:47:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123912769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.110 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.111 254096 DEBUG nova.virt.libvirt.vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=88,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-nafun8zg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:13Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=6affd696-c15d-4401-8512-2aabbf55fd4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.111 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.112 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.113 254096 DEBUG nova.objects.instance [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6affd696-c15d-4401-8512-2aabbf55fd4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.129 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.129 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.132 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <uuid>6affd696-c15d-4401-8512-2aabbf55fd4e</uuid>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <name>instance-00000058</name>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-694487906</nova:name>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:47:18</nova:creationTime>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <nova:port uuid="be2a1b3b-f8a0-4a67-9582-54b753171490">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <entry name="serial">6affd696-c15d-4401-8512-2aabbf55fd4e</entry>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <entry name="uuid">6affd696-c15d-4401-8512-2aabbf55fd4e</entry>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6affd696-c15d-4401-8512-2aabbf55fd4e_disk">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:8a:e5:4f"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <target dev="tapbe2a1b3b-f8"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/console.log" append="off"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:47:19 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:47:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:47:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:47:19 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.134 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Preparing to wait for external event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.134 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.134 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.135 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.135 254096 DEBUG nova.virt.libvirt.vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=88,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-nafun8zg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:13Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=6affd696-c15d-4401-8512-2aabbf55fd4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.135 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.136 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.136 254096 DEBUG os_vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.140 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.140 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.144 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe2a1b3b-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe2a1b3b-f8, col_values=(('external_ids', {'iface-id': 'be2a1b3b-f8a0-4a67-9582-54b753171490', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:e5:4f', 'vm-uuid': '6affd696-c15d-4401-8512-2aabbf55fd4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.147 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:47:19 np0005535469 NetworkManager[48891]: <info>  [1764089239.1480] manager: (tapbe2a1b3b-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.157 254096 INFO os_vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8')
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.175 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.243 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.244 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.244 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:8a:e5:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.244 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Using config drive
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.262 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.302 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.304 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.304 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Creating image(s)
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.323 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.347 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.370 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.374 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.440 254096 DEBUG nova.policy [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.443 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.444 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.444 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.444 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.468 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.472 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2122cb4e-4525-451f-a46f-184e4a72cb34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:47:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 189 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.783 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2122cb4e-4525-451f-a46f-184e4a72cb34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.846 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.881 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Creating config drive at /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.887 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfacujh8t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.961 254096 DEBUG nova.objects.instance [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.965 254096 DEBUG nova.network.neutron [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updated VIF entry in instance network info cache for port be2a1b3b-f8a0-4a67-9582-54b753171490. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.965 254096 DEBUG nova.network.neutron [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updating instance_info_cache with network_info: [{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.978 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.978 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Ensure instance console log exists: /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.978 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.979 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.979 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:19 np0005535469 nova_compute[254092]: 2025-11-25 16:47:19.995 254096 DEBUG oslo_concurrency.lockutils [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.025 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfacujh8t" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.046 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.052 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.170 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.183 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.184 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deleting local config drive /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config because it was imported into RBD.
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.185 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.186 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance network_info: |[{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.186 254096 DEBUG oslo_concurrency.lockutils [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.186 254096 DEBUG nova.network.neutron [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.189 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start _get_guest_xml network_info=[{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.193 254096 WARNING nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.204 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.204 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.205 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089225.204876, b4cc1fd8-a1ed-40f8-8373-33b1d1260300 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.205 254096 INFO nova.compute.manager [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] VM Stopped (Lifecycle Event)
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.212 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.212 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.215 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.215 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.218 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:20 np0005535469 kernel: tapbe2a1b3b-f8: entered promiscuous mode
Nov 25 11:47:20 np0005535469 NetworkManager[48891]: <info>  [1764089240.2463] manager: (tapbe2a1b3b-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.254 254096 DEBUG nova.compute.manager [None req-bc86e9ea-46bc-4882-9bcc-a56f9801d154 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:20 np0005535469 systemd-udevd[345163]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:47:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:20Z|00845|binding|INFO|Claiming lport be2a1b3b-f8a0-4a67-9582-54b753171490 for this chassis.
Nov 25 11:47:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:20Z|00846|binding|INFO|be2a1b3b-f8a0-4a67-9582-54b753171490: Claiming fa:16:3e:8a:e5:4f 10.100.0.14
Nov 25 11:47:20 np0005535469 NetworkManager[48891]: <info>  [1764089240.2920] device (tapbe2a1b3b-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:47:20 np0005535469 NetworkManager[48891]: <info>  [1764089240.2929] device (tapbe2a1b3b-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.300 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:e5:4f 10.100.0.14'], port_security=['fa:16:3e:8a:e5:4f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6affd696-c15d-4401-8512-2aabbf55fd4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=be2a1b3b-f8a0-4a67-9582-54b753171490) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.304 163338 INFO neutron.agent.ovn.metadata.agent [-] Port be2a1b3b-f8a0-4a67-9582-54b753171490 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.310 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:47:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:20Z|00847|binding|INFO|Setting lport be2a1b3b-f8a0-4a67-9582-54b753171490 ovn-installed in OVS
Nov 25 11:47:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:20Z|00848|binding|INFO|Setting lport be2a1b3b-f8a0-4a67-9582-54b753171490 up in Southbound
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:20 np0005535469 systemd-machined[216343]: New machine qemu-109-instance-00000058.
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d34f18f-6405-45bc-9ff9-040c3b4dcb48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:20 np0005535469 systemd[1]: Started Virtual Machine qemu-109-instance-00000058.
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.365 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24699707-7cc9-49c8-9f2c-a020c236450a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.368 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c86fc115-c4d4-49da-9f15-529c03ed4e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.419 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7c8638-a8cc-4c56-9685-ed03075425cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.445 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26e76eae-2b74-453d-89f5-bcf6dfa30131]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345249, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.465 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2573eef-04ed-4bd2-8ccd-5ee8aaf585ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345267, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345267, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.466 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262003797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.698 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.722 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.726 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.899 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Successfully created port: 54bd7c02-9f22-4656-9514-7219e656dbef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.935 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089225.9170492, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.935 254096 INFO nova.compute.manager [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:47:20 np0005535469 nova_compute[254092]: 2025-11-25 16:47:20.959 254096 DEBUG nova.compute.manager [None req-fa44ff10-ccab-4a6d-b0b0-00582f701b25 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:47:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0bbf03ff-0b99-4ebf-b745-64f9fd3a9c04 does not exist
Nov 25 11:47:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e0da7f1a-e5ce-4d44-881a-ac0ae9fd0b1a does not exist
Nov 25 11:47:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 860e9e3e-fccd-4b67-b670-0b215a4ae660 does not exist
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.083 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089241.0832179, 6affd696-c15d-4401-8512-2aabbf55fd4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.084 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.101 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.107 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089241.084172, 6affd696-c15d-4401-8512-2aabbf55fd4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.107 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.121 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.127 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.148 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748233305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.169 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.170 254096 DEBUG nova.virt.libvirt.vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.171 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.171 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.173 254096 DEBUG nova.objects.instance [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.190 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <uuid>73301044-3bad-4401-9e30-f009d417f662</uuid>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <name>instance-00000059</name>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestOtherB-server-932750089</nova:name>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:47:20</nova:creationTime>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <nova:port uuid="792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <entry name="serial">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <entry name="uuid">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk.config">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:c4:5c:49"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <target dev="tap792a5867-7e"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log" append="off"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:47:21 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:47:21 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:47:21 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:47:21 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.190 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Preparing to wait for external event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.190 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.191 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.191 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.192 254096 DEBUG nova.virt.libvirt.vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.192 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.193 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.193 254096 DEBUG os_vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.195 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.195 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.198 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap792a5867-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.198 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap792a5867-7e, col_values=(('external_ids', {'iface-id': '792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:5c:49', 'vm-uuid': '73301044-3bad-4401-9e30-f009d417f662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:21 np0005535469 NetworkManager[48891]: <info>  [1764089241.2008] manager: (tap792a5867-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.206 254096 INFO os_vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.257 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.258 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.258 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:c4:5c:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.258 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Using config drive#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.278 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.416 254096 DEBUG nova.network.neutron [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated VIF entry in instance network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.417 254096 DEBUG nova.network.neutron [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.434 254096 DEBUG oslo_concurrency.lockutils [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:21 np0005535469 nova_compute[254092]: 2025-11-25 16:47:21.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.609962561 +0000 UTC m=+0.039148115 container create 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:47:21 np0005535469 systemd[1]: Started libpod-conmon-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope.
Nov 25 11:47:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.594136232 +0000 UTC m=+0.023321766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.707584634 +0000 UTC m=+0.136770178 container init 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.714531604 +0000 UTC m=+0.143717118 container start 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.717493384 +0000 UTC m=+0.146678918 container attach 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:47:21 np0005535469 lucid_newton[345591]: 167 167
Nov 25 11:47:21 np0005535469 systemd[1]: libpod-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope: Deactivated successfully.
Nov 25 11:47:21 np0005535469 conmon[345591]: conmon 434993ca0d8cc7ad6f53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope/container/memory.events
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.722345656 +0000 UTC m=+0.151531170 container died 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:47:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0ce4e6ccf90b7f6e6f7ba9f616b62da07e24d39576900036bf1a70f6201016f8-merged.mount: Deactivated successfully.
Nov 25 11:47:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 259 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.3 MiB/s wr, 118 op/s
Nov 25 11:47:21 np0005535469 podman[345575]: 2025-11-25 16:47:21.766322361 +0000 UTC m=+0.195507905 container remove 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 25 11:47:21 np0005535469 systemd[1]: libpod-conmon-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope: Deactivated successfully.
Nov 25 11:47:21 np0005535469 podman[345616]: 2025-11-25 16:47:21.953857787 +0000 UTC m=+0.034362825 container create 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:47:21 np0005535469 systemd[1]: Started libpod-conmon-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope.
Nov 25 11:47:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:22 np0005535469 podman[345616]: 2025-11-25 16:47:22.028196068 +0000 UTC m=+0.108701126 container init 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:47:22 np0005535469 podman[345616]: 2025-11-25 16:47:21.939914378 +0000 UTC m=+0.020419436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:47:22 np0005535469 podman[345616]: 2025-11-25 16:47:22.039457174 +0000 UTC m=+0.119962222 container start 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:47:22 np0005535469 podman[345616]: 2025-11-25 16:47:22.042833505 +0000 UTC m=+0.123338573 container attach 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.183 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating config drive at /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.190 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85sipxzv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.329 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85sipxzv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.353 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.358 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.509 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Successfully updated port: 54bd7c02-9f22-4656-9514-7219e656dbef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.521 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.522 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting local config drive /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config because it was imported into RBD.#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.535 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.535 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.536 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:22 np0005535469 kernel: tap792a5867-7e: entered promiscuous mode
Nov 25 11:47:22 np0005535469 NetworkManager[48891]: <info>  [1764089242.5721] manager: (tap792a5867-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 25 11:47:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:22Z|00849|binding|INFO|Claiming lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for this chassis.
Nov 25 11:47:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:22Z|00850|binding|INFO|792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3: Claiming fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 11:47:22 np0005535469 systemd-udevd[345168]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.587 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.588 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis#033[00m
Nov 25 11:47:22 np0005535469 NetworkManager[48891]: <info>  [1764089242.5906] device (tap792a5867-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.590 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:47:22 np0005535469 NetworkManager[48891]: <info>  [1764089242.5914] device (tap792a5867-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.605 254096 DEBUG nova.compute.manager [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.606 254096 DEBUG nova.compute.manager [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing instance network info cache due to event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.605 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8797a822-430e-4070-bab2-d15970e4d11e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.606 254096 DEBUG oslo_concurrency.lockutils [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.606 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34b8c77e-81 in ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.608 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34b8c77e-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.608 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ee576e-55ad-4a6e-b78e-f73dee203d27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[160705a4-23cc-40e9-83af-e2e8e5ba3b22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.622 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d7aa58-f4e6-42f8-9309-5a97598ccaa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 systemd-machined[216343]: New machine qemu-110-instance-00000059.
Nov 25 11:47:22 np0005535469 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc914a2e-9409-4587-931a-3ffabc269bec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:22Z|00851|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 ovn-installed in OVS
Nov 25 11:47:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:22Z|00852|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 up in Southbound
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.673 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4e84aa51-f890-448e-aa94-05591210fbb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.680 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[717c4cd0-d11f-4b36-a51e-6c65539b486d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 NetworkManager[48891]: <info>  [1764089242.6810] manager: (tap34b8c77e-80): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.692 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.720 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf53c866-4285-45ec-a27c-4c1cb7c233c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.722 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0a513186-ff5c-4592-96de-8bbcc4231be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 NetworkManager[48891]: <info>  [1764089242.7459] device (tap34b8c77e-80): carrier: link connected
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.751 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ebcce5-862b-4b78-b318-aad85c29cd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.769 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84f90e7a-3d2f-4d53-b0da-71acfc6c300c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345728, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9733fdd-47d0-48ca-b605-fc0129aa2142]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:4984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570033, 'tstamp': 570033}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345731, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.796 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba0a104-51b6-4b2d-911c-17af34d1f4a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345733, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.826 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e27182-a91a-4093-a3c6-a798abb9f425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.878 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c5f388-f546-44b9-823f-b6a196505fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.879 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.880 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.880 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 NetworkManager[48891]: <info>  [1764089242.8827] manager: (tap34b8c77e-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 25 11:47:22 np0005535469 kernel: tap34b8c77e-80: entered promiscuous mode
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.887 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:22Z|00853|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.890 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34b8c77e-8369-4eab-a81e-0825e5fa2919.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34b8c77e-8369-4eab-a81e-0825e5fa2919.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.891 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4abc29f4-f714-4c06-a1de-f5636eb5557c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.892 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/34b8c77e-8369-4eab-a81e-0825e5fa2919.pid.haproxy
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:47:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.892 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'env', 'PROCESS_TAG=haproxy-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34b8c77e-8369-4eab-a81e-0825e5fa2919.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:47:22 np0005535469 nova_compute[254092]: 2025-11-25 16:47:22.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:23 np0005535469 objective_mcnulty[345632]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:47:23 np0005535469 objective_mcnulty[345632]: --> relative data size: 1.0
Nov 25 11:47:23 np0005535469 objective_mcnulty[345632]: --> All data devices are unavailable
Nov 25 11:47:23 np0005535469 systemd[1]: libpod-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope: Deactivated successfully.
Nov 25 11:47:23 np0005535469 conmon[345632]: conmon 3cb71bfa33cb47b098c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope/container/memory.events
Nov 25 11:47:23 np0005535469 podman[345759]: 2025-11-25 16:47:23.123435041 +0000 UTC m=+0.028390202 container died 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 11:47:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b-merged.mount: Deactivated successfully.
Nov 25 11:47:23 np0005535469 podman[345759]: 2025-11-25 16:47:23.205279885 +0000 UTC m=+0.110235026 container remove 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:47:23 np0005535469 systemd[1]: libpod-conmon-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope: Deactivated successfully.
Nov 25 11:47:23 np0005535469 podman[345792]: 2025-11-25 16:47:23.273857638 +0000 UTC m=+0.054819780 container create 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:47:23 np0005535469 systemd[1]: Started libpod-conmon-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a.scope.
Nov 25 11:47:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e7daa1a78ba94c079dcbea794e084a9276b64e1bfa6e1a2b7fa4bec0c3a08d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:23 np0005535469 podman[345792]: 2025-11-25 16:47:23.244515331 +0000 UTC m=+0.025477503 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:47:23 np0005535469 podman[345792]: 2025-11-25 16:47:23.350323777 +0000 UTC m=+0.131285949 container init 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:47:23 np0005535469 podman[345792]: 2025-11-25 16:47:23.355727264 +0000 UTC m=+0.136689406 container start 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:47:23 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : New worker (345862) forked
Nov 25 11:47:23 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : Loading success.
Nov 25 11:47:23 np0005535469 nova_compute[254092]: 2025-11-25 16:47:23.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 259 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 5.3 MiB/s wr, 85 op/s
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.813922345 +0000 UTC m=+0.040067170 container create c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:47:23 np0005535469 systemd[1]: Started libpod-conmon-c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81.scope.
Nov 25 11:47:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.798380453 +0000 UTC m=+0.024525298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.89802744 +0000 UTC m=+0.124172275 container init c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.908131475 +0000 UTC m=+0.134276300 container start c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:47:23 np0005535469 agitated_yalow[345978]: 167 167
Nov 25 11:47:23 np0005535469 systemd[1]: libpod-c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81.scope: Deactivated successfully.
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.916522453 +0000 UTC m=+0.142667358 container attach c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.91715117 +0000 UTC m=+0.143295995 container died c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:47:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6595cea6809860e4aeeddf78fe2a091798ccaa4a3f3d4fe303ce6f60a8a46a11-merged.mount: Deactivated successfully.
Nov 25 11:47:23 np0005535469 podman[345961]: 2025-11-25 16:47:23.961848925 +0000 UTC m=+0.187993750 container remove c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:47:23 np0005535469 systemd[1]: libpod-conmon-c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81.scope: Deactivated successfully.
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.024 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.042 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.043 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance network_info: |[{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.043 254096 DEBUG oslo_concurrency.lockutils [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.043 254096 DEBUG nova.network.neutron [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.046 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start _get_guest_xml network_info=[{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.051 254096 WARNING nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.057 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.058 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.066 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.067 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.067 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.067 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.068 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.068 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.068 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.069 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.069 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.069 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.073 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:24 np0005535469 podman[346003]: 2025-11-25 16:47:24.15010814 +0000 UTC m=+0.042686120 container create 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:47:24 np0005535469 systemd[1]: Started libpod-conmon-7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f.scope.
Nov 25 11:47:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:24 np0005535469 podman[346003]: 2025-11-25 16:47:24.133253673 +0000 UTC m=+0.025831643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:47:24 np0005535469 podman[346003]: 2025-11-25 16:47:24.23731602 +0000 UTC m=+0.129894010 container init 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:47:24 np0005535469 podman[346003]: 2025-11-25 16:47:24.245912214 +0000 UTC m=+0.138490184 container start 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:47:24 np0005535469 podman[346003]: 2025-11-25 16:47:24.24906096 +0000 UTC m=+0.141638950 container attach 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:47:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901929645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.612 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.640 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.646 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.789 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089244.788521, 73301044-3bad-4401-9e30-f009d417f662 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.789 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.809 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.815 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089244.7915783, 73301044-3bad-4401-9e30-f009d417f662 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.815 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.837 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.840 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:24 np0005535469 nova_compute[254092]: 2025-11-25 16:47:24.859 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.073 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.073 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.087 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:47:25 np0005535469 musing_wilson[346020]: {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:    "0": [
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:        {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "devices": [
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "/dev/loop3"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            ],
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_name": "ceph_lv0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_size": "21470642176",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "name": "ceph_lv0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "tags": {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cluster_name": "ceph",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.crush_device_class": "",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.encrypted": "0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osd_id": "0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.type": "block",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.vdo": "0"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            },
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "type": "block",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "vg_name": "ceph_vg0"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:        }
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:    ],
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:    "1": [
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:        {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "devices": [
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "/dev/loop4"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            ],
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_name": "ceph_lv1",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_size": "21470642176",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "name": "ceph_lv1",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "tags": {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cluster_name": "ceph",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.crush_device_class": "",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.encrypted": "0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osd_id": "1",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.type": "block",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.vdo": "0"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            },
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "type": "block",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "vg_name": "ceph_vg1"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:        }
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:    ],
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:    "2": [
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:        {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "devices": [
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "/dev/loop5"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            ],
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_name": "ceph_lv2",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_size": "21470642176",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "name": "ceph_lv2",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "tags": {
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.cluster_name": "ceph",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.crush_device_class": "",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.encrypted": "0",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osd_id": "2",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.type": "block",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:                "ceph.vdo": "0"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            },
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "type": "block",
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:            "vg_name": "ceph_vg2"
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:        }
Nov 25 11:47:25 np0005535469 musing_wilson[346020]:    ]
Nov 25 11:47:25 np0005535469 musing_wilson[346020]: }
Nov 25 11:47:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289262250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.159 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.160 254096 DEBUG nova.virt.libvirt.vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399723727',display_name='tempest-TestNetworkAdvancedServerOps-server-1399723727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399723727',id=90,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnJ2dj3tQavSsgt3v0xzD62McsGR8kH7FVN3Mskcpal4JOU2s80ZUbXF/gFef079w4ZACdh3Ov4E4/XDFuKoso7mgUy6/r/VedNuEZjiR2unDQEIrd20/t0Y7CqF7ga+A==',key_name='tempest-TestNetworkAdvancedServerOps-714044061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-yk420f19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:19Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2122cb4e-4525-451f-a46f-184e4a72cb34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.161 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.161 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.162 254096 DEBUG nova.objects.instance [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:25 np0005535469 systemd[1]: libpod-7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f.scope: Deactivated successfully.
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.176 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <uuid>2122cb4e-4525-451f-a46f-184e4a72cb34</uuid>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <name>instance-0000005a</name>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1399723727</nova:name>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:47:24</nova:creationTime>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <nova:port uuid="54bd7c02-9f22-4656-9514-7219e656dbef">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <entry name="serial">2122cb4e-4525-451f-a46f-184e4a72cb34</entry>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <entry name="uuid">2122cb4e-4525-451f-a46f-184e4a72cb34</entry>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2122cb4e-4525-451f-a46f-184e4a72cb34_disk">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:42:5b:d2"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <target dev="tap54bd7c02-9f"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/console.log" append="off"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:47:25 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:47:25 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:47:25 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:47:25 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.176 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Preparing to wait for external event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.176 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.177 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.177 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.177 254096 DEBUG nova.virt.libvirt.vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399723727',display_name='tempest-TestNetworkAdvancedServerOps-server-1399723727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399723727',id=90,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnJ2dj3tQavSsgt3v0xzD62McsGR8kH7FVN3Mskcpal4JOU2s80ZUbXF/gFef079w4ZACdh3Ov4E4/XDFuKoso7mgUy6/r/VedNuEZjiR2unDQEIrd20/t0Y7CqF7ga+A==',key_name='tempest-TestNetworkAdvancedServerOps-714044061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-yk420f19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:19Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2122cb4e-4525-451f-a46f-184e4a72cb34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.178 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.178 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.178 254096 DEBUG os_vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.179 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.179 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.182 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.182 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54bd7c02-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54bd7c02-9f, col_values=(('external_ids', {'iface-id': '54bd7c02-9f22-4656-9514-7219e656dbef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:5b:d2', 'vm-uuid': '2122cb4e-4525-451f-a46f-184e4a72cb34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:25 np0005535469 NetworkManager[48891]: <info>  [1764089245.1877] manager: (tap54bd7c02-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.191 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.191 254096 INFO nova.compute.claims [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.198 254096 INFO os_vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f')#033[00m
Nov 25 11:47:25 np0005535469 podman[346133]: 2025-11-25 16:47:25.218664239 +0000 UTC m=+0.032358001 container died 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.272 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.273 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.274 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:42:5b:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:47:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140-merged.mount: Deactivated successfully.
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.275 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Using config drive#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.295 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:25 np0005535469 podman[346133]: 2025-11-25 16:47:25.319072137 +0000 UTC m=+0.132765879 container remove 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:47:25 np0005535469 systemd[1]: libpod-conmon-7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f.scope: Deactivated successfully.
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.450 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.575 254096 DEBUG nova.network.neutron [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updated VIF entry in instance network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.575 254096 DEBUG nova.network.neutron [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.589 254096 DEBUG oslo_concurrency.lockutils [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.677 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Creating config drive at /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.682 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwuure9l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 5.4 MiB/s wr, 98 op/s
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.818 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwuure9l" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.849 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.852 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422928314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.954 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.964 254096 DEBUG nova.compute.provider_tree [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:25 np0005535469 nova_compute[254092]: 2025-11-25 16:47:25.980 254096 DEBUG nova.scheduler.client.report [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.015 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.017 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.027622772 +0000 UTC m=+0.041326043 container create 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 11:47:26 np0005535469 systemd[1]: Started libpod-conmon-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope.
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.078 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.078 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deleting local config drive /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config because it was imported into RBD.#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.090 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.090 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:47:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.008313798 +0000 UTC m=+0.022017089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.109 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.120409054 +0000 UTC m=+0.134112345 container init 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.124 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.12907717 +0000 UTC m=+0.142780441 container start 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:47:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:26 np0005535469 kernel: tap54bd7c02-9f: entered promiscuous mode
Nov 25 11:47:26 np0005535469 agitated_maxwell[346385]: 167 167
Nov 25 11:47:26 np0005535469 NetworkManager[48891]: <info>  [1764089246.1375] manager: (tap54bd7c02-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Nov 25 11:47:26 np0005535469 systemd[1]: libpod-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope: Deactivated successfully.
Nov 25 11:47:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:26Z|00854|binding|INFO|Claiming lport 54bd7c02-9f22-4656-9514-7219e656dbef for this chassis.
Nov 25 11:47:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:26Z|00855|binding|INFO|54bd7c02-9f22-4656-9514-7219e656dbef: Claiming fa:16:3e:42:5b:d2 10.100.0.4
Nov 25 11:47:26 np0005535469 conmon[346385]: conmon 21f6748474a7612dc298 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope/container/memory.events
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.14086264 +0000 UTC m=+0.154565931 container attach 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:26 np0005535469 systemd-udevd[346106]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.143219624 +0000 UTC m=+0.156922895 container died 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.155 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:5b:d2 10.100.0.4'], port_security=['fa:16:3e:42:5b:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2122cb4e-4525-451f-a46f-184e4a72cb34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be38e015-3930-495b-9582-fe9707042e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8e904eb-f3d0-4bff-8be5-5af69a444c2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f101b358-01b6-416d-bcc6-f10ed8ec5155, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54bd7c02-9f22-4656-9514-7219e656dbef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.157 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54bd7c02-9f22-4656-9514-7219e656dbef in datapath be38e015-3930-495b-9582-fe9707042e20 bound to our chassis#033[00m
Nov 25 11:47:26 np0005535469 NetworkManager[48891]: <info>  [1764089246.1594] device (tap54bd7c02-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.158 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be38e015-3930-495b-9582-fe9707042e20#033[00m
Nov 25 11:47:26 np0005535469 NetworkManager[48891]: <info>  [1764089246.1605] device (tap54bd7c02-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.173 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[992340d0-65f3-4edb-a8aa-af487d5ab6cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.175 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe38e015-31 in ovnmeta-be38e015-3930-495b-9582-fe9707042e20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.177 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe38e015-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a78026-90a0-452a-9659-5c7cb8b39bb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.178 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92788db7-5d1e-4377-beec-b9aa3e8ccb74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 systemd-machined[216343]: New machine qemu-111-instance-0000005a.
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.193 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[32c56783-ae13-4a78-a4e5-e6d339090ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Nov 25 11:47:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-37b1209381e3ac8c8557da9bf91430733833f7ff8bf38c7f70ee5bbf004d326c-merged.mount: Deactivated successfully.
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.212 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.216 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.217 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Creating image(s)#033[00m
Nov 25 11:47:26 np0005535469 podman[346366]: 2025-11-25 16:47:26.224928034 +0000 UTC m=+0.238631305 container remove 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[446fc2ea-5cd1-4a69-bed1-e2da5543a734]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:26Z|00856|binding|INFO|Setting lport 54bd7c02-9f22-4656-9514-7219e656dbef ovn-installed in OVS
Nov 25 11:47:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:26Z|00857|binding|INFO|Setting lport 54bd7c02-9f22-4656-9514-7219e656dbef up in Southbound
Nov 25 11:47:26 np0005535469 systemd[1]: libpod-conmon-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope: Deactivated successfully.
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.248 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.270 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[69213224-4e57-411f-abd3-d61c55c123a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.274 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.276 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f44d6dfa-903e-40bb-b5b2-3ff384c50171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 NetworkManager[48891]: <info>  [1764089246.2789] manager: (tapbe38e015-30): new Veth device (/org/freedesktop/NetworkManager/Devices/367)
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.315 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.316 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d31527-97ab-4f96-927f-b7f9ad1aa5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.319 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c4375c12-2b76-4f1a-9fea-8baea2b6f9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.320 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:26 np0005535469 NetworkManager[48891]: <info>  [1764089246.3458] device (tapbe38e015-30): carrier: link connected
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.354 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad0c409-744c-4e11-9c06-deefd7fac36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.364 254096 DEBUG nova.policy [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ccd27eb10a8431bbd43519a883a3970', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a4c85f6be5040518f229e3e2c1c39ae', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.374 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78829372-9e93-4e9e-ab76-f4c300de815f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe38e015-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:2b:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570393, 'reachable_time': 21224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346503, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.394 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40475270-4512-4ca1-9b10-f9298919f369]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:2bd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570393, 'tstamp': 570393}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346509, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.401 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.401 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.410 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe66e6a-2b70-4df7-bc04-688bd97fb900]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe38e015-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:2b:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570393, 'reachable_time': 21224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346517, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.432 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.439 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.446 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6ba086-6272-4f66-8777-5a2bddde333a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 podman[346508]: 2025-11-25 16:47:26.456462746 +0000 UTC m=+0.057582686 container create 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:47:26 np0005535469 systemd[1]: Started libpod-conmon-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope.
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a77229c7-9b24-4311-b14d-2a5614c597ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.515 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe38e015-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.516 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.516 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe38e015-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:26 np0005535469 kernel: tapbe38e015-30: entered promiscuous mode
Nov 25 11:47:26 np0005535469 NetworkManager[48891]: <info>  [1764089246.5191] manager: (tapbe38e015-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.520 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.522 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe38e015-30, col_values=(('external_ids', {'iface-id': 'b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:26 np0005535469 podman[346508]: 2025-11-25 16:47:26.432886846 +0000 UTC m=+0.034006796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:47:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:26Z|00858|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 11:47:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.544 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be38e015-3930-495b-9582-fe9707042e20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be38e015-3930-495b-9582-fe9707042e20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.560 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[231f71f0-b11c-4603-b3cf-07eb398d7929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.562 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-be38e015-3930-495b-9582-fe9707042e20
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/be38e015-3930-495b-9582-fe9707042e20.pid.haproxy
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID be38e015-3930-495b-9582-fe9707042e20
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:47:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.562 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'env', 'PROCESS_TAG=haproxy-be38e015-3930-495b-9582-fe9707042e20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be38e015-3930-495b-9582-fe9707042e20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:47:26 np0005535469 podman[346508]: 2025-11-25 16:47:26.573581379 +0000 UTC m=+0.174701349 container init 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:47:26 np0005535469 podman[346508]: 2025-11-25 16:47:26.581893675 +0000 UTC m=+0.183013625 container start 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:47:26 np0005535469 podman[346508]: 2025-11-25 16:47:26.631934995 +0000 UTC m=+0.233054935 container attach 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.733 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089246.7334042, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.734 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.753 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089246.7389889, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.757 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.773 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.779 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:26 np0005535469 nova_compute[254092]: 2025-11-25 16:47:26.796 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:27 np0005535469 podman[346644]: 2025-11-25 16:47:26.914801512 +0000 UTC m=+0.019473361 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:27.112 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.137 254096 DEBUG nova.compute.manager [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.137 254096 DEBUG oslo_concurrency.lockutils [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.138 254096 DEBUG oslo_concurrency.lockutils [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.138 254096 DEBUG oslo_concurrency.lockutils [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.138 254096 DEBUG nova.compute.manager [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Processing event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.139 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.143 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089247.143349, 6affd696-c15d-4401-8512-2aabbf55fd4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.143 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.145 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.152 254096 INFO nova.virt.libvirt.driver [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance spawned successfully.#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.153 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.169 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.172 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.193 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.201 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.201 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.202 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.202 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.203 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.203 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.271 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Successfully created port: 2fd7f15a-e429-4b39-86da-980a7fbc785f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:47:27 np0005535469 podman[346644]: 2025-11-25 16:47:27.288856847 +0000 UTC m=+0.393528676 container create 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.308 254096 INFO nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 14.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.309 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:27 np0005535469 systemd[1]: Started libpod-conmon-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7.scope.
Nov 25 11:47:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.385 254096 INFO nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 15.11 seconds to build instance.#033[00m
Nov 25 11:47:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838921d87160c6df4cb470730be2019fa42a93dba0b126d7b101a3d59de1d523/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.404 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:27 np0005535469 podman[346644]: 2025-11-25 16:47:27.41335408 +0000 UTC m=+0.518025939 container init 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.415 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:27 np0005535469 podman[346644]: 2025-11-25 16:47:27.421559423 +0000 UTC m=+0.526231252 container start 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:47:27 np0005535469 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : New worker (346697) forked
Nov 25 11:47:27 np0005535469 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : Loading success.
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.481 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] resizing rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:47:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:27.486 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.606 254096 DEBUG nova.objects.instance [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lazy-loading 'migration_context' on Instance uuid 1ccf7cd6-cf8d-400e-820e-940108160fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]: {
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "osd_id": 1,
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "type": "bluestore"
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:    },
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "osd_id": 2,
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "type": "bluestore"
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:    },
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "osd_id": 0,
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:        "type": "bluestore"
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]:    }
Nov 25 11:47:27 np0005535469 exciting_bardeen[346572]: }
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.619 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.620 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Ensure instance console log exists: /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.621 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.622 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:27 np0005535469 nova_compute[254092]: 2025-11-25 16:47:27.622 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:27 np0005535469 systemd[1]: libpod-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope: Deactivated successfully.
Nov 25 11:47:27 np0005535469 systemd[1]: libpod-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope: Consumed 1.016s CPU time.
Nov 25 11:47:27 np0005535469 podman[346774]: 2025-11-25 16:47:27.692579648 +0000 UTC m=+0.031056265 container died 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:47:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9-merged.mount: Deactivated successfully.
Nov 25 11:47:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.4 MiB/s wr, 75 op/s
Nov 25 11:47:27 np0005535469 podman[346774]: 2025-11-25 16:47:27.789252445 +0000 UTC m=+0.127729032 container remove 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:47:27 np0005535469 systemd[1]: libpod-conmon-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope: Deactivated successfully.
Nov 25 11:47:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:47:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:47:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:47:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:47:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8d33e3f0-5479-4331-be94-8630fb77f48c does not exist
Nov 25 11:47:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cd0fef15-99e5-4948-a9b9-bef8def05f2f does not exist
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.033 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Successfully updated port: 2fd7f15a-e429-4b39-86da-980a7fbc785f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.050 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.051 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquired lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.051 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.123 254096 DEBUG nova.compute.manager [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-changed-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.123 254096 DEBUG nova.compute.manager [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Refreshing instance network info cache due to event network-changed-2fd7f15a-e429-4b39-86da-980a7fbc785f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.123 254096 DEBUG oslo_concurrency.lockutils [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.182 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:28 np0005535469 nova_compute[254092]: 2025-11-25 16:47:28.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:47:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:47:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 61 op/s
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.769 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updating instance_info_cache with network_info: [{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.790 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Releasing lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.790 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance network_info: |[{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.790 254096 DEBUG oslo_concurrency.lockutils [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.791 254096 DEBUG nova.network.neutron [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Refreshing network info cache for port 2fd7f15a-e429-4b39-86da-980a7fbc785f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.795 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start _get_guest_xml network_info=[{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.800 254096 WARNING nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.807 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.807 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.814 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.815 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.815 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.815 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.816 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.816 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.816 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.818 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.818 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.821 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.908 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.908 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] No waiting events found dispatching network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 WARNING nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received unexpected event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Processing event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 WARNING nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Processing event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] No waiting events found dispatching network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 WARNING nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received unexpected event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.914 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.929 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.957 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089249.9483347, 73301044-3bad-4401-9e30-f009d417f662 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.959 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.963 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance spawned successfully.#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.963 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.968 254096 INFO nova.virt.libvirt.driver [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance spawned successfully.#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.968 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.981 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:29 np0005535469 nova_compute[254092]: 2025-11-25 16:47:29.989 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.011 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.011 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089249.9484584, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.016 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.020 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.021 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.021 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.022 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.022 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.023 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.027 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.027 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.027 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.028 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.028 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.029 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.033 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.037 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.071 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.187 254096 INFO nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 10.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.187 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.232 254096 INFO nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 15.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.233 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019731349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.307 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.331 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.335 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.371 254096 INFO nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 11.99 seconds to build instance.#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.378 254096 INFO nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 16.47 seconds to build instance.#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.499 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.518 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854414798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.858 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.860 254096 DEBUG nova.virt.libvirt.vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-349447593',display_name='tempest-ServerPasswordTestJSON-server-349447593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-349447593',id=91,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a4c85f6be5040518f229e3e2c1c39ae',ramdisk_id='',reservation_id='r-gsqeuqfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1941740454',owner_user_name='tempest-ServerPasswordTest
JSON-1941740454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:26Z,user_data=None,user_id='3ccd27eb10a8431bbd43519a883a3970',uuid=1ccf7cd6-cf8d-400e-820e-940108160fa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.860 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converting VIF {"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.861 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.863 254096 DEBUG nova.objects.instance [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ccf7cd6-cf8d-400e-820e-940108160fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.879 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <uuid>1ccf7cd6-cf8d-400e-820e-940108160fa8</uuid>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <name>instance-0000005b</name>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerPasswordTestJSON-server-349447593</nova:name>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:47:29</nova:creationTime>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:user uuid="3ccd27eb10a8431bbd43519a883a3970">tempest-ServerPasswordTestJSON-1941740454-project-member</nova:user>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:project uuid="5a4c85f6be5040518f229e3e2c1c39ae">tempest-ServerPasswordTestJSON-1941740454</nova:project>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <nova:port uuid="2fd7f15a-e429-4b39-86da-980a7fbc785f">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <entry name="serial">1ccf7cd6-cf8d-400e-820e-940108160fa8</entry>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <entry name="uuid">1ccf7cd6-cf8d-400e-820e-940108160fa8</entry>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ccf7cd6-cf8d-400e-820e-940108160fa8_disk">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:14:46:44"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <target dev="tap2fd7f15a-e4"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/console.log" append="off"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:47:30 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:47:30 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:47:30 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:47:30 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.881 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Preparing to wait for external event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.881 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.882 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.882 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.883 254096 DEBUG nova.virt.libvirt.vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-349447593',display_name='tempest-ServerPasswordTestJSON-server-349447593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-349447593',id=91,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a4c85f6be5040518f229e3e2c1c39ae',ramdisk_id='',reservation_id='r-gsqeuqfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1941740454',owner_user_name='tempest-ServerPa
sswordTestJSON-1941740454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:26Z,user_data=None,user_id='3ccd27eb10a8431bbd43519a883a3970',uuid=1ccf7cd6-cf8d-400e-820e-940108160fa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.883 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converting VIF {"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.884 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.884 254096 DEBUG os_vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.885 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.886 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.890 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fd7f15a-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.890 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fd7f15a-e4, col_values=(('external_ids', {'iface-id': '2fd7f15a-e429-4b39-86da-980a7fbc785f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:46:44', 'vm-uuid': '1ccf7cd6-cf8d-400e-820e-940108160fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:30 np0005535469 NetworkManager[48891]: <info>  [1764089250.8937] manager: (tap2fd7f15a-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.905 254096 INFO os_vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4')#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.953 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.954 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.954 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] No VIF found with MAC fa:16:3e:14:46:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.955 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Using config drive#033[00m
Nov 25 11:47:30 np0005535469 nova_compute[254092]: 2025-11-25 16:47:30.979 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.398 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.399 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.416 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.453 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Creating config drive at /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.457 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpawx_bbfz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.511 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.512 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.518 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.518 254096 INFO nova.compute.claims [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.592 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpawx_bbfz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.616 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.619 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 306 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.755 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.792 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.793 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deleting local config drive /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config because it was imported into RBD.
Nov 25 11:47:31 np0005535469 NetworkManager[48891]: <info>  [1764089251.8537] manager: (tap2fd7f15a-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Nov 25 11:47:31 np0005535469 kernel: tap2fd7f15a-e4: entered promiscuous mode
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:31Z|00859|binding|INFO|Claiming lport 2fd7f15a-e429-4b39-86da-980a7fbc785f for this chassis.
Nov 25 11:47:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:31Z|00860|binding|INFO|2fd7f15a-e429-4b39-86da-980a7fbc785f: Claiming fa:16:3e:14:46:44 10.100.0.5
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.875 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:46:44 10.100.0.5'], port_security=['fa:16:3e:14:46:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1ccf7cd6-cf8d-400e-820e-940108160fa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edeacdb8-47e8-4402-a14f-718b48aff73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a4c85f6be5040518f229e3e2c1c39ae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fa0399d-57fa-4e1b-a9bf-664935612bc3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=503adfaf-62d4-482e-ab7f-70af0baad006, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2fd7f15a-e429-4b39-86da-980a7fbc785f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.876 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2fd7f15a-e429-4b39-86da-980a7fbc785f in datapath edeacdb8-47e8-4402-a14f-718b48aff73b bound to our chassis
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.882 254096 DEBUG nova.network.neutron [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updated VIF entry in instance network info cache for port 2fd7f15a-e429-4b39-86da-980a7fbc785f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.883 254096 DEBUG nova.network.neutron [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updating instance_info_cache with network_info: [{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.888 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network edeacdb8-47e8-4402-a14f-718b48aff73b
Nov 25 11:47:31 np0005535469 systemd-udevd[346985]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a11560a6-66eb-49c2-bd20-77bad6e5470a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.901 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapedeacdb8-41 in ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.904 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapedeacdb8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5498a44d-40af-4ab5-a8f7-e44166d1d9f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.907 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f2cb0-3575-4f28-ac4c-d9f87ebe982c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:31 np0005535469 systemd-machined[216343]: New machine qemu-112-instance-0000005b.
Nov 25 11:47:31 np0005535469 NetworkManager[48891]: <info>  [1764089251.9112] device (tap2fd7f15a-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:47:31 np0005535469 NetworkManager[48891]: <info>  [1764089251.9123] device (tap2fd7f15a-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:47:31 np0005535469 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.926 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[eb60a2d5-121a-4e89-b1c8-80ce771ff753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.935 254096 DEBUG oslo_concurrency.lockutils [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfa3f50-0a53-4e62-9566-ea30f453c780]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:31Z|00861|binding|INFO|Setting lport 2fd7f15a-e429-4b39-86da-980a7fbc785f ovn-installed in OVS
Nov 25 11:47:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:31Z|00862|binding|INFO|Setting lport 2fd7f15a-e429-4b39-86da-980a7fbc785f up in Southbound
Nov 25 11:47:31 np0005535469 nova_compute[254092]: 2025-11-25 16:47:31.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.989 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e592fc0-6bfb-4fb8-977e-225b545be6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:31 np0005535469 NetworkManager[48891]: <info>  [1764089251.9964] manager: (tapedeacdb8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Nov 25 11:47:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04499e67-2ebd-4579-8f6a-310a3acc28c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.033 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[94ef8cbb-5510-4142-b499-5f3a85b1f9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.036 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2a67c467-1840-43cf-b271-66a8b603e574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 NetworkManager[48891]: <info>  [1764089252.0601] device (tapedeacdb8-40): carrier: link connected
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.067 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc148e9-a6ef-4cf6-a51f-26398b703caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.084 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[395a1941-ab27-4c51-b255-123920c81658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedeacdb8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:f1:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570965, 'reachable_time': 21580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347028, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.104 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c72cdac-09e3-44ac-9c0a-f0f5569fad3a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:f13c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570965, 'tstamp': 570965}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347029, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.112 254096 DEBUG nova.compute.manager [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.113 254096 DEBUG oslo_concurrency.lockutils [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.113 254096 DEBUG oslo_concurrency.lockutils [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.114 254096 DEBUG oslo_concurrency.lockutils [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.114 254096 DEBUG nova.compute.manager [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Processing event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[22d30857-804e-49a8-ab26-0f29f3682073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedeacdb8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:f1:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570965, 'reachable_time': 21580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347030, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.154 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5173667a-788e-47cb-8626-87d5ca9358ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.223 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[909d66cc-3851-4c59-8890-c2237c4ec5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.224 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedeacdb8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.224 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.225 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedeacdb8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:32 np0005535469 NetworkManager[48891]: <info>  [1764089252.2271] manager: (tapedeacdb8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Nov 25 11:47:32 np0005535469 kernel: tapedeacdb8-40: entered promiscuous mode
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.232 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapedeacdb8-40, col_values=(('external_ids', {'iface-id': '0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:32 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:32Z|00863|binding|INFO|Releasing lport 0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159 from this chassis (sb_readonly=0)
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.250 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/edeacdb8-47e8-4402-a14f-718b48aff73b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/edeacdb8-47e8-4402-a14f-718b48aff73b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.251 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2e714d-1743-4f37-80cc-e97ea4caa6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.252 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-edeacdb8-47e8-4402-a14f-718b48aff73b
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/edeacdb8-47e8-4402-a14f-718b48aff73b.pid.haproxy
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID edeacdb8-47e8-4402-a14f-718b48aff73b
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.252 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'env', 'PROCESS_TAG=haproxy-edeacdb8-47e8-4402-a14f-718b48aff73b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/edeacdb8-47e8-4402-a14f-718b48aff73b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:47:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163402489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.285 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.296 254096 DEBUG nova.compute.provider_tree [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.316 254096 DEBUG nova.scheduler.client.report [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.338 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.339 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.383 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.384 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.399 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.414 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:47:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.488 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.506 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.507 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.507 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Creating image(s)#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.532 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.589 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.619 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.623 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.661 254096 DEBUG nova.policy [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.694 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.695 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089252.6936932, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.695 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.700 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:32 np0005535469 podman[347161]: 2025-11-25 16:47:32.712406673 +0000 UTC m=+0.052069166 container create cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.712 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.713 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.717 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.717 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.718 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.746 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.757 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4117640c-3ae9-4568-9034-7a7612ac43fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:32 np0005535469 systemd[1]: Started libpod-conmon-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8.scope.
Nov 25 11:47:32 np0005535469 podman[347161]: 2025-11-25 16:47:32.685146112 +0000 UTC m=+0.024808625 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:47:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:47:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3843f8da4445da577fb4ed2371003e380b705a4c191dcb2bd09c10a7e13d22ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.801 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance spawned successfully.#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.801 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:32 np0005535469 podman[347161]: 2025-11-25 16:47:32.819522594 +0000 UTC m=+0.159185097 container init cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:47:32 np0005535469 podman[347161]: 2025-11-25 16:47:32.826740609 +0000 UTC m=+0.166403102 container start cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.831 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.831 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089252.6951797, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.831 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.839 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.840 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.840 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.840 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.841 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.841 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:32 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : New worker (347218) forked
Nov 25 11:47:32 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : Loading success.
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.865 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.871 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089252.699573, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.872 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.893 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.900 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.905 254096 INFO nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 6.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.905 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.923 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:32 np0005535469 nova_compute[254092]: 2025-11-25 16:47:32.988 254096 INFO nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 7.84 seconds to build instance.#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.015 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.169 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4117640c-3ae9-4568-9034-7a7612ac43fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.241 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.374 254096 DEBUG nova.objects.instance [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 4117640c-3ae9-4568-9034-7a7612ac43fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.384 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.385 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Ensure instance console log exists: /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.385 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.386 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.386 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:33 np0005535469 nova_compute[254092]: 2025-11-25 16:47:33.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 306 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.536 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Successfully created port: c9e57355-8fcc-40ec-ada3-03c6d0147098 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.607 254096 DEBUG nova.compute.manager [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG oslo_concurrency.lockutils [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG oslo_concurrency.lockutils [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG oslo_concurrency.lockutils [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG nova.compute.manager [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] No waiting events found dispatching network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.609 254096 WARNING nova.compute.manager [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received unexpected event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f for instance with vm_state active and task_state None.#033[00m
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:34 np0005535469 NetworkManager[48891]: <info>  [1764089254.6640] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Nov 25 11:47:34 np0005535469 NetworkManager[48891]: <info>  [1764089254.6651] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00864|binding|INFO|Releasing lport 0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159 from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00865|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00866|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00867|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00868|binding|INFO|Releasing lport 0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159 from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00869|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00870|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:34Z|00871|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 11:47:34 np0005535469 nova_compute[254092]: 2025-11-25 16:47:34.734 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 342 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.2 MiB/s wr, 314 op/s
Nov 25 11:47:35 np0005535469 nova_compute[254092]: 2025-11-25 16:47:35.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.158 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.160 254096 INFO nova.compute.manager [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Terminating instance#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.161 254096 DEBUG nova.compute.manager [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:47:36 np0005535469 kernel: tap2fd7f15a-e4 (unregistering): left promiscuous mode
Nov 25 11:47:36 np0005535469 NetworkManager[48891]: <info>  [1764089256.2077] device (tap2fd7f15a-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:36Z|00872|binding|INFO|Releasing lport 2fd7f15a-e429-4b39-86da-980a7fbc785f from this chassis (sb_readonly=0)
Nov 25 11:47:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:36Z|00873|binding|INFO|Setting lport 2fd7f15a-e429-4b39-86da-980a7fbc785f down in Southbound
Nov 25 11:47:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:36Z|00874|binding|INFO|Removing iface tap2fd7f15a-e4 ovn-installed in OVS
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.226 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:46:44 10.100.0.5'], port_security=['fa:16:3e:14:46:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1ccf7cd6-cf8d-400e-820e-940108160fa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edeacdb8-47e8-4402-a14f-718b48aff73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a4c85f6be5040518f229e3e2c1c39ae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fa0399d-57fa-4e1b-a9bf-664935612bc3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=503adfaf-62d4-482e-ab7f-70af0baad006, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2fd7f15a-e429-4b39-86da-980a7fbc785f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.227 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2fd7f15a-e429-4b39-86da-980a7fbc785f in datapath edeacdb8-47e8-4402-a14f-718b48aff73b unbound from our chassis#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.229 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network edeacdb8-47e8-4402-a14f-718b48aff73b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7146a193-0319-456c-8f4f-b650aa486b0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.230 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b namespace which is not needed anymore#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 25 11:47:36 np0005535469 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 4.114s CPU time.
Nov 25 11:47:36 np0005535469 systemd-machined[216343]: Machine qemu-112-instance-0000005b terminated.
Nov 25 11:47:36 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : haproxy version is 2.8.14-c23fe91
Nov 25 11:47:36 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : path to executable is /usr/sbin/haproxy
Nov 25 11:47:36 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [WARNING]  (347202) : Exiting Master process...
Nov 25 11:47:36 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [ALERT]    (347202) : Current worker (347218) exited with code 143 (Terminated)
Nov 25 11:47:36 np0005535469 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [WARNING]  (347202) : All workers exited. Exiting... (0)
Nov 25 11:47:36 np0005535469 systemd[1]: libpod-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8.scope: Deactivated successfully.
Nov 25 11:47:36 np0005535469 podman[347325]: 2025-11-25 16:47:36.367246643 +0000 UTC m=+0.046028921 container died cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.395 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance destroyed successfully.#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.396 254096 DEBUG nova.objects.instance [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lazy-loading 'resources' on Instance uuid 1ccf7cd6-cf8d-400e-820e-940108160fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.408 254096 DEBUG nova.virt.libvirt.vif [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-349447593',display_name='tempest-ServerPasswordTestJSON-server-349447593',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-349447593',id=91,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a4c85f6be5040518f229e3e2c1c39ae',ramdisk_id='',reservation_id='r-gsqeuqfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1941740454',owner_user_name='tempest-ServerPasswordTestJSON-1941740454-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:35Z,user_data=None,user_id='3ccd27eb10a8431bbd43519a883a3970',uuid=1ccf7cd6-cf8d-400e-820e-940108160fa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.409 254096 DEBUG nova.network.os_vif_util [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converting VIF {"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.410 254096 DEBUG nova.network.os_vif_util [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8-userdata-shm.mount: Deactivated successfully.
Nov 25 11:47:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3843f8da4445da577fb4ed2371003e380b705a4c191dcb2bd09c10a7e13d22ea-merged.mount: Deactivated successfully.
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.411 254096 DEBUG os_vif [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.424 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.425 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fd7f15a-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:36 np0005535469 podman[347325]: 2025-11-25 16:47:36.426034991 +0000 UTC m=+0.104817249 container cleanup cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.435 254096 INFO os_vif [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4')#033[00m
Nov 25 11:47:36 np0005535469 systemd[1]: libpod-conmon-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8.scope: Deactivated successfully.
Nov 25 11:47:36 np0005535469 podman[347362]: 2025-11-25 16:47:36.51506271 +0000 UTC m=+0.053164345 container remove cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.534 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10ca454d-4049-46c8-9555-fa008212397c]: (4, ('Tue Nov 25 04:47:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b (cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8)\ncf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8\nTue Nov 25 04:47:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b (cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8)\ncf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.536 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7a291d-84b7-4337-a416-9e2b4ace8db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.538 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedeacdb8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 kernel: tapedeacdb8-40: left promiscuous mode
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.565 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[011b0dea-0b64-4c14-a71f-ea9ebfc9fe01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.579 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8999fbf-0d2a-43e8-9046-5317abb4c31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88318d1d-b002-4d0e-90f5-c8236bd5bc47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fe09285d-2a00-45e7-b8c9-cd3f8ac872ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570957, 'reachable_time': 33070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347394, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 systemd[1]: run-netns-ovnmeta\x2dedeacdb8\x2d47e8\x2d4402\x2da14f\x2d718b48aff73b.mount: Deactivated successfully.
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.600 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:47:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.600 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c7b4fe-38d3-4c7b-9498-41c86b2d1e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.680 254096 DEBUG nova.compute.manager [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.680 254096 DEBUG nova.compute.manager [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing instance network info cache due to event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.681 254096 DEBUG oslo_concurrency.lockutils [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.681 254096 DEBUG oslo_concurrency.lockutils [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.682 254096 DEBUG nova.network.neutron [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.868 254096 INFO nova.virt.libvirt.driver [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deleting instance files /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8_del#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.870 254096 INFO nova.virt.libvirt.driver [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deletion of /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8_del complete#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.910 254096 INFO nova.compute.manager [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.911 254096 DEBUG oslo.service.loopingcall [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.911 254096 DEBUG nova.compute.manager [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:47:36 np0005535469 nova_compute[254092]: 2025-11-25 16:47:36.912 254096 DEBUG nova.network.neutron [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:47:37 np0005535469 nova_compute[254092]: 2025-11-25 16:47:37.395 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Successfully updated port: c9e57355-8fcc-40ec-ada3-03c6d0147098 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:47:37 np0005535469 nova_compute[254092]: 2025-11-25 16:47:37.411 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:37 np0005535469 nova_compute[254092]: 2025-11-25 16:47:37.412 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:37 np0005535469 nova_compute[254092]: 2025-11-25 16:47:37.412 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:37 np0005535469 nova_compute[254092]: 2025-11-25 16:47:37.689 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:47:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 353 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 332 op/s
Nov 25 11:47:38 np0005535469 nova_compute[254092]: 2025-11-25 16:47:38.406 254096 DEBUG nova.network.neutron [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:38 np0005535469 nova_compute[254092]: 2025-11-25 16:47:38.426 254096 INFO nova.compute.manager [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 1.51 seconds to deallocate network for instance.#033[00m
Nov 25 11:47:38 np0005535469 nova_compute[254092]: 2025-11-25 16:47:38.479 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:38 np0005535469 nova_compute[254092]: 2025-11-25 16:47:38.480 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:38 np0005535469 nova_compute[254092]: 2025-11-25 16:47:38.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:38 np0005535469 nova_compute[254092]: 2025-11-25 16:47:38.598 254096 DEBUG oslo_concurrency.processutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.014 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-unplugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.015 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.015 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.016 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.016 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] No waiting events found dispatching network-vif-unplugged-2fd7f15a-e429-4b39-86da-980a7fbc785f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.016 254096 WARNING nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received unexpected event network-vif-unplugged-2fd7f15a-e429-4b39-86da-980a7fbc785f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.017 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-changed-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.017 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Refreshing instance network info cache due to event network-changed-c9e57355-8fcc-40ec-ada3-03c6d0147098. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.018 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408766462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.038 254096 DEBUG oslo_concurrency.processutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.045 254096 DEBUG nova.compute.provider_tree [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.062 254096 DEBUG nova.scheduler.client.report [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.085 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.105 254096 DEBUG nova.network.neutron [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated VIF entry in instance network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.106 254096 DEBUG nova.network.neutron [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.109 254096 INFO nova.scheduler.client.report [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Deleted allocations for instance 1ccf7cd6-cf8d-400e-820e-940108160fa8#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.122 254096 DEBUG oslo_concurrency.lockutils [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.169 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.223 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updating instance_info_cache with network_info: [{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.249 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.251 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance network_info: |[{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.252 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.253 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Refreshing network info cache for port c9e57355-8fcc-40ec-ada3-03c6d0147098 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.257 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start _get_guest_xml network_info=[{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.262 254096 WARNING nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.267 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.267 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.279 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.280 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.280 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.281 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.282 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.282 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.282 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.283 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.284 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.284 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.285 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.286 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.286 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.287 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.291 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 353 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 329 op/s
Nov 25 11:47:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669675625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.850 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.878 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:39 np0005535469 nova_compute[254092]: 2025-11-25 16:47:39.881 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:39Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:e5:4f 10.100.0.14
Nov 25 11:47:39 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:39Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:e5:4f 10.100.0.14
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:47:40
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:47:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:47:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664227085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.322 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.325 254096 DEBUG nova.virt.libvirt.vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=92,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-lutj4rar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:32Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4117640c-3ae9-4568-9034-7a7612ac43fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.325 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.326 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.327 254096 DEBUG nova.objects.instance [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4117640c-3ae9-4568-9034-7a7612ac43fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.340 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <uuid>4117640c-3ae9-4568-9034-7a7612ac43fe</uuid>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <name>instance-0000005c</name>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-694487906</nova:name>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:47:39</nova:creationTime>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <nova:port uuid="c9e57355-8fcc-40ec-ada3-03c6d0147098">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <entry name="serial">4117640c-3ae9-4568-9034-7a7612ac43fe</entry>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <entry name="uuid">4117640c-3ae9-4568-9034-7a7612ac43fe</entry>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4117640c-3ae9-4568-9034-7a7612ac43fe_disk">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f3:93:e6"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <target dev="tapc9e57355-8f"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/console.log" append="off"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:47:40 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:47:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:47:40 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:47:40 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.340 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Preparing to wait for external event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.341 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.341 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.341 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.342 254096 DEBUG nova.virt.libvirt.vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=92,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-lutj4rar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:32Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4117640c-3ae9-4568-9034-7a7612ac43fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.342 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.342 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.343 254096 DEBUG os_vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.344 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.344 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.346 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9e57355-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.347 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9e57355-8f, col_values=(('external_ids', {'iface-id': 'c9e57355-8fcc-40ec-ada3-03c6d0147098', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:93:e6', 'vm-uuid': '4117640c-3ae9-4568-9034-7a7612ac43fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:40 np0005535469 NetworkManager[48891]: <info>  [1764089260.3488] manager: (tapc9e57355-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.355 254096 INFO os_vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f')
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.406 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.407 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.407 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:f3:93:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.408 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Using config drive
Nov 25 11:47:40 np0005535469 nova_compute[254092]: 2025-11-25 16:47:40.434 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:47:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.259 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Creating config drive at /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.264 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnc27hjsb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.401 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnc27hjsb" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.423 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.427 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:47:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 333 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.6 MiB/s wr, 400 op/s
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.822 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.826 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deleting local config drive /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config because it was imported into RBD.
Nov 25 11:47:41 np0005535469 kernel: tapc9e57355-8f: entered promiscuous mode
Nov 25 11:47:41 np0005535469 NetworkManager[48891]: <info>  [1764089261.8961] manager: (tapc9e57355-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:41Z|00875|binding|INFO|Claiming lport c9e57355-8fcc-40ec-ada3-03c6d0147098 for this chassis.
Nov 25 11:47:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:41Z|00876|binding|INFO|c9e57355-8fcc-40ec-ada3-03c6d0147098: Claiming fa:16:3e:f3:93:e6 10.100.0.5
Nov 25 11:47:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.904 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:93:e6 10.100.0.5'], port_security=['fa:16:3e:f3:93:e6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4117640c-3ae9-4568-9034-7a7612ac43fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c9e57355-8fcc-40ec-ada3-03c6d0147098) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.906 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c9e57355-8fcc-40ec-ada3-03c6d0147098 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 11:47:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.909 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 11:47:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:41Z|00877|binding|INFO|Setting lport c9e57355-8fcc-40ec-ada3-03c6d0147098 ovn-installed in OVS
Nov 25 11:47:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:41Z|00878|binding|INFO|Setting lport c9e57355-8fcc-40ec-ada3-03c6d0147098 up in Southbound
Nov 25 11:47:41 np0005535469 nova_compute[254092]: 2025-11-25 16:47:41.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:41 np0005535469 systemd-udevd[347558]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:47:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.939 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b025d7a-7293-479d-931a-007668d43e1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:41 np0005535469 systemd-machined[216343]: New machine qemu-113-instance-0000005c.
Nov 25 11:47:41 np0005535469 NetworkManager[48891]: <info>  [1764089261.9512] device (tapc9e57355-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:47:41 np0005535469 NetworkManager[48891]: <info>  [1764089261.9520] device (tapc9e57355-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:47:41 np0005535469 systemd[1]: Started Virtual Machine qemu-113-instance-0000005c.
Nov 25 11:47:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac32e335-a5eb-4ef5-b54f-08035a9f77c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.986 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1478573-3d38-456e-98d2-910399c02149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.022 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41876e0d-1e25-4342-8f97-82577830dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aae22ef5-42a1-40e6-aaa2-e960fcca1624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347571, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.068 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0323d4-c3ef-4ac8-b9a4-5b02fdf2b248]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347573, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347573, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.070 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:47:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.306 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updated VIF entry in instance network info cache for port c9e57355-8fcc-40ec-ada3-03c6d0147098. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.307 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updating instance_info_cache with network_info: [{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.324 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.325 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.325 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.326 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.326 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.326 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] No waiting events found dispatching network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 WARNING nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received unexpected event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f for instance with vm_state deleted and task_state None.
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-deleted-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing instance network info cache due to event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.328 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.328 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.408 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089262.4076002, 4117640c-3ae9-4568-9034-7a7612ac43fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.409 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Started (Lifecycle Event)#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.424 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.427 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089262.408412, 4117640c-3ae9-4568-9034-7a7612ac43fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.427 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.446 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.449 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.467 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.700 254096 DEBUG nova.compute.manager [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.700 254096 DEBUG oslo_concurrency.lockutils [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG oslo_concurrency.lockutils [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG oslo_concurrency.lockutils [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG nova.compute.manager [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Processing event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.705 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089262.7048876, 4117640c-3ae9-4568-9034-7a7612ac43fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.705 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.706 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.712 254096 INFO nova.virt.libvirt.driver [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance spawned successfully.#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.712 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.726 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.734 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.739 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.740 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.740 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.741 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.741 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.742 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.770 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.820 254096 INFO nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 10.31 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.821 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.883 254096 INFO nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 11.40 seconds to build instance.#033[00m
Nov 25 11:47:42 np0005535469 nova_compute[254092]: 2025-11-25 16:47:42.896 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:43 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 11:47:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:43Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 11:47:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:43Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 11:47:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:43Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:5b:d2 10.100.0.4
Nov 25 11:47:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:43Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:5b:d2 10.100.0.4
Nov 25 11:47:43 np0005535469 nova_compute[254092]: 2025-11-25 16:47:43.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 333 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 269 op/s
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.274 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updated VIF entry in instance network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.275 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.500 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.801 254096 DEBUG nova.compute.manager [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.801 254096 DEBUG oslo_concurrency.lockutils [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.801 254096 DEBUG oslo_concurrency.lockutils [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.802 254096 DEBUG oslo_concurrency.lockutils [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.802 254096 DEBUG nova.compute.manager [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] No waiting events found dispatching network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:44 np0005535469 nova_compute[254092]: 2025-11-25 16:47:44.802 254096 WARNING nova.compute.manager [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received unexpected event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:47:45 np0005535469 nova_compute[254092]: 2025-11-25 16:47:45.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:45 np0005535469 podman[347617]: 2025-11-25 16:47:45.651453293 +0000 UTC m=+0.058011027 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 11:47:45 np0005535469 podman[347616]: 2025-11-25 16:47:45.672999939 +0000 UTC m=+0.081935217 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:47:45 np0005535469 podman[347618]: 2025-11-25 16:47:45.67999904 +0000 UTC m=+0.082889364 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller)
Nov 25 11:47:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 388 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.6 MiB/s wr, 441 op/s
Nov 25 11:47:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:46Z|00879|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 11:47:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:46Z|00880|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:47:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:46Z|00881|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 11:47:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.392 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.392 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.393 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.393 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.394 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.395 254096 INFO nova.compute.manager [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Terminating instance#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.396 254096 DEBUG nova.compute.manager [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:47:46 np0005535469 kernel: tapc9e57355-8f (unregistering): left promiscuous mode
Nov 25 11:47:46 np0005535469 NetworkManager[48891]: <info>  [1764089266.4394] device (tapc9e57355-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:46Z|00882|binding|INFO|Releasing lport c9e57355-8fcc-40ec-ada3-03c6d0147098 from this chassis (sb_readonly=0)
Nov 25 11:47:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:46Z|00883|binding|INFO|Setting lport c9e57355-8fcc-40ec-ada3-03c6d0147098 down in Southbound
Nov 25 11:47:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:46Z|00884|binding|INFO|Removing iface tapc9e57355-8f ovn-installed in OVS
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.452 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:93:e6 10.100.0.5'], port_security=['fa:16:3e:f3:93:e6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4117640c-3ae9-4568-9034-7a7612ac43fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c9e57355-8fcc-40ec-ada3-03c6d0147098) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.453 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c9e57355-8fcc-40ec-ada3-03c6d0147098 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.472 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05f7b275-97a0-4a2a-a667-fddb06a22732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.499 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30120b8e-3ac1-4d69-b23f-843bc7d8acee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.501 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[82969566-53fd-4c30-a4b7-f9a51ff37403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:46 np0005535469 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 25 11:47:46 np0005535469 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Consumed 4.057s CPU time.
Nov 25 11:47:46 np0005535469 systemd-machined[216343]: Machine qemu-113-instance-0000005c terminated.
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.528 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3593018-3b4c-4f91-8356-5657cfcc2c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ff98ab-8ba6-4465-96e2-9b40d419751d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347691, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.562 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f52d542e-168a-4eff-8820-a2e1b081139d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347692, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347692, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.564 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.571 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.634 254096 INFO nova.virt.libvirt.driver [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance destroyed successfully.#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.635 254096 DEBUG nova.objects.instance [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 4117640c-3ae9-4568-9034-7a7612ac43fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.645 254096 DEBUG nova.virt.libvirt.vif [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=92,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-lutj4rar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:42Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4117640c-3ae9-4568-9034-7a7612ac43fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.646 254096 DEBUG nova.network.os_vif_util [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.646 254096 DEBUG nova.network.os_vif_util [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.647 254096 DEBUG os_vif [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.648 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9e57355-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:46 np0005535469 nova_compute[254092]: 2025-11-25 16:47:46.654 254096 INFO os_vif [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f')#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.046 254096 INFO nova.virt.libvirt.driver [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deleting instance files /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe_del#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.046 254096 INFO nova.virt.libvirt.driver [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deletion of /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe_del complete#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.097 254096 INFO nova.compute.manager [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.098 254096 DEBUG oslo.service.loopingcall [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.098 254096 DEBUG nova.compute.manager [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.098 254096 DEBUG nova.network.neutron [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.135 254096 DEBUG nova.compute.manager [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-unplugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.135 254096 DEBUG oslo_concurrency.lockutils [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG oslo_concurrency.lockutils [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG oslo_concurrency.lockutils [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG nova.compute.manager [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] No waiting events found dispatching network-vif-unplugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:47 np0005535469 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG nova.compute.manager [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-unplugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:47:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 405 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.8 MiB/s wr, 324 op/s
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.214 254096 DEBUG nova.network.neutron [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.229 254096 INFO nova.compute.manager [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 1.13 seconds to deallocate network for instance.#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.267 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.268 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.357 254096 DEBUG oslo_concurrency.processutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903473240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.815 254096 DEBUG oslo_concurrency.processutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.821 254096 DEBUG nova.compute.provider_tree [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.837 254096 DEBUG nova.scheduler.client.report [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.861 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.884 254096 INFO nova.scheduler.client.report [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 4117640c-3ae9-4568-9034-7a7612ac43fe#033[00m
Nov 25 11:47:48 np0005535469 nova_compute[254092]: 2025-11-25 16:47:48.948 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.307 254096 DEBUG nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.307 254096 DEBUG oslo_concurrency.lockutils [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG oslo_concurrency.lockutils [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG oslo_concurrency.lockutils [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] No waiting events found dispatching network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 WARNING nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received unexpected event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-deleted-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 405 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 292 op/s
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.777 254096 INFO nova.compute.manager [None req-17d889af-6922-4595-b1ee-7af392aa5130 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Get console output#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.781 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.815 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.817 254096 INFO nova.compute.manager [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Terminating instance#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.818 254096 DEBUG nova.compute.manager [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:47:49 np0005535469 kernel: tapbe2a1b3b-f8 (unregistering): left promiscuous mode
Nov 25 11:47:49 np0005535469 NetworkManager[48891]: <info>  [1764089269.8688] device (tapbe2a1b3b-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:47:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:49Z|00885|binding|INFO|Releasing lport be2a1b3b-f8a0-4a67-9582-54b753171490 from this chassis (sb_readonly=0)
Nov 25 11:47:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:49Z|00886|binding|INFO|Setting lport be2a1b3b-f8a0-4a67-9582-54b753171490 down in Southbound
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:49Z|00887|binding|INFO|Removing iface tapbe2a1b3b-f8 ovn-installed in OVS
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.884 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:e5:4f 10.100.0.14'], port_security=['fa:16:3e:8a:e5:4f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6affd696-c15d-4401-8512-2aabbf55fd4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=be2a1b3b-f8a0-4a67-9582-54b753171490) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.885 163338 INFO neutron.agent.ovn.metadata.agent [-] Port be2a1b3b-f8a0-4a67-9582-54b753171490 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.887 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.903 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d09a81ec-6661-40b8-8efd-81d90cea1e96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:49 np0005535469 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 25 11:47:49 np0005535469 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Consumed 13.932s CPU time.
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.930 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b81061-87b2-478d-b412-5177c2b221af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.932 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ceccc3-8b15-4c53-a503-e241f212d2b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:49 np0005535469 systemd-machined[216343]: Machine qemu-109-instance-00000058 terminated.
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.960 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7da502fd-a298-40d3-98b0-c7d84412ebcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.975 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a25dcfea-8762-40dc-8889-e76898aefe5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347757, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.991 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4dac64d-092d-458f-9fd3-72db059aa456]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347758, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347758, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.993 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:49 np0005535469 nova_compute[254092]: 2025-11-25 16:47:49.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:50.000 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.053 254096 INFO nova.virt.libvirt.driver [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance destroyed successfully.#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.054 254096 DEBUG nova.objects.instance [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 6affd696-c15d-4401-8512-2aabbf55fd4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.070 254096 DEBUG nova.virt.libvirt.vif [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=88,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-nafun8zg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:27Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=6affd696-c15d-4401-8512-2aabbf55fd4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.071 254096 DEBUG nova.network.os_vif_util [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.071 254096 DEBUG nova.network.os_vif_util [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.071 254096 DEBUG os_vif [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.073 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe2a1b3b-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.074 254096 INFO nova.compute.manager [None req-d6266c33-3e7d-44cc-8a05-e4d64e09237a 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Pausing#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.075 254096 DEBUG nova.objects.instance [None req-d6266c33-3e7d-44cc-8a05-e4d64e09237a 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.079 254096 INFO os_vif [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8')#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.107 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089270.1067798, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.107 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.109 254096 DEBUG nova.compute.manager [None req-d6266c33-3e7d-44cc-8a05-e4d64e09237a 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.133 254096 DEBUG nova.compute.manager [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-unplugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.134 254096 DEBUG oslo_concurrency.lockutils [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.134 254096 DEBUG oslo_concurrency.lockutils [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.135 254096 DEBUG oslo_concurrency.lockutils [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.135 254096 DEBUG nova.compute.manager [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] No waiting events found dispatching network-vif-unplugged-be2a1b3b-f8a0-4a67-9582-54b753171490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.135 254096 DEBUG nova.compute.manager [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-unplugged-be2a1b3b-f8a0-4a67-9582-54b753171490 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.140 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.145 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.173 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.400 254096 INFO nova.virt.libvirt.driver [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deleting instance files /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e_del#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.401 254096 INFO nova.virt.libvirt.driver [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deletion of /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e_del complete#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.463 254096 INFO nova.compute.manager [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.463 254096 DEBUG oslo.service.loopingcall [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.464 254096 DEBUG nova.compute.manager [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:47:50 np0005535469 nova_compute[254092]: 2025-11-25 16:47:50.464 254096 DEBUG nova.network.neutron [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:47:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003376475813147099 of space, bias 1.0, pg target 1.0129427439441296 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.393 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089256.392674, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.393 254096 INFO nova.compute.manager [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.420 254096 DEBUG nova.compute.manager [None req-2a6b9779-a31a-4c9b-a6d4-e4f44e66c97d - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 330 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 322 op/s
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.808 254096 DEBUG nova.network.neutron [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.821 254096 INFO nova.compute.manager [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.876 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.876 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.927 254096 DEBUG nova.compute.manager [req-c3788c05-5ef7-445a-9174-ba3376190625 req-a65030dc-3b73-42b1-8c75-2820a77ff0a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-deleted-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:51 np0005535469 nova_compute[254092]: 2025-11-25 16:47:51.978 254096 DEBUG oslo_concurrency.processutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG nova.compute.manager [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG oslo_concurrency.lockutils [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG oslo_concurrency.lockutils [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG oslo_concurrency.lockutils [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.221 254096 DEBUG nova.compute.manager [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] No waiting events found dispatching network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.221 254096 WARNING nova.compute.manager [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received unexpected event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:47:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301542853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.418 254096 DEBUG oslo_concurrency.processutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.424 254096 DEBUG nova.compute.provider_tree [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.445 254096 DEBUG nova.scheduler.client.report [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.464 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.493 254096 INFO nova.compute.manager [None req-8a9fe63d-526c-4d66-9358-b46f7e2afee0 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Get console output#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.495 254096 INFO nova.scheduler.client.report [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 6affd696-c15d-4401-8512-2aabbf55fd4e#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.500 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.561 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.627 254096 INFO nova.compute.manager [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Unpausing#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.630 254096 DEBUG nova.objects.instance [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.652 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089272.651494, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.652 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:47:52 np0005535469 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.657 254096 DEBUG nova.virt.libvirt.guest [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.657 254096 DEBUG nova.compute.manager [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.680 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.683 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:47:52 np0005535469 nova_compute[254092]: 2025-11-25 16:47:52.715 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 25 11:47:53 np0005535469 nova_compute[254092]: 2025-11-25 16:47:53.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:53 np0005535469 nova_compute[254092]: 2025-11-25 16:47:53.634 254096 DEBUG nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:53 np0005535469 nova_compute[254092]: 2025-11-25 16:47:53.664 254096 INFO nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] instance snapshotting#033[00m
Nov 25 11:47:53 np0005535469 nova_compute[254092]: 2025-11-25 16:47:53.665 254096 DEBUG nova.objects.instance [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 330 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 251 op/s
Nov 25 11:47:53 np0005535469 nova_compute[254092]: 2025-11-25 16:47:53.853 254096 INFO nova.virt.libvirt.driver [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning live snapshot process#033[00m
Nov 25 11:47:53 np0005535469 nova_compute[254092]: 2025-11-25 16:47:53.975 254096 DEBUG nova.virt.libvirt.imagebackend [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:47:54 np0005535469 nova_compute[254092]: 2025-11-25 16:47:54.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:54 np0005535469 nova_compute[254092]: 2025-11-25 16:47:54.249 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(38b8841a0f7c4e55aa84127e6a3fd187) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:47:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 25 11:47:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 25 11:47:54 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 25 11:47:54 np0005535469 nova_compute[254092]: 2025-11-25 16:47:54.909 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@38b8841a0f7c4e55aa84127e6a3fd187 to images/658944a7-ebd4-4546-999c-02701f55081a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.001 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/658944a7-ebd4-4546-999c-02701f55081a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.037 254096 INFO nova.compute.manager [None req-d8429df0-7aee-483f-81c6-d4c96fcf2272 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Get console output#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.041 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2743653227' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2743653227' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.501 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(38b8841a0f7c4e55aa84127e6a3fd187) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:47:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 283 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.836 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.837 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.859 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 25 11:47:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.915 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(658944a7-ebd4-4546-999c-02701f55081a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.971 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.972 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.981 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:47:55 np0005535469 nova_compute[254092]: 2025-11-25 16:47:55.982 254096 INFO nova.compute.claims [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:47:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.174 254096 DEBUG nova.compute.manager [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.175 254096 DEBUG nova.compute.manager [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing instance network info cache due to event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.175 254096 DEBUG oslo_concurrency.lockutils [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.176 254096 DEBUG oslo_concurrency.lockutils [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.176 254096 DEBUG nova.network.neutron [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.185 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.279 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.280 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.280 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.281 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.281 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.283 254096 INFO nova.compute.manager [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Terminating instance#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.284 254096 DEBUG nova.compute.manager [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:47:56 np0005535469 kernel: tap54bd7c02-9f (unregistering): left promiscuous mode
Nov 25 11:47:56 np0005535469 NetworkManager[48891]: <info>  [1764089276.3435] device (tap54bd7c02-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:47:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:56Z|00888|binding|INFO|Releasing lport 54bd7c02-9f22-4656-9514-7219e656dbef from this chassis (sb_readonly=0)
Nov 25 11:47:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:56Z|00889|binding|INFO|Setting lport 54bd7c02-9f22-4656-9514-7219e656dbef down in Southbound
Nov 25 11:47:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:47:56Z|00890|binding|INFO|Removing iface tap54bd7c02-9f ovn-installed in OVS
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.366 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:5b:d2 10.100.0.4'], port_security=['fa:16:3e:42:5b:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2122cb4e-4525-451f-a46f-184e4a72cb34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be38e015-3930-495b-9582-fe9707042e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8e904eb-f3d0-4bff-8be5-5af69a444c2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f101b358-01b6-416d-bcc6-f10ed8ec5155, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54bd7c02-9f22-4656-9514-7219e656dbef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.367 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54bd7c02-9f22-4656-9514-7219e656dbef in datapath be38e015-3930-495b-9582-fe9707042e20 unbound from our chassis#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.368 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be38e015-3930-495b-9582-fe9707042e20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9976314-d5e9-4252-8a90-7a99e978c8d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.370 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be38e015-3930-495b-9582-fe9707042e20 namespace which is not needed anymore#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 25 11:47:56 np0005535469 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 13.252s CPU time.
Nov 25 11:47:56 np0005535469 systemd-machined[216343]: Machine qemu-111-instance-0000005a terminated.
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.511 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.520 254096 INFO nova.virt.libvirt.driver [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance destroyed successfully.#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.520 254096 DEBUG nova.objects.instance [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.533 254096 DEBUG nova.virt.libvirt.vif [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399723727',display_name='tempest-TestNetworkAdvancedServerOps-server-1399723727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399723727',id=90,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnJ2dj3tQavSsgt3v0xzD62McsGR8kH7FVN3Mskcpal4JOU2s80ZUbXF/gFef079w4ZACdh3Ov4E4/XDFuKoso7mgUy6/r/VedNuEZjiR2unDQEIrd20/t0Y7CqF7ga+A==',key_name='tempest-TestNetworkAdvancedServerOps-714044061',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-yk420f19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:52Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2122cb4e-4525-451f-a46f-184e4a72cb34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.534 254096 DEBUG nova.network.os_vif_util [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.534 254096 DEBUG nova.network.os_vif_util [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.535 254096 DEBUG os_vif [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.537 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54bd7c02-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.543 254096 INFO os_vif [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f')#033[00m
Nov 25 11:47:56 np0005535469 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : haproxy version is 2.8.14-c23fe91
Nov 25 11:47:56 np0005535469 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : path to executable is /usr/sbin/haproxy
Nov 25 11:47:56 np0005535469 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [ALERT]    (346681) : Current worker (346697) exited with code 143 (Terminated)
Nov 25 11:47:56 np0005535469 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [WARNING]  (346681) : All workers exited. Exiting... (0)
Nov 25 11:47:56 np0005535469 systemd[1]: libpod-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7.scope: Deactivated successfully.
Nov 25 11:47:56 np0005535469 podman[347997]: 2025-11-25 16:47:56.556104259 +0000 UTC m=+0.070745533 container died 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:47:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7-userdata-shm.mount: Deactivated successfully.
Nov 25 11:47:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-838921d87160c6df4cb470730be2019fa42a93dba0b126d7b101a3d59de1d523-merged.mount: Deactivated successfully.
Nov 25 11:47:56 np0005535469 podman[347997]: 2025-11-25 16:47:56.627456329 +0000 UTC m=+0.142097573 container cleanup 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:47:56 np0005535469 systemd[1]: libpod-conmon-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7.scope: Deactivated successfully.
Nov 25 11:47:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038232724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.714 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.720 254096 DEBUG nova.compute.provider_tree [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.732 254096 DEBUG nova.scheduler.client.report [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:56 np0005535469 podman[348055]: 2025-11-25 16:47:56.745283431 +0000 UTC m=+0.092099934 container remove 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.752 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f73a904d-7840-4691-b417-dcf501ce6872]: (4, ('Tue Nov 25 04:47:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20 (92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7)\n92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7\nTue Nov 25 04:47:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20 (92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7)\n92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.754 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3158259-3f07-4dd0-8483-459d797cb5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.755 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe38e015-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 kernel: tapbe38e015-30: left promiscuous mode
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.762 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.763 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.784 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fed1e3-2782-49dd-a8b4-1cbb925134fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.798 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79dadec0-3305-4370-b3f9-f091e09f4d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[513976f3-ed37-418d-a18f-921227326563]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.820 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.820 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b4799a-7b36-4fd5-a10b-3c93c93e4195]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570385, 'reachable_time': 15437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348072, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 systemd[1]: run-netns-ovnmeta\x2dbe38e015\x2d3930\x2d495b\x2d9582\x2dfe9707042e20.mount: Deactivated successfully.
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.830 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be38e015-3930-495b-9582-fe9707042e20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:47:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.831 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9549fae0-775d-4030-9d87-060699653f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.842 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.861 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:47:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 25 11:47:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 25 11:47:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.933 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.934 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.935 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Creating image(s)#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.958 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:56 np0005535469 nova_compute[254092]: 2025-11-25 16:47:56.981 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.002 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.007 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.081 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.082 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.083 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.083 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.102 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.106 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7c679c82-4594-4519-a291-de41650ba66b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.146 254096 DEBUG nova.policy [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.193 254096 INFO nova.virt.libvirt.driver [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deleting instance files /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34_del#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.193 254096 INFO nova.virt.libvirt.driver [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deletion of /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34_del complete#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.249 254096 INFO nova.compute.manager [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.249 254096 DEBUG oslo.service.loopingcall [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.250 254096 DEBUG nova.compute.manager [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.250 254096 DEBUG nova.network.neutron [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG nova.compute.manager [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-unplugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG oslo_concurrency.lockutils [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG oslo_concurrency.lockutils [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG oslo_concurrency.lockutils [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.348 254096 DEBUG nova.compute.manager [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] No waiting events found dispatching network-vif-unplugged-54bd7c02-9f22-4656-9514-7219e656dbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.348 254096 DEBUG nova.compute.manager [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-unplugged-54bd7c02-9f22-4656-9514-7219e656dbef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.417 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7c679c82-4594-4519-a291-de41650ba66b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.472 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.554 254096 DEBUG nova.objects.instance [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 7c679c82-4594-4519-a291-de41650ba66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.569 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Ensure instance console log exists: /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:57 np0005535469 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 323 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.062 254096 DEBUG nova.network.neutron [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updated VIF entry in instance network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.063 254096 DEBUG nova.network.neutron [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.079 254096 DEBUG oslo_concurrency.lockutils [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.367 254096 INFO nova.virt.libvirt.driver [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.368 254096 INFO nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 4.69 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.532 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Successfully created port: 59b2ac13-fc64-408f-bbb6-977c064ac64a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:58 np0005535469 nova_compute[254092]: 2025-11-25 16:47:58.650 254096 DEBUG nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.105 254096 DEBUG nova.network.neutron [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.144 254096 DEBUG nova.compute.manager [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-deleted-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.145 254096 INFO nova.compute.manager [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Neutron deleted interface 54bd7c02-9f22-4656-9514-7219e656dbef; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.145 254096 DEBUG nova.network.neutron [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.206 254096 INFO nova.compute.manager [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 1.96 seconds to deallocate network for instance.#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.219 254096 DEBUG nova.compute.manager [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Detach interface failed, port_id=54bd7c02-9f22-4656-9514-7219e656dbef, reason: Instance 2122cb4e-4525-451f-a46f-184e4a72cb34 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.265 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.265 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.387 254096 DEBUG oslo_concurrency.processutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.431 254096 DEBUG nova.compute.manager [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.431 254096 DEBUG oslo_concurrency.lockutils [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.432 254096 DEBUG oslo_concurrency.lockutils [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.432 254096 DEBUG oslo_concurrency.lockutils [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.432 254096 DEBUG nova.compute.manager [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] No waiting events found dispatching network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.433 254096 WARNING nova.compute.manager [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received unexpected event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.571 254096 DEBUG nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.626 254096 INFO nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] instance snapshotting#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.628 254096 DEBUG nova.objects.instance [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.668 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Successfully updated port: 59b2ac13-fc64-408f-bbb6-977c064ac64a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.685 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.686 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.686 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:47:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 323 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 25 11:47:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:47:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958365030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.833 254096 DEBUG oslo_concurrency.processutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.838 254096 DEBUG nova.compute.provider_tree [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.852 254096 DEBUG nova.scheduler.client.report [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.874 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.878 254096 INFO nova.virt.libvirt.driver [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning live snapshot process#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.897 254096 INFO nova.scheduler.client.report [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 2122cb4e-4525-451f-a46f-184e4a72cb34#033[00m
Nov 25 11:47:59 np0005535469 nova_compute[254092]: 2025-11-25 16:47:59.960 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.053 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.060 254096 DEBUG nova.virt.libvirt.imagebackend [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.244 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(a9099c36e23a49a0929de2f63a1a0978) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.616 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updating instance_info_cache with network_info: [{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.633 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.633 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance network_info: |[{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.635 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start _get_guest_xml network_info=[{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.638 254096 WARNING nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.644 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.645 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.648 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.649 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.650 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.650 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.650 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.653 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:00 np0005535469 nova_compute[254092]: 2025-11-25 16:48:00.655 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/382977419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.126 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.146 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.150 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.197 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@a9099c36e23a49a0929de2f63a1a0978 to images/0e96adb9-b508-4139-a36d-294a9f197de1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.226 254096 DEBUG nova.compute.manager [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-changed-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.226 254096 DEBUG nova.compute.manager [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Refreshing instance network info cache due to event network-changed-59b2ac13-fc64-408f-bbb6-977c064ac64a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.227 254096 DEBUG oslo_concurrency.lockutils [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.227 254096 DEBUG oslo_concurrency.lockutils [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.227 254096 DEBUG nova.network.neutron [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Refreshing network info cache for port 59b2ac13-fc64-408f-bbb6-977c064ac64a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.288 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/0e96adb9-b508-4139-a36d-294a9f197de1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222757036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.629 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.631 254096 DEBUG nova.virt.libvirt.vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-824118161',display_name='tempest-ServersTestJSON-server-824118161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-824118161',id=93,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-razyaibd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:56Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=7c679c82-4594-4519-a291-de41650ba66b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.631 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.632 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.633 254096 DEBUG nova.objects.instance [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c679c82-4594-4519-a291-de41650ba66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.633 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089266.6316898, 4117640c-3ae9-4568-9034-7a7612ac43fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.634 254096 INFO nova.compute.manager [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.663 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <uuid>7c679c82-4594-4519-a291-de41650ba66b</uuid>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <name>instance-0000005d</name>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-824118161</nova:name>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:00</nova:creationTime>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <nova:port uuid="59b2ac13-fc64-408f-bbb6-977c064ac64a">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <entry name="serial">7c679c82-4594-4519-a291-de41650ba66b</entry>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <entry name="uuid">7c679c82-4594-4519-a291-de41650ba66b</entry>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7c679c82-4594-4519-a291-de41650ba66b_disk">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7c679c82-4594-4519-a291-de41650ba66b_disk.config">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d2:fa:3e"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <target dev="tap59b2ac13-fc"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/console.log" append="off"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:01 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:01 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Preparing to wait for external event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.665 254096 DEBUG nova.virt.libvirt.vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-824118161',display_name='tempest-ServersTestJSON-server-824118161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-824118161',id=93,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-razyaibd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:56Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=7c679c82-4594-4519-a291-de41650ba66b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.665 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.665 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.666 254096 DEBUG os_vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.668 254096 DEBUG nova.compute.manager [None req-10132a64-2d84-4845-a5fb-540d8d2668aa - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.670 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59b2ac13-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.670 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59b2ac13-fc, col_values=(('external_ids', {'iface-id': '59b2ac13-fc64-408f-bbb6-977c064ac64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:fa:3e', 'vm-uuid': '7c679c82-4594-4519-a291-de41650ba66b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:01 np0005535469 NetworkManager[48891]: <info>  [1764089281.6728] manager: (tap59b2ac13-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.679 254096 INFO os_vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc')#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.750 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.751 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.751 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:d2:fa:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.751 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Using config drive#033[00m
Nov 25 11:48:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 325 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 278 op/s
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.829 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:01 np0005535469 nova_compute[254092]: 2025-11-25 16:48:01.920 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(a9099c36e23a49a0929de2f63a1a0978) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:48:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 25 11:48:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 25 11:48:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.090 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(0e96adb9-b508-4139-a36d-294a9f197de1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.137 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.138 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.153 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.219 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.220 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.226 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.226 254096 INFO nova.compute.claims [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.369 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.413 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Creating config drive at /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.421 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9b8jsqum execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.562 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9b8jsqum" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.600 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.605 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config 7c679c82-4594-4519-a291-de41650ba66b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.767 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config 7c679c82-4594-4519-a291-de41650ba66b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.768 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deleting local config drive /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config because it was imported into RBD.#033[00m
Nov 25 11:48:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275781388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:02 np0005535469 kernel: tap59b2ac13-fc: entered promiscuous mode
Nov 25 11:48:02 np0005535469 NetworkManager[48891]: <info>  [1764089282.8141] manager: (tap59b2ac13-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Nov 25 11:48:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:02Z|00891|binding|INFO|Claiming lport 59b2ac13-fc64-408f-bbb6-977c064ac64a for this chassis.
Nov 25 11:48:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:02Z|00892|binding|INFO|59b2ac13-fc64-408f-bbb6-977c064ac64a: Claiming fa:16:3e:d2:fa:3e 10.100.0.10
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.814 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.821 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:fa:3e 10.100.0.10'], port_security=['fa:16:3e:d2:fa:3e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7c679c82-4594-4519-a291-de41650ba66b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=59b2ac13-fc64-408f-bbb6-977c064ac64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.823 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 59b2ac13-fc64-408f-bbb6-977c064ac64a in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.824 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.830 254096 DEBUG nova.compute.provider_tree [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:02 np0005535469 systemd-udevd[348561]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c17569a-7762-4a75-a41f-d48f98e0c62b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:02 np0005535469 systemd-machined[216343]: New machine qemu-114-instance-0000005d.
Nov 25 11:48:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:02Z|00893|binding|INFO|Setting lport 59b2ac13-fc64-408f-bbb6-977c064ac64a ovn-installed in OVS
Nov 25 11:48:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:02Z|00894|binding|INFO|Setting lport 59b2ac13-fc64-408f-bbb6-977c064ac64a up in Southbound
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.846 254096 DEBUG nova.scheduler.client.report [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:02 np0005535469 NetworkManager[48891]: <info>  [1764089282.8554] device (tap59b2ac13-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:02 np0005535469 NetworkManager[48891]: <info>  [1764089282.8563] device (tap59b2ac13-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:02 np0005535469 systemd[1]: Started Virtual Machine qemu-114-instance-0000005d.
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.869 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.869 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.870 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d8197771-6024-4323-89c5-0804912f1cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.873 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[696eed09-0054-4577-b9ef-e7ab0cf25fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.903 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f8635db5-5d98-4aaf-a517-4313de15d999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.925 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.926 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea6fb4-8af2-4512-bba5-3457dc875625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348574, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.941 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc009b8b-cf69-4b5f-a4d2-7a19daa57d42]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348576, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348576, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.958 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.960 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.962 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:02 np0005535469 nova_compute[254092]: 2025-11-25 16:48:02.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.058 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.060 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.061 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating image(s)#033[00m
Nov 25 11:48:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 25 11:48:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 25 11:48:03 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.109 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.142 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.169 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.173 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.210 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089283.1891844, 7c679c82-4594-4519-a291-de41650ba66b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.211 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Started (Lifecycle Event)#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.219 254096 DEBUG nova.network.neutron [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updated VIF entry in instance network info cache for port 59b2ac13-fc64-408f-bbb6-977c064ac64a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.220 254096 DEBUG nova.network.neutron [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updating instance_info_cache with network_info: [{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.237 254096 DEBUG oslo_concurrency.lockutils [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.239 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089283.1896975, 7c679c82-4594-4519-a291-de41650ba66b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.239 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.261 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.265 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.266 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.266 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.266 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.287 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.290 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.326 254096 DEBUG nova.policy [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '013868ddd96f43a49458a4615ab1f41b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '544c4f84ca494482aea8e55248fe4c62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.330 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.338 254096 DEBUG nova.compute.manager [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.339 254096 DEBUG oslo_concurrency.lockutils [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.339 254096 DEBUG oslo_concurrency.lockutils [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.339 254096 DEBUG oslo_concurrency.lockutils [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.340 254096 DEBUG nova.compute.manager [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Processing event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.341 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.345 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.348 254096 INFO nova.virt.libvirt.driver [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance spawned successfully.
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.349 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.353 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.353 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089283.3434644, 7c679c82-4594-4519-a291-de41650ba66b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.354 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Resumed (Lifecycle Event)
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.377 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.378 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.379 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.379 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.380 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.380 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.388 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.394 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.457 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.488 254096 INFO nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 6.55 seconds to spawn the instance on the hypervisor.
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.489 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.560 254096 INFO nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 7.61 seconds to build instance.
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.580 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.625 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.683 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] resizing rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:48:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 325 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.772 254096 DEBUG nova.objects.instance [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.785 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.786 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Ensure instance console log exists: /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.786 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.786 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:03 np0005535469 nova_compute[254092]: 2025-11-25 16:48:03.787 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:04Z|00895|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:48:04 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:04Z|00896|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 11:48:04 np0005535469 nova_compute[254092]: 2025-11-25 16:48:04.148 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Successfully created port: 431770e1-476d-40b3-8477-419b69aa4fe9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:48:04 np0005535469 nova_compute[254092]: 2025-11-25 16:48:04.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:04 np0005535469 nova_compute[254092]: 2025-11-25 16:48:04.707 254096 INFO nova.virt.libvirt.driver [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete
Nov 25 11:48:04 np0005535469 nova_compute[254092]: 2025-11-25 16:48:04.707 254096 INFO nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 5.06 seconds to snapshot the instance on the hypervisor.
Nov 25 11:48:04 np0005535469 nova_compute[254092]: 2025-11-25 16:48:04.981 254096 DEBUG nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.049 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089270.048559, 6affd696-c15d-4401-8512-2aabbf55fd4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.049 254096 INFO nova.compute.manager [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Stopped (Lifecycle Event)
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.082 254096 DEBUG nova.compute.manager [None req-3d6e68b1-083b-42ad-b640-3b95337ec761 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.601 254096 DEBUG nova.compute.manager [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.601 254096 DEBUG oslo_concurrency.lockutils [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.602 254096 DEBUG oslo_concurrency.lockutils [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.602 254096 DEBUG oslo_concurrency.lockutils [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.603 254096 DEBUG nova.compute.manager [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] No waiting events found dispatching network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.603 254096 WARNING nova.compute.manager [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received unexpected event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a for instance with vm_state active and task_state None.
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.653 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Successfully updated port: 431770e1-476d-40b3-8477-419b69aa4fe9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.670 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:48:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 396 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 468 op/s
Nov 25 11:48:05 np0005535469 nova_compute[254092]: 2025-11-25 16:48:05.845 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:48:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 25 11:48:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 25 11:48:06 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.373 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.411 254096 INFO nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] instance snapshotting
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.413 254096 DEBUG nova.objects.instance [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.655 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.670 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.670 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance network_info: |[{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.672 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start _get_guest_xml network_info=[{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.679 254096 WARNING nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.681 254096 INFO nova.virt.libvirt.driver [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning live snapshot process#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.684 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.684 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.687 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.687 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.688 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.688 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.688 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.690 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.690 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.690 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.691 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.691 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.693 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.829 254096 DEBUG nova.virt.libvirt.imagebackend [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.973 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.974 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.974 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.974 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.975 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.976 254096 INFO nova.compute.manager [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Terminating instance#033[00m
Nov 25 11:48:06 np0005535469 nova_compute[254092]: 2025-11-25 16:48:06.977 254096 DEBUG nova.compute.manager [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:48:07 np0005535469 kernel: tap59b2ac13-fc (unregistering): left promiscuous mode
Nov 25 11:48:07 np0005535469 NetworkManager[48891]: <info>  [1764089287.0130] device (tap59b2ac13-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:48:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:07Z|00897|binding|INFO|Releasing lport 59b2ac13-fc64-408f-bbb6-977c064ac64a from this chassis (sb_readonly=0)
Nov 25 11:48:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:07Z|00898|binding|INFO|Setting lport 59b2ac13-fc64-408f-bbb6-977c064ac64a down in Southbound
Nov 25 11:48:07 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:07Z|00899|binding|INFO|Removing iface tap59b2ac13-fc ovn-installed in OVS
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.033 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:fa:3e 10.100.0.10'], port_security=['fa:16:3e:d2:fa:3e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7c679c82-4594-4519-a291-de41650ba66b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=59b2ac13-fc64-408f-bbb6-977c064ac64a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.034 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 59b2ac13-fc64-408f-bbb6-977c064ac64a in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.035 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.043 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.054 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc64461-802a-400c-91e3-3b3d5bb182fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.056 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(1a827f2145e345a2a2591ad42a9e04ef) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:48:07 np0005535469 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 25 11:48:07 np0005535469 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Consumed 4.004s CPU time.
Nov 25 11:48:07 np0005535469 systemd-machined[216343]: Machine qemu-114-instance-0000005d terminated.
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.084 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6795109a-9594-406f-9b2e-45ede786313b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.087 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7a77af6a-d907-41e7-9cab-fc250b6d4fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.116 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[32ef197c-e61e-4c45-b0ed-a12a69dd82d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667458367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8688971a-c27d-4d7b-8469-25cf3a5d2bbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348868, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.148 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3900905e-f38f-4e44-aaeb-fb44babd6463]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348871, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348871, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.153 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.176 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.177 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.179 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.179 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.200 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.206 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.246 254096 INFO nova.virt.libvirt.driver [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance destroyed successfully.#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.246 254096 DEBUG nova.objects.instance [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 7c679c82-4594-4519-a291-de41650ba66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.264 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@1a827f2145e345a2a2591ad42a9e04ef to images/ca6a619e-78ad-49e2-956b-e212b3350627 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.293 254096 DEBUG nova.virt.libvirt.vif [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-824118161',display_name='tempest-ServersTestJSON-server-824118161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-824118161',id=93,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-razyaibd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:05Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=7c679c82-4594-4519-a291-de41650ba66b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.293 254096 DEBUG nova.network.os_vif_util [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.294 254096 DEBUG nova.network.os_vif_util [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.294 254096 DEBUG os_vif [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.296 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59b2ac13-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.298 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.300 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.303 254096 INFO os_vif [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc')#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.377 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/ca6a619e-78ad-49e2-956b-e212b3350627 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492836529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.676 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-changed-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Refreshing instance network info cache due to event network-changed-431770e1-476d-40b3-8477-419b69aa4fe9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG nova.network.neutron [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Refreshing network info cache for port 431770e1-476d-40b3-8477-419b69aa4fe9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.679 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.680 254096 DEBUG nova.virt.libvirt.vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951
276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:02Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.680 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.681 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.682 254096 DEBUG nova.objects.instance [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.703 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <uuid>435ae693-6844-49ae-977b-ec3aa89cfe70</uuid>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <name>instance-0000005e</name>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueTestJSON-server-328897245</nova:name>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:06</nova:creationTime>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <nova:port uuid="431770e1-476d-40b3-8477-419b69aa4fe9">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <entry name="serial">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <entry name="uuid">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e9:e7:de"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <target dev="tap431770e1-47"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/console.log" append="off"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:07 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:07 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:07 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:07 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.705 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Preparing to wait for external event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.705 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.706 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.707 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.708 254096 DEBUG nova.virt.libvirt.vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJ
SON-143951276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:02Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.709 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.710 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.711 254096 DEBUG os_vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.712 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.712 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.713 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.721 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431770e1-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.722 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap431770e1-47, col_values=(('external_ids', {'iface-id': '431770e1-476d-40b3-8477-419b69aa4fe9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:e7:de', 'vm-uuid': '435ae693-6844-49ae-977b-ec3aa89cfe70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 NetworkManager[48891]: <info>  [1764089287.7250] manager: (tap431770e1-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.731 254096 INFO os_vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47')#033[00m
Nov 25 11:48:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 451 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 387 op/s
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.820 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.821 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.821 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:e9:e7:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.821 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Using config drive#033[00m
Nov 25 11:48:07 np0005535469 nova_compute[254092]: 2025-11-25 16:48:07.845 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:08 np0005535469 nova_compute[254092]: 2025-11-25 16:48:08.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:08 np0005535469 nova_compute[254092]: 2025-11-25 16:48:08.880 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating config drive at /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config#033[00m
Nov 25 11:48:08 np0005535469 nova_compute[254092]: 2025-11-25 16:48:08.885 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4l6iibha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:08 np0005535469 nova_compute[254092]: 2025-11-25 16:48:08.939 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(1a827f2145e345a2a2591ad42a9e04ef) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.038 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4l6iibha" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.056 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.059 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.091 254096 INFO nova.virt.libvirt.driver [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deleting instance files /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b_del#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.093 254096 INFO nova.virt.libvirt.driver [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deletion of /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b_del complete#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.150 254096 INFO nova.compute.manager [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 2.17 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.150 254096 DEBUG oslo.service.loopingcall [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.151 254096 DEBUG nova.compute.manager [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.151 254096 DEBUG nova.network.neutron [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.193 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.194 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deleting local config drive /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config because it was imported into RBD.#033[00m
Nov 25 11:48:09 np0005535469 kernel: tap431770e1-47: entered promiscuous mode
Nov 25 11:48:09 np0005535469 systemd-udevd[348842]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:09 np0005535469 NetworkManager[48891]: <info>  [1764089289.2429] manager: (tap431770e1-47): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Nov 25 11:48:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 25 11:48:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:09Z|00900|binding|INFO|Claiming lport 431770e1-476d-40b3-8477-419b69aa4fe9 for this chassis.
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:09Z|00901|binding|INFO|431770e1-476d-40b3-8477-419b69aa4fe9: Claiming fa:16:3e:e9:e7:de 10.100.0.8
Nov 25 11:48:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 25 11:48:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 25 11:48:09 np0005535469 NetworkManager[48891]: <info>  [1764089289.2533] device (tap431770e1-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:09 np0005535469 NetworkManager[48891]: <info>  [1764089289.2542] device (tap431770e1-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.253 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.254 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis#033[00m
Nov 25 11:48:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.255 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:48:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.256 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[938360a9-3858-4f3c-9927-09b52999f173]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:09Z|00902|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 ovn-installed in OVS
Nov 25 11:48:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:09Z|00903|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 up in Southbound
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:09 np0005535469 systemd-machined[216343]: New machine qemu-115-instance-0000005e.
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.278 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(ca6a619e-78ad-49e2-956b-e212b3350627) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:48:09 np0005535469 systemd[1]: Started Virtual Machine qemu-115-instance-0000005e.
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089289.6311123, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.631 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Started (Lifecycle Event)#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.649 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.653 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089289.6312323, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.653 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.668 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 451 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 368 op/s
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.789 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] No waiting events found dispatching network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 WARNING nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received unexpected event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.791 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Processing event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.791 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.794 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089289.7939894, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.794 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.796 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.799 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance spawned successfully.#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.799 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.815 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.820 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.823 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.823 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.824 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.824 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.824 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.825 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.843 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.848 254096 DEBUG nova.network.neutron [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updated VIF entry in instance network info cache for port 431770e1-476d-40b3-8477-419b69aa4fe9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.848 254096 DEBUG nova.network.neutron [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.863 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-unplugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.865 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] No waiting events found dispatching network-vif-unplugged-59b2ac13-fc64-408f-bbb6-977c064ac64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.865 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-unplugged-59b2ac13-fc64-408f-bbb6-977c064ac64a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.873 254096 INFO nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 6.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.873 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.937 254096 INFO nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 7.75 seconds to build instance.#033[00m
Nov 25 11:48:09 np0005535469 nova_compute[254092]: 2025-11-25 16:48:09.958 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:48:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 25 11:48:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 25 11:48:10 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 25 11:48:10 np0005535469 nova_compute[254092]: 2025-11-25 16:48:10.847 254096 DEBUG nova.network.neutron [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:10 np0005535469 nova_compute[254092]: 2025-11-25 16:48:10.868 254096 INFO nova.compute.manager [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 1.72 seconds to deallocate network for instance.#033[00m
Nov 25 11:48:10 np0005535469 nova_compute[254092]: 2025-11-25 16:48:10.940 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:10 np0005535469 nova_compute[254092]: 2025-11-25 16:48:10.941 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.063 254096 INFO nova.compute.manager [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Rescuing#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.064 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.064 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.064 254096 DEBUG nova.network.neutron [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.068 254096 DEBUG oslo_concurrency.processutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844377541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.517 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089276.5155525, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.518 254096 INFO nova.compute.manager [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.524 254096 DEBUG oslo_concurrency.processutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.529 254096 DEBUG nova.compute.provider_tree [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.542 254096 INFO nova.virt.libvirt.driver [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.543 254096 INFO nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 5.11 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.553 254096 DEBUG nova.compute.manager [None req-654a6030-2233-4f95-bfa9-e7986b9973d8 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.563 254096 DEBUG nova.scheduler.client.report [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.592 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.647 254096 INFO nova.scheduler.client.report [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 7c679c82-4594-4519-a291-de41650ba66b#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.709 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 484 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 8.3 MiB/s wr, 310 op/s
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.834 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.835 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.835 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting image 658944a7-ebd4-4546-999c-02701f55081a _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.902 254096 DEBUG nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.903 254096 DEBUG oslo_concurrency.lockutils [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.903 254096 DEBUG oslo_concurrency.lockutils [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.903 254096 DEBUG oslo_concurrency.lockutils [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.904 254096 DEBUG nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.904 254096 WARNING nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:48:11 np0005535469 nova_compute[254092]: 2025-11-25 16:48:11.904 254096 DEBUG nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-deleted-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 25 11:48:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 25 11:48:12 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 25 11:48:12 np0005535469 nova_compute[254092]: 2025-11-25 16:48:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:12 np0005535469 nova_compute[254092]: 2025-11-25 16:48:12.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:12 np0005535469 nova_compute[254092]: 2025-11-25 16:48:12.828 254096 DEBUG nova.network.neutron [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:12 np0005535469 nova_compute[254092]: 2025-11-25 16:48:12.845 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:13 np0005535469 nova_compute[254092]: 2025-11-25 16:48:13.057 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:48:13 np0005535469 nova_compute[254092]: 2025-11-25 16:48:13.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:13 np0005535469 nova_compute[254092]: 2025-11-25 16:48:13.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:13 np0005535469 nova_compute[254092]: 2025-11-25 16:48:13.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:13.626 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 484 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 291 op/s
Nov 25 11:48:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 25 11:48:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 25 11:48:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.511 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.512 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.735 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.802 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.803 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.809 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.809 254096 INFO nova.compute.claims [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:48:14 np0005535469 nova_compute[254092]: 2025-11-25 16:48:14.984 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 25 11:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 25 11:48:15 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 25 11:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994193304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.461 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.467 254096 DEBUG nova.compute.provider_tree [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.481 254096 DEBUG nova.scheduler.client.report [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.508 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.509 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.552 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.552 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.567 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.579 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.677 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.679 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.679 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Creating image(s)#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.700 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.720 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.738 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.744 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 412 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.9 MiB/s wr, 351 op/s
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.775 254096 DEBUG nova.policy [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.813 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.813 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.814 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.814 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.833 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:15 np0005535469 nova_compute[254092]: 2025-11-25 16:48:15.836 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4677de7c-6625-4c98-a065-214341d8bfea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 25 11:48:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 25 11:48:16 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.471 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4677de7c-6625-4c98-a065-214341d8bfea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.568 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:48:16 np0005535469 podman[349329]: 2025-11-25 16:48:16.660157451 +0000 UTC m=+0.071332310 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:48:16 np0005535469 podman[349337]: 2025-11-25 16:48:16.681056238 +0000 UTC m=+0.089975676 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.682 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Successfully created port: cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.746 254096 DEBUG nova.objects.instance [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:16 np0005535469 podman[349346]: 2025-11-25 16:48:16.756474178 +0000 UTC m=+0.152268359 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.764 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.764 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Ensure instance console log exists: /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.765 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.765 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:16 np0005535469 nova_compute[254092]: 2025-11-25 16:48:16.765 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 290 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 213 op/s
Nov 25 11:48:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792957288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:17 np0005535469 nova_compute[254092]: 2025-11-25 16:48:17.975 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.066 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.067 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.073 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.081 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.082 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.085 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Successfully updated port: cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.103 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.104 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.104 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.269 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.302 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.303 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3255MB free_disk=59.87629318237305GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.303 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.304 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8b20d119-17cb-4742-9223-90e5020f93a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73301044-3bad-4401-9e30-f009d417f662 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 435ae693-6844-49ae-977b-ec3aa89cfe70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 4677de7c-6625-4c98-a065-214341d8bfea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.368 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.368 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.440 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3567587403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.865 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.871 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.891 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.908 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:48:18 np0005535469 nova_compute[254092]: 2025-11-25 16:48:18.908 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.128 254096 DEBUG nova.compute.manager [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-changed-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.129 254096 DEBUG nova.compute.manager [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Refreshing instance network info cache due to event network-changed-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.130 254096 DEBUG oslo_concurrency.lockutils [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.283 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updating instance_info_cache with network_info: [{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.298 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.298 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance network_info: |[{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.299 254096 DEBUG oslo_concurrency.lockutils [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.299 254096 DEBUG nova.network.neutron [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Refreshing network info cache for port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.304 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start _get_guest_xml network_info=[{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.352 254096 WARNING nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.357 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.357 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.360 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.360 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.363 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.363 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.363 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.366 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 290 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 195 op/s
Nov 25 11:48:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3923795873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.804 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.827 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.831 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.909 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.910 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.911 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:48:19 np0005535469 nova_compute[254092]: 2025-11-25 16:48:19.930 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b20d119-17cb-4742-9223-90e5020f93a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417193328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.293 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.294 254096 DEBUG nova.virt.libvirt.vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1206046660',display_name='tempest-ServersTestJSON-server-1206046660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1206046660',id=95,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-g8d6ljpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:15Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4677de7c-6625-4c98-a065-214341d8bfea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.295 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.295 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.297 254096 DEBUG nova.objects.instance [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.309 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <uuid>4677de7c-6625-4c98-a065-214341d8bfea</uuid>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <name>instance-0000005f</name>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersTestJSON-server-1206046660</nova:name>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:19</nova:creationTime>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <nova:port uuid="cce0c8fe-e83f-4422-aeb3-8b1e6bafa462">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <entry name="serial">4677de7c-6625-4c98-a065-214341d8bfea</entry>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <entry name="uuid">4677de7c-6625-4c98-a065-214341d8bfea</entry>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4677de7c-6625-4c98-a065-214341d8bfea_disk">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4677de7c-6625-4c98-a065-214341d8bfea_disk.config">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:87:86:24"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <target dev="tapcce0c8fe-e8"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/console.log" append="off"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:20 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:20 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:20 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:20 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.310 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Preparing to wait for external event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.311 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.311 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.311 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.312 254096 DEBUG nova.virt.libvirt.vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1206046660',display_name='tempest-ServersTestJSON-server-1206046660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1206046660',id=95,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-g8d6ljpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:15Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4677de7c-6625-4c98-a065-214341d8bfea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.314 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.315 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.315 254096 DEBUG os_vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.317 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.320 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce0c8fe-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.321 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcce0c8fe-e8, col_values=(('external_ids', {'iface-id': 'cce0c8fe-e83f-4422-aeb3-8b1e6bafa462', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:86:24', 'vm-uuid': '4677de7c-6625-4c98-a065-214341d8bfea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:20 np0005535469 NetworkManager[48891]: <info>  [1764089300.3719] manager: (tapcce0c8fe-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.380 254096 INFO os_vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8')#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.426 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.427 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.427 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:87:86:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.428 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Using config drive#033[00m
Nov 25 11:48:20 np0005535469 nova_compute[254092]: 2025-11-25 16:48:20.461 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 25 11:48:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 25 11:48:21 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.382 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Creating config drive at /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.387 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_vmfqscv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.482 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.483 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.502 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.549 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_vmfqscv" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.583 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.588 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config 4677de7c-6625-4c98-a065-214341d8bfea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.643 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.644 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.651 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.652 254096 INFO nova.compute.claims [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.738 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config 4677de7c-6625-4c98-a065-214341d8bfea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.739 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deleting local config drive /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config because it was imported into RBD.#033[00m
Nov 25 11:48:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 293 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 3.3 MiB/s wr, 131 op/s
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.790 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:21 np0005535469 kernel: tapcce0c8fe-e8: entered promiscuous mode
Nov 25 11:48:21 np0005535469 NetworkManager[48891]: <info>  [1764089301.8150] manager: (tapcce0c8fe-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Nov 25 11:48:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:21Z|00904|binding|INFO|Claiming lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for this chassis.
Nov 25 11:48:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:21Z|00905|binding|INFO|cce0c8fe-e83f-4422-aeb3-8b1e6bafa462: Claiming fa:16:3e:87:86:24 10.100.0.12
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.825 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:86:24 10.100.0.12'], port_security=['fa:16:3e:87:86:24 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4677de7c-6625-4c98-a065-214341d8bfea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.831 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:48:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:21Z|00906|binding|INFO|Setting lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 ovn-installed in OVS
Nov 25 11:48:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:21Z|00907|binding|INFO|Setting lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 up in Southbound
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:21 np0005535469 systemd-udevd[349611]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feb6c0f0-f0c4-4e4d-9759-5922fedcf653]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:21 np0005535469 NetworkManager[48891]: <info>  [1764089301.8643] device (tapcce0c8fe-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:21 np0005535469 systemd-machined[216343]: New machine qemu-116-instance-0000005f.
Nov 25 11:48:21 np0005535469 NetworkManager[48891]: <info>  [1764089301.8657] device (tapcce0c8fe-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:21 np0005535469 systemd[1]: Started Virtual Machine qemu-116-instance-0000005f.
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.891 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1c34d9-5445-48c1-8193-1e69706d3590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.894 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4563ac5c-c8ef-4ef4-9b46-368d027c7aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.931 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eef048af-126f-4f71-b265-94a34e7958b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4956e56e-e924-49f0-a415-cad306ac5f24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349634, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.980 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[835cfa5e-210b-4c9b-accb-e90066b1cfc2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349645, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349645, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:21 np0005535469 nova_compute[254092]: 2025-11-25 16:48:21.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.986 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.085 254096 DEBUG nova.network.neutron [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updated VIF entry in instance network info cache for port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.086 254096 DEBUG nova.network.neutron [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updating instance_info_cache with network_info: [{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.107 254096 DEBUG oslo_concurrency.lockutils [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396387784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.241 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089287.2068388, 7c679c82-4594-4519-a291-de41650ba66b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.242 254096 INFO nova.compute.manager [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.248 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.254 254096 DEBUG nova.compute.provider_tree [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.257 254096 DEBUG nova.compute.manager [None req-15bd64d7-1578-488c-abe3-8f1c470026e8 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.266 254096 DEBUG nova.scheduler.client.report [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.283 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.283 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.312 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089302.3115609, 4677de7c-6625-4c98-a065-214341d8bfea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.312 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Started (Lifecycle Event)#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.334 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.337 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.337 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.344 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089302.311709, 4677de7c-6625-4c98-a065-214341d8bfea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.344 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.366 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.367 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.371 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.387 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.387 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.479 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.481 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.481 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Creating image(s)#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.502 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.524 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.544 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.548 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.600 254096 DEBUG nova.policy [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.603 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updating instance_info_cache with network_info: [{"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.622 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.623 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.623 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.640 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.641 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.641 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.641 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.664 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.669 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2e848add-8417-4307-8b01-f0d1c1a76cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:48:22 np0005535469 nova_compute[254092]: 2025-11-25 16:48:22.969 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2e848add-8417-4307-8b01-f0d1c1a76cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.053 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.113 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.191 254096 DEBUG nova.objects.instance [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e848add-8417-4307-8b01-f0d1c1a76cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.225 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.226 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Ensure instance console log exists: /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.226 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.226 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.227 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.441 254096 DEBUG nova.compute.manager [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.442 254096 DEBUG oslo_concurrency.lockutils [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.442 254096 DEBUG oslo_concurrency.lockutils [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.443 254096 DEBUG oslo_concurrency.lockutils [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.443 254096 DEBUG nova.compute.manager [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Processing event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.446 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.449 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089303.4490504, 4677de7c-6625-4c98-a065-214341d8bfea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.449 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Resumed (Lifecycle Event)
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.451 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.455 254096 INFO nova.virt.libvirt.driver [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance spawned successfully.
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.455 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.480 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.487 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.487 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.488 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.488 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.489 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.489 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.493 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.534 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.603 254096 INFO nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 7.92 seconds to spawn the instance on the hypervisor.
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.603 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 293 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 106 op/s
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.867 254096 INFO nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 9.08 seconds to build instance.
Nov 25 11:48:23 np0005535469 nova_compute[254092]: 2025-11-25 16:48:23.900 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.204 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.284 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Successfully created port: 5a3f34de-d3de-439b-ac8f-baabc77892b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.782 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.783 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.800 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.853 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.854 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.860 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:48:24 np0005535469 nova_compute[254092]: 2025-11-25 16:48:24.860 254096 INFO nova.compute.claims [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.060 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.519 254096 DEBUG nova.compute.manager [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.519 254096 DEBUG oslo_concurrency.lockutils [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 DEBUG oslo_concurrency.lockutils [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 DEBUG oslo_concurrency.lockutils [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 DEBUG nova.compute.manager [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] No waiting events found dispatching network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 WARNING nova.compute.manager [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received unexpected event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for instance with vm_state active and task_state None.
Nov 25 11:48:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799670337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:25 np0005535469 kernel: tap431770e1-47 (unregistering): left promiscuous mode
Nov 25 11:48:25 np0005535469 NetworkManager[48891]: <info>  [1764089305.5532] device (tap431770e1-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:48:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:25Z|00908|binding|INFO|Releasing lport 431770e1-476d-40b3-8477-419b69aa4fe9 from this chassis (sb_readonly=0)
Nov 25 11:48:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:25Z|00909|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 down in Southbound
Nov 25 11:48:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:25Z|00910|binding|INFO|Removing iface tap431770e1-47 ovn-installed in OVS
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.560 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.572 254096 DEBUG nova.compute.provider_tree [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:48:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.571 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:48:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.572 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis
Nov 25 11:48:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.574 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 11:48:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.575 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1262ab7f-d941-4488-8b15-5c8841a4fecd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.581 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.594 254096 DEBUG nova.scheduler.client.report [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.620 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.621 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:48:25 np0005535469 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 25 11:48:25 np0005535469 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Consumed 13.219s CPU time.
Nov 25 11:48:25 np0005535469 systemd-machined[216343]: Machine qemu-115-instance-0000005e terminated.
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.706 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.707 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.722 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.742 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:48:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 360 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.7 MiB/s wr, 203 op/s
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.830 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.832 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.832 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Creating image(s)#033[00m
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.860 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.890 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.920 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:25 np0005535469 nova_compute[254092]: 2025-11-25 16:48:25.924 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.016 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.018 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.018 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.019 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.050 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.054 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0098976-026f-43d8-b686-b2658f9aded9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.128 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.139 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance destroyed successfully.#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.139 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'numa_topology' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.161 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Attempting rescue#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.162 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.169 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.170 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating image(s)#033[00m
Nov 25 11:48:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.201 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.207 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.255 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.283 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.288 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.361 254096 DEBUG nova.policy [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23f6db77558a477bbd8b8b46cb4107d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.400 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.401 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.435 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.444 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.762 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0098976-026f-43d8-b686-b2658f9aded9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:26 np0005535469 nova_compute[254092]: 2025-11-25 16:48:26.884 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] resizing rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.037 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Successfully updated port: 5a3f34de-d3de-439b-ac8f-baabc77892b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.040 254096 DEBUG nova.compute.manager [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.040 254096 DEBUG oslo_concurrency.lockutils [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 DEBUG oslo_concurrency.lockutils [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 DEBUG oslo_concurrency.lockutils [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 DEBUG nova.compute.manager [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 WARNING nova.compute.manager [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.124 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.124 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.124 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.182 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.183 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.193 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.194 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start _get_guest_xml network_info=[{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:e9:e7:de"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.195 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.235 254096 DEBUG nova.objects.instance [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid e0098976-026f-43d8-b686-b2658f9aded9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.239 254096 WARNING nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.244 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.244 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.245 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.246 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Ensure instance console log exists: /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.246 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.247 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.247 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.251 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.252 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.252 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.253 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.253 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.256 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.256 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.269 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.430 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.531 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.531 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.550 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 11:48:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781331037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.743 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:27 np0005535469 nova_compute[254092]: 2025-11-25 16:48:27.745 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 372 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 249 op/s
Nov 25 11:48:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272110234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.228 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.231 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220709705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.760 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.762 254096 DEBUG nova.virt.libvirt.vif [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:09Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:e9:e7:de"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.762 254096 DEBUG nova.network.os_vif_util [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:e9:e7:de"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.763 254096 DEBUG nova.network.os_vif_util [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.765 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.778 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <uuid>435ae693-6844-49ae-977b-ec3aa89cfe70</uuid>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <name>instance-0000005e</name>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueTestJSON-server-328897245</nova:name>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:27</nova:creationTime>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <nova:port uuid="431770e1-476d-40b3-8477-419b69aa4fe9">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <entry name="serial">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <entry name="uuid">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <target dev="vdb" bus="virtio"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e9:e7:de"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <target dev="tap431770e1-47"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/console.log" append="off"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:28 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:28 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:28 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:28 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.794 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance destroyed successfully.#033[00m
Nov 25 11:48:28 np0005535469 podman[350390]: 2025-11-25 16:48:28.807756663 +0000 UTC m=+0.078952456 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.846 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.847 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.847 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.847 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:e9:e7:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.848 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Using config drive#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.870 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.890 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Successfully created port: 591e580e-30bb-4c0d-b1fb-96d45eca5626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.910 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:28 np0005535469 nova_compute[254092]: 2025-11-25 16:48:28.936 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'keypairs' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:28 np0005535469 podman[350390]: 2025-11-25 16:48:28.938221398 +0000 UTC m=+0.209417201 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:48:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:48:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:48:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 372 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 249 op/s
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.844 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.859 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.860 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance network_info: |[{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.863 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start _get_guest_xml network_info=[{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.870 254096 WARNING nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.876 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.877 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.879 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.880 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.880 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.881 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.881 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.881 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.882 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.882 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.882 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.884 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.887 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.946 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.947 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing instance network info cache due to event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.947 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.948 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:29 np0005535469 nova_compute[254092]: 2025-11-25 16:48:29.948 254096 DEBUG nova.network.neutron [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.016 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating config drive at /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.023 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1lyutho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.170 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1lyutho" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.193 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.197 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040587605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.370 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.403 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.409 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.468 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.472 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deleting local config drive /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue because it was imported into RBD.#033[00m
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 36a2a155-5a2e-446c-96fa-4112f914e17e does not exist
Nov 25 11:48:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a22f5ec4-2a5b-4ad6-ae60-7032494a86eb does not exist
Nov 25 11:48:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 12d5a424-8f7a-47e9-9f86-1d9a5891d430 does not exist
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.489 254096 DEBUG oslo_concurrency.lockutils [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.489 254096 DEBUG oslo_concurrency.lockutils [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.489 254096 DEBUG nova.compute.manager [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.501 254096 DEBUG nova.compute.manager [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.503 254096 DEBUG nova.objects.instance [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'flavor' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:30 np0005535469 kernel: tap431770e1-47: entered promiscuous mode
Nov 25 11:48:30 np0005535469 NetworkManager[48891]: <info>  [1764089310.5443] manager: (tap431770e1-47): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Nov 25 11:48:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:30Z|00911|binding|INFO|Claiming lport 431770e1-476d-40b3-8477-419b69aa4fe9 for this chassis.
Nov 25 11:48:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:30Z|00912|binding|INFO|431770e1-476d-40b3-8477-419b69aa4fe9: Claiming fa:16:3e:e9:e7:de 10.100.0.8
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.555 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.557 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis#033[00m
Nov 25 11:48:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.558 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:48:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.559 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb816db-9476-44b7-bbaf-ba69d9fcc7f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.560 254096 DEBUG nova.virt.libvirt.driver [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:30Z|00913|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 ovn-installed in OVS
Nov 25 11:48:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:30Z|00914|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 up in Southbound
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:48:30 np0005535469 systemd-udevd[350817]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:30 np0005535469 systemd-machined[216343]: New machine qemu-117-instance-0000005e.
Nov 25 11:48:30 np0005535469 systemd[1]: Started Virtual Machine qemu-117-instance-0000005e.
Nov 25 11:48:30 np0005535469 NetworkManager[48891]: <info>  [1764089310.6027] device (tap431770e1-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:30 np0005535469 NetworkManager[48891]: <info>  [1764089310.6038] device (tap431770e1-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529209879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.925 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.926 254096 DEBUG nova.virt.libvirt.vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-721299492',display_name='tempest-TestNetworkAdvancedServerOps-server-721299492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-721299492',id=96,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIuS2h4G62fbKa1D8fQiWbH65PFkkRVLBed4wrkEeUlM++S4qN/mZDJoxB0We0lR2SolGZ26Txk6Ir9O+1WqdMaVC9PS7NmiU/+hEPFN6YieX+/K6w93NwRm1fHYEK0fbg==',key_name='tempest-TestNetworkAdvancedServerOps-1463569288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-oidm56b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2e848add-8417-4307-8b01-f0d1c1a76cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.927 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.927 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.929 254096 DEBUG nova.objects.instance [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e848add-8417-4307-8b01-f0d1c1a76cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.952 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <uuid>2e848add-8417-4307-8b01-f0d1c1a76cea</uuid>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <name>instance-00000060</name>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-721299492</nova:name>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:29</nova:creationTime>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <nova:port uuid="5a3f34de-d3de-439b-ac8f-baabc77892b4">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <entry name="serial">2e848add-8417-4307-8b01-f0d1c1a76cea</entry>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <entry name="uuid">2e848add-8417-4307-8b01-f0d1c1a76cea</entry>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2e848add-8417-4307-8b01-f0d1c1a76cea_disk">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:9d:43:a6"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <target dev="tap5a3f34de-d3"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/console.log" append="off"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:30 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:30 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:30 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:30 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.953 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Preparing to wait for external event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.954 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.954 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.954 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.955 254096 DEBUG nova.virt.libvirt.vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-721299492',display_name='tempest-TestNetworkAdvancedServerOps-server-721299492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-721299492',id=96,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIuS2h4G62fbKa1D8fQiWbH65PFkkRVLBed4wrkEeUlM++S4qN/mZDJoxB0We0lR2SolGZ26Txk6Ir9O+1WqdMaVC9PS7NmiU/+hEPFN6YieX+/K6w93NwRm1fHYEK0fbg==',key_name='tempest-TestNetworkAdvancedServerOps-1463569288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-oidm56b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2e848add-8417-4307-8b01-f0d1c1a76cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.955 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.956 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.956 254096 DEBUG os_vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.957 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.957 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.970 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a3f34de-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.970 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a3f34de-d3, col_values=(('external_ids', {'iface-id': '5a3f34de-d3de-439b-ac8f-baabc77892b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:43:a6', 'vm-uuid': '2e848add-8417-4307-8b01-f0d1c1a76cea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:30 np0005535469 NetworkManager[48891]: <info>  [1764089310.9743] manager: (tap5a3f34de-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.977 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Successfully updated port: 591e580e-30bb-4c0d-b1fb-96d45eca5626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.984 254096 INFO os_vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3')
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.993 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.993 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:48:30 np0005535469 nova_compute[254092]: 2025-11-25 16:48:30.993 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.030 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.030 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.030 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:9d:43:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.031 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Using config drive
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.061 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:48:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 25 11:48:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.185592632 +0000 UTC m=+0.072063220 container create adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 11:48:31 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.139949751 +0000 UTC m=+0.026420359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:48:31 np0005535469 systemd[1]: Started libpod-conmon-adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd.scope.
Nov 25 11:48:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.335 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 435ae693-6844-49ae-977b-ec3aa89cfe70 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.336 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089311.3346312, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.337 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Resumed (Lifecycle Event)
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.344 254096 DEBUG nova.compute.manager [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.358047458 +0000 UTC m=+0.244518146 container init adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.368 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.373397115 +0000 UTC m=+0.259867723 container start adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.375 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:48:31 np0005535469 confident_stonebraker[351049]: 167 167
Nov 25 11:48:31 np0005535469 systemd[1]: libpod-adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd.scope: Deactivated successfully.
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.388428803 +0000 UTC m=+0.274899401 container attach adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.389102891 +0000 UTC m=+0.275573499 container died adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.523 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.527 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089311.3391182, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.528 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Started (Lifecycle Event)
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.550 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:48:31 np0005535469 nova_compute[254092]: 2025-11-25 16:48:31.554 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:48:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7faedfd7ad0420e5d337465ce11ea6845304f4dd701442214f59c00405d8ecf6-merged.mount: Deactivated successfully.
Nov 25 11:48:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 465 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 254 op/s
Nov 25 11:48:31 np0005535469 podman[351010]: 2025-11-25 16:48:31.989596811 +0000 UTC m=+0.876067399 container remove adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:48:32 np0005535469 systemd[1]: libpod-conmon-adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd.scope: Deactivated successfully.
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.095 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:48:32 np0005535469 podman[351075]: 2025-11-25 16:48:32.21775857 +0000 UTC m=+0.050648857 container create 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:48:32 np0005535469 systemd[1]: Started libpod-conmon-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope.
Nov 25 11:48:32 np0005535469 podman[351075]: 2025-11-25 16:48:32.199221626 +0000 UTC m=+0.032111973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:48:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:32 np0005535469 podman[351075]: 2025-11-25 16:48:32.33550811 +0000 UTC m=+0.168398407 container init 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:48:32 np0005535469 podman[351075]: 2025-11-25 16:48:32.343695583 +0000 UTC m=+0.176585880 container start 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 11:48:32 np0005535469 podman[351075]: 2025-11-25 16:48:32.346620552 +0000 UTC m=+0.179510859 container attach 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.741 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Creating config drive at /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.746 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbghwnam execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.889 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbghwnam" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.911 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.914 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.986 254096 DEBUG nova.network.neutron [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updated VIF entry in instance network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:48:32 np0005535469 nova_compute[254092]: 2025-11-25 16:48:32.987 254096 DEBUG nova.network.neutron [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.000 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.001 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.001 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.001 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.002 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.002 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.002 254096 WARNING nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state active and task_state rescuing.
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.104 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.105 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deleting local config drive /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config because it was imported into RBD.
Nov 25 11:48:33 np0005535469 kernel: tap5a3f34de-d3: entered promiscuous mode
Nov 25 11:48:33 np0005535469 NetworkManager[48891]: <info>  [1764089313.1697] manager: (tap5a3f34de-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Nov 25 11:48:33 np0005535469 systemd-udevd[350849]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:33Z|00915|binding|INFO|Claiming lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 for this chassis.
Nov 25 11:48:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:33Z|00916|binding|INFO|5a3f34de-d3de-439b-ac8f-baabc77892b4: Claiming fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.183 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.186 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 bound to our chassis#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.188 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa283c2c-b597-4970-842d-f5f2b621b5f0#033[00m
Nov 25 11:48:33 np0005535469 NetworkManager[48891]: <info>  [1764089313.1992] device (tap5a3f34de-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:33 np0005535469 NetworkManager[48891]: <info>  [1764089313.1999] device (tap5a3f34de-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:33Z|00917|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 ovn-installed in OVS
Nov 25 11:48:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:33Z|00918|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 up in Southbound
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.207 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6eb014-3651-48b7-80c2-57f0b0d02378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.209 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa283c2c-b1 in ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.212 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa283c2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f219ff6-f059-4cf7-8eb3-015674adff84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.216 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1f34cf-c59e-4b48-b742-aac6b4190341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 systemd-machined[216343]: New machine qemu-118-instance-00000060.
Nov 25 11:48:33 np0005535469 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.235 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f4364bba-f1d4-4c2e-81d1-18d20989c9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.256 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b43a7e7c-411d-40d0-a91f-da6e3a4450bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.300 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[422697a4-1fef-4171-a1b4-ccdc8d4d69de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 NetworkManager[48891]: <info>  [1764089313.3103] manager: (tapaa283c2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.309 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[90d43bbb-22d5-48c7-904b-9d7f0d2b85a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.348 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cde833cb-8133-43fd-ba4b-1a12e4d43826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.354 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9ece9534-13b1-4c1e-86a6-c0a2beaad894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 NetworkManager[48891]: <info>  [1764089313.3847] device (tapaa283c2c-b0): carrier: link connected
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.389 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[285b28ea-ed5b-4401-b6cc-d1c7fcb06d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.411 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bdabc9-ad56-4c2f-a43d-4a5cae0307d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577097, 'reachable_time': 23598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351203, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.426 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb9de9-f274-4750-91e0-5356cbbe7c55]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:7d28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 577097, 'tstamp': 577097}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351204, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.453 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f36ae46f-b528-446c-bb57-817e30210d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577097, 'reachable_time': 23598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351205, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 practical_noyce[351091]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:48:33 np0005535469 practical_noyce[351091]: --> relative data size: 1.0
Nov 25 11:48:33 np0005535469 practical_noyce[351091]: --> All data devices are unavailable
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.506 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[353f6775-6764-44d8-b196-902b3b8fec95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 systemd[1]: libpod-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope: Deactivated successfully.
Nov 25 11:48:33 np0005535469 systemd[1]: libpod-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope: Consumed 1.056s CPU time.
Nov 25 11:48:33 np0005535469 podman[351075]: 2025-11-25 16:48:33.529407145 +0000 UTC m=+1.362297452 container died 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:48:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb-merged.mount: Deactivated successfully.
Nov 25 11:48:33 np0005535469 podman[351075]: 2025-11-25 16:48:33.600731543 +0000 UTC m=+1.433621830 container remove 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.605 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c15df015-b420-4e25-a4ef-0e381ea6685f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.607 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa283c2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:33 np0005535469 kernel: tapaa283c2c-b0: entered promiscuous mode
Nov 25 11:48:33 np0005535469 NetworkManager[48891]: <info>  [1764089313.6477] manager: (tapaa283c2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 systemd[1]: libpod-conmon-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope: Deactivated successfully.
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.654 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa283c2c-b0, col_values=(('external_ids', {'iface-id': '82c4ad4d-388e-4238-98b3-8d58946e7829'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:33Z|00919|binding|INFO|Releasing lport 82c4ad4d-388e-4238-98b3-8d58946e7829 from this chassis (sb_readonly=0)
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.681 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.682 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7028ce90-2ee8-4848-9dc0-159d2f5720ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.687 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:48:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.687 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'env', 'PROCESS_TAG=haproxy-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa283c2c-b597-4970-842d-f5f2b621b5f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:48:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 465 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 254 op/s
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.955 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089313.9552457, 2e848add-8417-4307-8b01-f0d1c1a76cea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Started (Lifecycle Event)#033[00m
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.975 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.980 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089313.956693, 2e848add-8417-4307-8b01-f0d1c1a76cea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:33 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.981 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:33.999 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.004 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.022 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.096 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.096 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 WARNING nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-changed-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.098 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Refreshing instance network info cache due to event network-changed-591e580e-30bb-4c0d-b1fb-96d45eca5626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:34 np0005535469 nova_compute[254092]: 2025-11-25 16:48:34.098 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:34 np0005535469 podman[351396]: 2025-11-25 16:48:34.18680279 +0000 UTC m=+0.066108568 container create c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:48:34 np0005535469 systemd[1]: Started libpod-conmon-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope.
Nov 25 11:48:34 np0005535469 podman[351396]: 2025-11-25 16:48:34.153349961 +0000 UTC m=+0.032655769 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:48:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f65b33bb2cefdb9521aad3458f01f2c8320441c773aab270729c09b9835d7b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:34 np0005535469 podman[351396]: 2025-11-25 16:48:34.301886437 +0000 UTC m=+0.181192215 container init c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:48:34 np0005535469 podman[351396]: 2025-11-25 16:48:34.307406337 +0000 UTC m=+0.186712105 container start c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:48:34 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : New worker (351441) forked
Nov 25 11:48:34 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : Loading success.
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.488879829 +0000 UTC m=+0.047605325 container create eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:48:34 np0005535469 systemd[1]: Started libpod-conmon-eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd.scope.
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.466629984 +0000 UTC m=+0.025355500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:48:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.587137979 +0000 UTC m=+0.145863485 container init eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.595751673 +0000 UTC m=+0.154477169 container start eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.599626498 +0000 UTC m=+0.158352024 container attach eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:48:34 np0005535469 xenodochial_almeida[351481]: 167 167
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.604348717 +0000 UTC m=+0.163074213 container died eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:48:34 np0005535469 systemd[1]: libpod-eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd.scope: Deactivated successfully.
Nov 25 11:48:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b6932a81371c4f706c700a68e6e80b00ab29666650c1835507375f8f9bbd8ac4-merged.mount: Deactivated successfully.
Nov 25 11:48:34 np0005535469 podman[351464]: 2025-11-25 16:48:34.647680484 +0000 UTC m=+0.206405980 container remove eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:48:34 np0005535469 systemd[1]: libpod-conmon-eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd.scope: Deactivated successfully.
Nov 25 11:48:34 np0005535469 podman[351505]: 2025-11-25 16:48:34.85873002 +0000 UTC m=+0.043454883 container create 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:48:34 np0005535469 systemd[1]: Started libpod-conmon-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope.
Nov 25 11:48:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:34 np0005535469 podman[351505]: 2025-11-25 16:48:34.843360111 +0000 UTC m=+0.028084994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:48:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:34 np0005535469 podman[351505]: 2025-11-25 16:48:34.962187441 +0000 UTC m=+0.146912334 container init 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:48:34 np0005535469 podman[351505]: 2025-11-25 16:48:34.968448911 +0000 UTC m=+0.153173774 container start 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:48:34 np0005535469 podman[351505]: 2025-11-25 16:48:34.972977574 +0000 UTC m=+0.157702457 container attach 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.126 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updating instance_info_cache with network_info: [{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.151 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.152 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance network_info: |[{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.153 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.153 254096 DEBUG nova.network.neutron [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Refreshing network info cache for port 591e580e-30bb-4c0d-b1fb-96d45eca5626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.156 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start _get_guest_xml network_info=[{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.163 254096 WARNING nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.173 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.175 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.180 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.181 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.182 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.182 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.183 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.183 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.184 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.184 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.185 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.185 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.186 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.186 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.186 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.187 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.192 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911482340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.713 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.745 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]: {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:    "0": [
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:        {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "devices": [
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "/dev/loop3"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            ],
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_name": "ceph_lv0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_size": "21470642176",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "name": "ceph_lv0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "tags": {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cluster_name": "ceph",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.crush_device_class": "",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.encrypted": "0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osd_id": "0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.type": "block",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.vdo": "0"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            },
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "type": "block",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "vg_name": "ceph_vg0"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:        }
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:    ],
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:    "1": [
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:        {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "devices": [
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "/dev/loop4"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            ],
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_name": "ceph_lv1",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_size": "21470642176",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "name": "ceph_lv1",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "tags": {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cluster_name": "ceph",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.crush_device_class": "",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.encrypted": "0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osd_id": "1",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.type": "block",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.vdo": "0"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            },
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "type": "block",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "vg_name": "ceph_vg1"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:        }
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:    ],
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:    "2": [
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:        {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "devices": [
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "/dev/loop5"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            ],
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_name": "ceph_lv2",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_size": "21470642176",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "name": "ceph_lv2",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "tags": {
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.cluster_name": "ceph",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.crush_device_class": "",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.encrypted": "0",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osd_id": "2",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.type": "block",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:                "ceph.vdo": "0"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            },
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "type": "block",
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:            "vg_name": "ceph_vg2"
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:        }
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]:    ]
Nov 25 11:48:35 np0005535469 nostalgic_vaughan[351522]: }
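[annotation] The JSON the `nostalgic_vaughan` container printed above has the shape of `ceph-volume lvm list --format json` output: a mapping of OSD id to a list of LV records. A minimal sketch (structure and field names taken only from the log above; the helper name is hypothetical) for mapping each OSD id to its backing device and LV path:

```python
import json

def osd_device_map(report_json: str) -> dict:
    """Map OSD id -> (backing device, LV path) from a
    ceph-volume lvm list --format json style report."""
    report = json.loads(report_json)
    result = {}
    for osd_id, lvs in report.items():
        for lv in lvs:
            if lv.get("type") == "block":
                # 'devices' is a list; in this log each LV sits on one loop device
                result[osd_id] = (lv["devices"][0], lv["lv_path"])
    return result

sample = json.dumps({
    "0": [{"type": "block", "devices": ["/dev/loop3"],
           "lv_path": "/dev/ceph_vg0/ceph_lv0"}],
})
print(osd_device_map(sample))  # {'0': ('/dev/loop3', '/dev/ceph_vg0/ceph_lv0')}
```

Run against the full report above this would yield OSDs 0–2 on /dev/loop3–5.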
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.751 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 174 op/s
Nov 25 11:48:35 np0005535469 systemd[1]: libpod-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope: Deactivated successfully.
Nov 25 11:48:35 np0005535469 conmon[351522]: conmon 16af17d5d38ab1ac0adf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope/container/memory.events
Nov 25 11:48:35 np0005535469 podman[351572]: 2025-11-25 16:48:35.832485461 +0000 UTC m=+0.026986564 container died 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:48:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19-merged.mount: Deactivated successfully.
Nov 25 11:48:35 np0005535469 podman[351572]: 2025-11-25 16:48:35.928228844 +0000 UTC m=+0.122729947 container remove 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 11:48:35 np0005535469 systemd[1]: libpod-conmon-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope: Deactivated successfully.
Nov 25 11:48:35 np0005535469 nova_compute[254092]: 2025-11-25 16:48:35.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614159682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.394 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
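[annotation] The two `ceph mon dump` subprocesses above took 0.522s and 0.642s. When auditing such latencies across a long capture, the oslo_concurrency `CMD "…" returned: R in Ns` lines can be scraped with a regex; a sketch, with the pattern inferred solely from the lines above:

```python
import re

# Pattern inferred from oslo_concurrency.processutils lines in this log
CMD_RE = re.compile(r'CMD "(?P<cmd>[^"]+)" returned: (?P<rc>\d+) in (?P<secs>[\d.]+)s')

def cmd_durations(lines):
    """Yield (command, return code, seconds) for each matching log line."""
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            yield m["cmd"], int(m["rc"]), float(m["secs"])

log = ['... CMD "ceph mon dump --format=json" returned: 0 in 0.642s execute ...']
print(list(cmd_durations(log)))  # [('ceph mon dump --format=json', 0, 0.642)]
```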
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.395 254096 DEBUG nova.virt.libvirt.vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-738000202',display_name='tempest-ServerActionsTestOtherB-server-738000202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-738000202',id=97,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-cz0mxxg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:25Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=e0098976-026f-43d8-b686-b2658f9aded9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.396 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.397 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.398 254096 DEBUG nova.objects.instance [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0098976-026f-43d8-b686-b2658f9aded9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.424 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <uuid>e0098976-026f-43d8-b686-b2658f9aded9</uuid>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <name>instance-00000061</name>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestOtherB-server-738000202</nova:name>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:35</nova:creationTime>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <nova:port uuid="591e580e-30bb-4c0d-b1fb-96d45eca5626">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <entry name="serial">e0098976-026f-43d8-b686-b2658f9aded9</entry>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <entry name="uuid">e0098976-026f-43d8-b686-b2658f9aded9</entry>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e0098976-026f-43d8-b686-b2658f9aded9_disk">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e0098976-026f-43d8-b686-b2658f9aded9_disk.config">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:7d:85:c3"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <target dev="tap591e580e-30"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/console.log" append="off"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:36 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:36 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:36 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:36 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.425 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Preparing to wait for external event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.425 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.426 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.426 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.427 254096 DEBUG nova.virt.libvirt.vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-738000202',display_name='tempest-ServerActionsTestOtherB-server-738000202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-738000202',id=97,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-cz0mxxg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-Serv
erActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:25Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=e0098976-026f-43d8-b686-b2658f9aded9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.427 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.428 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.428 254096 DEBUG os_vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.429 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap591e580e-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap591e580e-30, col_values=(('external_ids', {'iface-id': '591e580e-30bb-4c0d-b1fb-96d45eca5626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:85:c3', 'vm-uuid': 'e0098976-026f-43d8-b686-b2658f9aded9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:36 np0005535469 NetworkManager[48891]: <info>  [1764089316.4364] manager: (tap591e580e-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.449 254096 INFO os_vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30')#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.581 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.582 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.582 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:7d:85:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.582 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Using config drive#033[00m
Nov 25 11:48:36 np0005535469 nova_compute[254092]: 2025-11-25 16:48:36.625 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:36 np0005535469 podman[351766]: 2025-11-25 16:48:36.684253648 +0000 UTC m=+0.067759712 container create 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:48:36 np0005535469 systemd[1]: Started libpod-conmon-98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6.scope.
Nov 25 11:48:36 np0005535469 podman[351766]: 2025-11-25 16:48:36.656962907 +0000 UTC m=+0.040468961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:48:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:36 np0005535469 podman[351766]: 2025-11-25 16:48:36.918092803 +0000 UTC m=+0.301598887 container init 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:48:36 np0005535469 podman[351766]: 2025-11-25 16:48:36.927543819 +0000 UTC m=+0.311049883 container start 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:48:36 np0005535469 determined_shannon[351786]: 167 167
Nov 25 11:48:36 np0005535469 systemd[1]: libpod-98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6.scope: Deactivated successfully.
Nov 25 11:48:37 np0005535469 podman[351766]: 2025-11-25 16:48:37.053923524 +0000 UTC m=+0.437429608 container attach 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:48:37 np0005535469 podman[351766]: 2025-11-25 16:48:37.054412488 +0000 UTC m=+0.437918572 container died 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:48:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-90d49041b59eec2a9039dd14b5e8a99a464a2baeab226cef2a02078f18820fd6-merged.mount: Deactivated successfully.
Nov 25 11:48:37 np0005535469 podman[351766]: 2025-11-25 16:48:37.142889652 +0000 UTC m=+0.526395716 container remove 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:48:37 np0005535469 systemd[1]: libpod-conmon-98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6.scope: Deactivated successfully.
Nov 25 11:48:37 np0005535469 podman[351809]: 2025-11-25 16:48:37.345704124 +0000 UTC m=+0.043397081 container create 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 11:48:37 np0005535469 systemd[1]: Started libpod-conmon-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope.
Nov 25 11:48:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:48:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:37 np0005535469 podman[351809]: 2025-11-25 16:48:37.327992942 +0000 UTC m=+0.025685919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:48:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:48:37 np0005535469 podman[351809]: 2025-11-25 16:48:37.437691483 +0000 UTC m=+0.135384450 container init 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:48:37 np0005535469 podman[351809]: 2025-11-25 16:48:37.44418059 +0000 UTC m=+0.141873547 container start 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:48:37 np0005535469 podman[351809]: 2025-11-25 16:48:37.44787617 +0000 UTC m=+0.145569147 container attach 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:48:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.081 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Creating config drive at /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config#033[00m
Nov 25 11:48:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:38Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:86:24 10.100.0.12
Nov 25 11:48:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:38Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:86:24 10.100.0.12
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.088 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyhx0q1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.230 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyhx0q1t" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.259 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.263 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config e0098976-026f-43d8-b686-b2658f9aded9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.369 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.370 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.371 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.371 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.371 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.372 254096 WARNING nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.372 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.372 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.373 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.373 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.373 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Processing event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.374 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.393 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.398 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089318.3918734, 2e848add-8417-4307-8b01-f0d1c1a76cea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.399 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.410 254096 INFO nova.virt.libvirt.driver [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance spawned successfully.#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.410 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.428 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.435 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.438 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.439 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.439 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.440 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.440 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.441 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.445 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config e0098976-026f-43d8-b686-b2658f9aded9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.445 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deleting local config drive /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config because it was imported into RBD.#033[00m
Nov 25 11:48:38 np0005535469 crazy_payne[351826]: {
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "osd_id": 1,
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "type": "bluestore"
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:    },
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "osd_id": 2,
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "type": "bluestore"
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:    },
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "osd_id": 0,
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:        "type": "bluestore"
Nov 25 11:48:38 np0005535469 crazy_payne[351826]:    }
Nov 25 11:48:38 np0005535469 crazy_payne[351826]: }
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.470 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:38 np0005535469 systemd[1]: libpod-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope: Deactivated successfully.
Nov 25 11:48:38 np0005535469 systemd[1]: libpod-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope: Consumed 1.023s CPU time.
Nov 25 11:48:38 np0005535469 conmon[351826]: conmon 7c4fa8ab5f0fbfa47ff1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope/container/memory.events
Nov 25 11:48:38 np0005535469 podman[351809]: 2025-11-25 16:48:38.491176591 +0000 UTC m=+1.188869548 container died 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.494 254096 INFO nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 16.01 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.495 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:38 np0005535469 kernel: tap591e580e-30: entered promiscuous mode
Nov 25 11:48:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445-merged.mount: Deactivated successfully.
Nov 25 11:48:38 np0005535469 NetworkManager[48891]: <info>  [1764089318.5313] manager: (tap591e580e-30): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Nov 25 11:48:38 np0005535469 podman[351809]: 2025-11-25 16:48:38.558654056 +0000 UTC m=+1.256347013 container remove 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:48:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:38Z|00920|binding|INFO|Claiming lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 for this chassis.
Nov 25 11:48:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:38Z|00921|binding|INFO|591e580e-30bb-4c0d-b1fb-96d45eca5626: Claiming fa:16:3e:7d:85:c3 10.100.0.14
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.595 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:85:c3 10.100.0.14'], port_security=['fa:16:3e:7d:85:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e0098976-026f-43d8-b686-b2658f9aded9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=591e580e-30bb-4c0d-b1fb-96d45eca5626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.595 254096 INFO nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 16.97 seconds to build instance.#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.596 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 591e580e-30bb-4c0d-b1fb-96d45eca5626 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.598 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:48:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:38Z|00922|binding|INFO|Setting lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 ovn-installed in OVS
Nov 25 11:48:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:38Z|00923|binding|INFO|Setting lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 up in Southbound
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.612 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:38 np0005535469 systemd[1]: libpod-conmon-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope: Deactivated successfully.
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.627 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ac1649-cab5-48a4-bf4c-722e1a5fba92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:38 np0005535469 systemd-udevd[351925]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:48:38 np0005535469 systemd-machined[216343]: New machine qemu-119-instance-00000061.
Nov 25 11:48:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:48:38 np0005535469 systemd[1]: Started Virtual Machine qemu-119-instance-00000061.
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:38 np0005535469 NetworkManager[48891]: <info>  [1764089318.6567] device (tap591e580e-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:38 np0005535469 NetworkManager[48891]: <info>  [1764089318.6575] device (tap591e580e-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 36cba9bb-6e07-41b3-a1da-fc25d2fb2bfb does not exist
Nov 25 11:48:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c72f1883-c574-4dd4-b3d8-6abf7906f1ef does not exist
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.675 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41c46145-29fc-420f-af97-0a51da3d0101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.678 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eca482ed-428b-443b-95e0-f59cd239cee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.720 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6e6a57-cb7f-44db-820f-b7f172d3af9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9232a739-a63a-42a7-b860-01a535b6b076]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351960, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c787ac4-7c49-4544-b4fd-62a0ee480ac0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351964, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351964, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.768 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:38 np0005535469 nova_compute[254092]: 2025-11-25 16:48:38.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.777 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.779 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.163 254096 DEBUG nova.network.neutron [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updated VIF entry in instance network info cache for port 591e580e-30bb-4c0d-b1fb-96d45eca5626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.164 254096 DEBUG nova.network.neutron [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updating instance_info_cache with network_info: [{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.183 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:39.394 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:39.395 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:48:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:39.396 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.413 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089319.4134295, e0098976-026f-43d8-b686-b2658f9aded9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.414 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Started (Lifecycle Event)#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.445 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.450 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089319.4158354, e0098976-026f-43d8-b686-b2658f9aded9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.450 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.486 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.490 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.508 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.983 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.983 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:39 np0005535469 nova_compute[254092]: 2025-11-25 16:48:39.997 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.069 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.069 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.076 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.076 254096 INFO nova.compute.claims [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:48:40
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.289 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.445 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.446 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.447 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.447 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.448 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.448 254096 WARNING nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.449 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.449 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.450 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.450 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.451 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Processing event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.451 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.452 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.452 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.453 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.453 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] No waiting events found dispatching network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.454 254096 WARNING nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received unexpected event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.456 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.465 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089320.4644217, e0098976-026f-43d8-b686-b2658f9aded9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.466 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.470 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.477 254096 INFO nova.virt.libvirt.driver [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance spawned successfully.#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.478 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.497 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.506 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.510 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.511 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.511 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.512 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.512 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.513 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.534 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.562 254096 INFO nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 14.73 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.563 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.621 254096 INFO nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 15.78 seconds to build instance.#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.668 254096 DEBUG nova.virt.libvirt.driver [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.682 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580077054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.845 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.853 254096 DEBUG nova.compute.provider_tree [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.868 254096 DEBUG nova.scheduler.client.report [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.892 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.893 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.937 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.938 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.961 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:48:40 np0005535469 nova_compute[254092]: 2025-11-25 16:48:40.984 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.068 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.070 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.070 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating image(s)#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.105 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.143 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.171 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.176 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.228 254096 DEBUG nova.policy [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '013868ddd96f43a49458a4615ab1f41b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '544c4f84ca494482aea8e55248fe4c62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.278 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.279 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.308 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.313 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:41 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.643 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.713 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] resizing rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:48:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 498 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 241 op/s
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.815 254096 DEBUG nova.objects.instance [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.829 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.829 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Ensure instance console log exists: /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.830 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.830 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:41 np0005535469 nova_compute[254092]: 2025-11-25 16:48:41.830 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:42 np0005535469 nova_compute[254092]: 2025-11-25 16:48:42.252 254096 INFO nova.compute.manager [None req-928dc9f7-2545-4a7a-82ad-01f56046a4b4 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Get console output#033[00m
Nov 25 11:48:42 np0005535469 nova_compute[254092]: 2025-11-25 16:48:42.260 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:48:42 np0005535469 nova_compute[254092]: 2025-11-25 16:48:42.672 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Successfully created port: acb7d65c-0259-4a39-94f8-d7f64637a340 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:48:43 np0005535469 kernel: tapcce0c8fe-e8 (unregistering): left promiscuous mode
Nov 25 11:48:43 np0005535469 NetworkManager[48891]: <info>  [1764089323.0133] device (tapcce0c8fe-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:48:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:43Z|00924|binding|INFO|Releasing lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 from this chassis (sb_readonly=0)
Nov 25 11:48:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:43Z|00925|binding|INFO|Setting lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 down in Southbound
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:43Z|00926|binding|INFO|Removing iface tapcce0c8fe-e8 ovn-installed in OVS
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.043 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:86:24 10.100.0.12'], port_security=['fa:16:3e:87:86:24 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4677de7c-6625-4c98-a065-214341d8bfea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.044 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.046 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.075 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ede233-fa98-45a5-9fe6-35275301765b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.116 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[adaaa9ee-062d-4f7b-b105-c944cdd27c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:43 np0005535469 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 25 11:48:43 np0005535469 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Consumed 15.025s CPU time.
Nov 25 11:48:43 np0005535469 systemd-machined[216343]: Machine qemu-116-instance-0000005f terminated.
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.126 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4f530e-0917-4e7f-98ce-b907e9f33695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.161 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c78730-c67c-4ccd-a517-715c263ac65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86f5cd6f-fc65-411d-b8b0-ae8cf22d640a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352231, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.205 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a700c62f-51d4-460b-931e-21be55262838]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352232, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352232, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.207 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.209 254096 DEBUG nova.compute.manager [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG nova.compute.manager [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing instance network info cache due to event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG oslo_concurrency.lockutils [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG oslo_concurrency.lockutils [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG nova.network.neutron [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.214 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.214 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.215 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.215 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.231 254096 DEBUG nova.compute.manager [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-unplugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.231 254096 DEBUG oslo_concurrency.lockutils [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 DEBUG oslo_concurrency.lockutils [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 DEBUG oslo_concurrency.lockutils [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 DEBUG nova.compute.manager [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] No waiting events found dispatching network-vif-unplugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 WARNING nova.compute.manager [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received unexpected event network-vif-unplugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.340 254096 INFO nova.compute.manager [None req-0bf81e2e-b183-4c87-98ac-310975a44648 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Get console output#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.345 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.750 254096 INFO nova.virt.libvirt.driver [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance destroyed successfully.#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.756 254096 DEBUG nova.objects.instance [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.764 254096 DEBUG nova.compute.manager [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 498 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Nov 25 11:48:43 np0005535469 nova_compute[254092]: 2025-11-25 16:48:43.803 254096 DEBUG oslo_concurrency.lockutils [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.100 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Successfully updated port: acb7d65c-0259-4a39-94f8-d7f64637a340 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.115 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.115 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.115 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.281 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.430 254096 INFO nova.compute.manager [None req-fdd6ad2a-b038-4244-a98d-0f753ea0b6f9 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Get console output#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.438 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.641 254096 DEBUG nova.network.neutron [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updated VIF entry in instance network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.642 254096 DEBUG nova.network.neutron [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:44 np0005535469 nova_compute[254092]: 2025-11-25 16:48:44.664 254096 DEBUG oslo_concurrency.lockutils [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.255 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.271 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.272 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance network_info: |[{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.274 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start _get_guest_xml network_info=[{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.278 254096 WARNING nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.284 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.284 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.287 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.288 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.288 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.289 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.289 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.289 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.290 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.290 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.290 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.291 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.291 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.292 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.292 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.292 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.297 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.552 254096 DEBUG nova.compute.manager [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-changed-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.554 254096 DEBUG nova.compute.manager [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Refreshing instance network info cache due to event network-changed-acb7d65c-0259-4a39-94f8-d7f64637a340. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.554 254096 DEBUG oslo_concurrency.lockutils [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.554 254096 DEBUG oslo_concurrency.lockutils [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.555 254096 DEBUG nova.network.neutron [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Refreshing network info cache for port acb7d65c-0259-4a39-94f8-d7f64637a340 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.627 254096 DEBUG nova.compute.manager [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.628 254096 DEBUG oslo_concurrency.lockutils [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.629 254096 DEBUG oslo_concurrency.lockutils [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.630 254096 DEBUG oslo_concurrency.lockutils [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.630 254096 DEBUG nova.compute.manager [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] No waiting events found dispatching network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.631 254096 WARNING nova.compute.manager [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received unexpected event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:48:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 529 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.3 MiB/s wr, 304 op/s
Nov 25 11:48:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825936751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.830 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.862 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:45 np0005535469 nova_compute[254092]: 2025-11-25 16:48:45.869 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927801959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.333 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.336 254096 DEBUG nova.virt.libvirt.vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:41Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.337 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.338 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.341 254096 DEBUG nova.objects.instance [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.354 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <uuid>6b74b880-45f6-4f10-b09f-2696629a42e9</uuid>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <name>instance-00000062</name>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueTestJSON-server-1318564784</nova:name>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:45</nova:creationTime>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <nova:port uuid="acb7d65c-0259-4a39-94f8-d7f64637a340">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <entry name="serial">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <entry name="uuid">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b2:d7:96"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <target dev="tapacb7d65c-02"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/console.log" append="off"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:46 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:46 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:46 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:46 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.361 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Preparing to wait for external event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.361 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.362 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.362 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.363 254096 DEBUG nova.virt.libvirt.vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:41Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.364 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.365 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.366 254096 DEBUG os_vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.368 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.368 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.376 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacb7d65c-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.377 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacb7d65c-02, col_values=(('external_ids', {'iface-id': 'acb7d65c-0259-4a39-94f8-d7f64637a340', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:d7:96', 'vm-uuid': '6b74b880-45f6-4f10-b09f-2696629a42e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 NetworkManager[48891]: <info>  [1764089326.3811] manager: (tapacb7d65c-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.393 254096 INFO os_vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02')#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.449 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.450 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.450 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:b2:d7:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.451 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Using config drive#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.483 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.870 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.872 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.872 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.873 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.873 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.876 254096 INFO nova.compute.manager [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Terminating instance#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.878 254096 DEBUG nova.compute.manager [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.891 254096 INFO nova.virt.libvirt.driver [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance destroyed successfully.#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.892 254096 DEBUG nova.objects.instance [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.903 254096 DEBUG nova.virt.libvirt.vif [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1206046660',display_name='tempest-Íñstáñcé-1734518290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1206046660',id=95,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-g8d6ljpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:44Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4677de7c-6625-4c98-a065-214341d8bfea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.904 254096 DEBUG nova.network.os_vif_util [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.906 254096 DEBUG nova.network.os_vif_util [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.906 254096 DEBUG os_vif [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.911 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce0c8fe-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.923 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating config drive at /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.932 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46_uoyrr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.994 254096 DEBUG nova.network.neutron [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updated VIF entry in instance network info cache for port acb7d65c-0259-4a39-94f8-d7f64637a340. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:46 np0005535469 nova_compute[254092]: 2025-11-25 16:48:46.995 254096 DEBUG nova.network.neutron [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.002 254096 INFO os_vif [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8')#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.030 254096 DEBUG oslo_concurrency.lockutils [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.107 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46_uoyrr" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.149 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.156 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.627 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.628 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deleting local config drive /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config because it was imported into RBD.#033[00m
Nov 25 11:48:47 np0005535469 kernel: tapacb7d65c-02: entered promiscuous mode
Nov 25 11:48:47 np0005535469 NetworkManager[48891]: <info>  [1764089327.6948] manager: (tapacb7d65c-02): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Nov 25 11:48:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:47Z|00927|binding|INFO|Claiming lport acb7d65c-0259-4a39-94f8-d7f64637a340 for this chassis.
Nov 25 11:48:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:47Z|00928|binding|INFO|acb7d65c-0259-4a39-94f8-d7f64637a340: Claiming fa:16:3e:b2:d7:96 10.100.0.2
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.715 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:47 np0005535469 podman[352391]: 2025-11-25 16:48:47.716228743 +0000 UTC m=+0.118869121 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 11:48:47 np0005535469 podman[352392]: 2025-11-25 16:48:47.712822781 +0000 UTC m=+0.116973879 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 11:48:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.717 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis#033[00m
Nov 25 11:48:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.718 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:47Z|00929|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 ovn-installed in OVS
Nov 25 11:48:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:47Z|00930|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 up in Southbound
Nov 25 11:48:47 np0005535469 nova_compute[254092]: 2025-11-25 16:48:47.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[102de5a2-0da6-47ce-9fb2-c91d9fa3c52b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:47 np0005535469 systemd-udevd[352462]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:47 np0005535469 systemd-machined[216343]: New machine qemu-120-instance-00000062.
Nov 25 11:48:47 np0005535469 podman[352393]: 2025-11-25 16:48:47.755943713 +0000 UTC m=+0.160497613 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:48:47 np0005535469 NetworkManager[48891]: <info>  [1764089327.7576] device (tapacb7d65c-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:47 np0005535469 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Nov 25 11:48:47 np0005535469 NetworkManager[48891]: <info>  [1764089327.7590] device (tapacb7d65c-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 544 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 276 op/s
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.348 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089328.3473387, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.348 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Started (Lifecycle Event)#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.365 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.371 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089328.3475723, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.371 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.388 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.398 254096 INFO nova.virt.libvirt.driver [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deleting instance files /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea_del#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.400 254096 INFO nova.virt.libvirt.driver [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deletion of /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea_del complete#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.408 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.429 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.490 254096 INFO nova.compute.manager [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 1.61 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.492 254096 DEBUG oslo.service.loopingcall [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.492 254096 DEBUG nova.compute.manager [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.492 254096 DEBUG nova.network.neutron [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:48:48 np0005535469 nova_compute[254092]: 2025-11-25 16:48:48.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 544 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 242 op/s
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.193 254096 DEBUG nova.compute.manager [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.195 254096 DEBUG oslo_concurrency.lockutils [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.196 254096 DEBUG oslo_concurrency.lockutils [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.197 254096 DEBUG oslo_concurrency.lockutils [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.197 254096 DEBUG nova.compute.manager [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Processing event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.198 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.213 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089330.212602, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.214 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.215 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.229 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance spawned successfully.#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.230 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.236170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330236243, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1791, "num_deletes": 258, "total_data_size": 2546075, "memory_usage": 2597120, "flush_reason": "Manual Compaction"}
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.241 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330254009, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2494121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38392, "largest_seqno": 40182, "table_properties": {"data_size": 2485958, "index_size": 4849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18152, "raw_average_key_size": 20, "raw_value_size": 2469143, "raw_average_value_size": 2825, "num_data_blocks": 214, "num_entries": 874, "num_filter_entries": 874, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089186, "oldest_key_time": 1764089186, "file_creation_time": 1764089330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 17881 microseconds, and 6956 cpu microseconds.
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.254064) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2494121 bytes OK
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.254089) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.255826) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.255842) EVENT_LOG_v1 {"time_micros": 1764089330255837, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.255864) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2538200, prev total WAL file size 2538200, number of live WAL files 2.
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.256812) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2435KB)], [86(8673KB)]
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330256946, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11375391, "oldest_snapshot_seqno": -1}
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.261 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.266 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.267 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.267 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.268 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.269 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.269 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.302 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6504 keys, 9725769 bytes, temperature: kUnknown
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330341098, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9725769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9681409, "index_size": 26970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165699, "raw_average_key_size": 25, "raw_value_size": 9564101, "raw_average_value_size": 1470, "num_data_blocks": 1085, "num_entries": 6504, "num_filter_entries": 6504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.341376) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9725769 bytes
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.343113) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.1 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.5) write-amplify(3.9) OK, records in: 7034, records dropped: 530 output_compression: NoCompression
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.343143) EVENT_LOG_v1 {"time_micros": 1764089330343131, "job": 50, "event": "compaction_finished", "compaction_time_micros": 84220, "compaction_time_cpu_micros": 46733, "output_level": 6, "num_output_files": 1, "total_output_size": 9725769, "num_input_records": 7034, "num_output_records": 6504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330343823, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.344 254096 INFO nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 9.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.345 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330346490, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.256623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:48:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.409 254096 INFO nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 10.37 seconds to build instance.#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.424 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.580 254096 DEBUG nova.network.neutron [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.607 254096 INFO nova.compute.manager [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 2.11 seconds to deallocate network for instance.#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.672 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.672 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.775 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.775 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.792 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.853 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:50 np0005535469 nova_compute[254092]: 2025-11-25 16:48:50.919 254096 DEBUG oslo_concurrency.processutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004427185078465846 of space, bias 1.0, pg target 1.3281555235397537 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 11:48:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791875219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.494 254096 DEBUG oslo_concurrency.processutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.501 254096 DEBUG nova.compute.provider_tree [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.518 254096 DEBUG nova.scheduler.client.report [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.551 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.556 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.564 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.564 254096 INFO nova.compute.claims [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.582 254096 INFO nova.scheduler.client.report [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 4677de7c-6625-4c98-a065-214341d8bfea#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.711 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 467 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 328 op/s
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.885 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.953 254096 INFO nova.compute.manager [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Rescuing#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.954 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.954 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:51 np0005535469 nova_compute[254092]: 2025-11-25 16:48:51.954 254096 DEBUG nova.network.neutron [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.264 254096 DEBUG nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG oslo_concurrency.lockutils [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG oslo_concurrency.lockutils [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG oslo_concurrency.lockutils [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.266 254096 WARNING nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.266 254096 DEBUG nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-deleted-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357231945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.489 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.497 254096 DEBUG nova.compute.provider_tree [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.512 254096 DEBUG nova.scheduler.client.report [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.530 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.531 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.578 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.578 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.600 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.620 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.717 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.719 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.720 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Creating image(s)#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.748 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.781 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.830 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.849 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.915 254096 DEBUG nova.policy [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23f6db77558a477bbd8b8b46cb4107d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.974 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.976 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.977 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:52 np0005535469 nova_compute[254092]: 2025-11-25 16:48:52.977 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.017 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.031 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9b189bbf-2581-4656-83da-12707f48dccc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.124 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.124 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.125 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.125 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.126 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.127 254096 INFO nova.compute.manager [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Terminating instance#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.128 254096 DEBUG nova.compute.manager [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:48:53 np0005535469 kernel: tap419102e4-bc (unregistering): left promiscuous mode
Nov 25 11:48:53 np0005535469 NetworkManager[48891]: <info>  [1764089333.2151] device (tap419102e4-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:48:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:53Z|00931|binding|INFO|Releasing lport 419102e4-bcb4-496b-b45c-fba9f5525746 from this chassis (sb_readonly=0)
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.227 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:53Z|00932|binding|INFO|Setting lport 419102e4-bcb4-496b-b45c-fba9f5525746 down in Southbound
Nov 25 11:48:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:53Z|00933|binding|INFO|Removing iface tap419102e4-bc ovn-installed in OVS
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.239 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:0f:a2 10.100.0.7'], port_security=['fa:16:3e:08:0f:a2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8b20d119-17cb-4742-9223-90e5020f93a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=419102e4-bcb4-496b-b45c-fba9f5525746) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.240 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 419102e4-bcb4-496b-b45c-fba9f5525746 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.242 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1431a6bc-93c8-4db5-a148-b2950f02c941, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88c90851-0864-456f-8e39-43348ab1578c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.245 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 namespace which is not needed anymore#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 25 11:48:53 np0005535469 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Consumed 19.289s CPU time.
Nov 25 11:48:53 np0005535469 systemd-machined[216343]: Machine qemu-103-instance-00000053 terminated.
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.357 254096 DEBUG nova.network.neutron [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.383 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.404 254096 INFO nova.virt.libvirt.driver [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance destroyed successfully.#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.405 254096 DEBUG nova.objects.instance [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 8b20d119-17cb-4742-9223-90e5020f93a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.421 254096 DEBUG nova.compute.manager [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-unplugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.421 254096 DEBUG oslo_concurrency.lockutils [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG oslo_concurrency.lockutils [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG oslo_concurrency.lockutils [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG nova.compute.manager [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] No waiting events found dispatching network-vif-unplugged-419102e4-bcb4-496b-b45c-fba9f5525746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG nova.compute.manager [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-unplugged-419102e4-bcb4-496b-b45c-fba9f5525746 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.424 254096 DEBUG nova.virt.libvirt.vif [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1354265431',display_name='tempest-₡-1354265431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1354265431',id=83,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-wvl75eev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:23Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=8b20d119-17cb-4742-9223-90e5020f93a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.425 254096 DEBUG nova.network.os_vif_util [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.425 254096 DEBUG nova.network.os_vif_util [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.426 254096 DEBUG os_vif [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.429 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap419102e4-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.441 254096 INFO os_vif [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc')#033[00m
Nov 25 11:48:53 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : haproxy version is 2.8.14-c23fe91
Nov 25 11:48:53 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : path to executable is /usr/sbin/haproxy
Nov 25 11:48:53 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [WARNING]  (342061) : Exiting Master process...
Nov 25 11:48:53 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [ALERT]    (342061) : Current worker (342063) exited with code 143 (Terminated)
Nov 25 11:48:53 np0005535469 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [WARNING]  (342061) : All workers exited. Exiting... (0)
Nov 25 11:48:53 np0005535469 systemd[1]: libpod-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815.scope: Deactivated successfully.
Nov 25 11:48:53 np0005535469 podman[352682]: 2025-11-25 16:48:53.512486738 +0000 UTC m=+0.066222261 container died 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:48:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815-userdata-shm.mount: Deactivated successfully.
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.555 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9b189bbf-2581-4656-83da-12707f48dccc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1774b9fa250e4e770fd47cef979c1f2ebcd875e71157ccd0e21761837005a179-merged.mount: Deactivated successfully.
Nov 25 11:48:53 np0005535469 podman[352682]: 2025-11-25 16:48:53.58137059 +0000 UTC m=+0.135106093 container cleanup 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:48:53 np0005535469 systemd[1]: libpod-conmon-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815.scope: Deactivated successfully.
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.684 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] resizing rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:48:53 np0005535469 podman[352742]: 2025-11-25 16:48:53.71349612 +0000 UTC m=+0.096425301 container remove 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.729 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Successfully created port: e7e60738-4c0d-46ae-a9b6-1477573be82f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.729 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[953fe9ab-af21-4469-93c6-3d691222ef27]: (4, ('Tue Nov 25 04:48:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 (5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815)\n5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815\nTue Nov 25 04:48:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 (5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815)\n5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.732 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[db59737d-6311-4c79-9cf2-0595c575b70e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.733 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.734 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 kernel: tap1431a6bc-90: left promiscuous mode
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.767 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b91476-b86d-44a6-a911-49603c839748]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.785 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b677bc25-d2f7-4542-a885-c8383223f867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87c5c771-2cf6-4979-a385-3cac5086c510]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 467 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.810 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[272f2f83-19db-4e60-95f5-83937c01215d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563963, 'reachable_time': 32097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352796, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 systemd[1]: run-netns-ovnmeta\x2d1431a6bc\x2d93c8\x2d4db5\x2da148\x2db2950f02c941.mount: Deactivated successfully.
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.818 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:48:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.818 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[39f8aff9-22e1-4775-bd2b-228c99818e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.887 254096 DEBUG nova.objects.instance [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.903 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.904 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Ensure instance console log exists: /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.904 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.904 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:53 np0005535469 nova_compute[254092]: 2025-11-25 16:48:53.905 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:54 np0005535469 nova_compute[254092]: 2025-11-25 16:48:54.052 254096 INFO nova.virt.libvirt.driver [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deleting instance files /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7_del#033[00m
Nov 25 11:48:54 np0005535469 nova_compute[254092]: 2025-11-25 16:48:54.053 254096 INFO nova.virt.libvirt.driver [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deletion of /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7_del complete#033[00m
Nov 25 11:48:54 np0005535469 nova_compute[254092]: 2025-11-25 16:48:54.123 254096 INFO nova.compute.manager [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:48:54 np0005535469 nova_compute[254092]: 2025-11-25 16:48:54.124 254096 DEBUG oslo.service.loopingcall [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:48:54 np0005535469 nova_compute[254092]: 2025-11-25 16:48:54.124 254096 DEBUG nova.compute.manager [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:48:54 np0005535469 nova_compute[254092]: 2025-11-25 16:48:54.124 254096 DEBUG nova.network.neutron [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:48:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:55Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 11:48:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:55Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 11:48:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:48:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/870670633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:48:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:48:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/870670633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.487 254096 DEBUG nova.compute.manager [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG oslo_concurrency.lockutils [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG oslo_concurrency.lockutils [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG oslo_concurrency.lockutils [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG nova.compute.manager [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] No waiting events found dispatching network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 WARNING nova.compute.manager [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received unexpected event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:48:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:55Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:85:c3 10.100.0.14
Nov 25 11:48:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:55Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:85:c3 10.100.0.14
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.559 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Successfully updated port: e7e60738-4c0d-46ae-a9b6-1477573be82f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.571 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.572 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.572 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.610 254096 DEBUG nova.network.neutron [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.635 254096 INFO nova.compute.manager [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 1.51 seconds to deallocate network for instance.#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.653 254096 DEBUG nova.compute.manager [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.654 254096 DEBUG nova.compute.manager [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing instance network info cache due to event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.654 254096 DEBUG oslo_concurrency.lockutils [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.679 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.679 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.772 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:48:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 491 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.5 MiB/s wr, 352 op/s
Nov 25 11:48:55 np0005535469 nova_compute[254092]: 2025-11-25 16:48:55.846 254096 DEBUG oslo_concurrency.processutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:48:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:48:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216732842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:48:56 np0005535469 nova_compute[254092]: 2025-11-25 16:48:56.326 254096 DEBUG oslo_concurrency.processutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:56 np0005535469 nova_compute[254092]: 2025-11-25 16:48:56.333 254096 DEBUG nova.compute.provider_tree [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:48:56 np0005535469 nova_compute[254092]: 2025-11-25 16:48:56.348 254096 DEBUG nova.scheduler.client.report [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:48:56 np0005535469 nova_compute[254092]: 2025-11-25 16:48:56.366 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:56 np0005535469 nova_compute[254092]: 2025-11-25 16:48:56.409 254096 INFO nova.scheduler.client.report [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 8b20d119-17cb-4742-9223-90e5020f93a7#033[00m
Nov 25 11:48:56 np0005535469 nova_compute[254092]: 2025-11-25 16:48:56.496 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.319 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.335 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.336 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance network_info: |[{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.337 254096 DEBUG oslo_concurrency.lockutils [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.337 254096 DEBUG nova.network.neutron [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.341 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start _get_guest_xml network_info=[{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.346 254096 WARNING nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.351 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.351 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.357 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.357 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.358 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.358 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.361 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.361 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.365 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.721 254096 DEBUG nova.compute.manager [req-b7884fa5-2768-4534-a684-6950f234c848 req-a1ac26cd-9ce8-44e5-9e96-333fc89aa09e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-deleted-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 489 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 MiB/s wr, 326 op/s
Nov 25 11:48:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030378047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.866 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.923 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:57 np0005535469 nova_compute[254092]: 2025-11-25 16:48:57.928 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.269 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089323.2686913, 4677de7c-6625-4c98-a065-214341d8bfea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.270 254096 INFO nova.compute.manager [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.292 254096 DEBUG nova.compute.manager [None req-656ea098-bdf1-4d6a-978f-720237bd02b4 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:48:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:48:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266950693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.421 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.423 254096 DEBUG nova.virt.libvirt.vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1282242960',display_name='tempest-ServerActionsTestOtherB-server-1282242960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1282242960',id=99,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-ao13046e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActio
nsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:52Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=9b189bbf-2581-4656-83da-12707f48dccc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.424 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.425 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.427 254096 DEBUG nova.objects.instance [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.445 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <uuid>9b189bbf-2581-4656-83da-12707f48dccc</uuid>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <name>instance-00000063</name>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestOtherB-server-1282242960</nova:name>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:48:57</nova:creationTime>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <nova:port uuid="e7e60738-4c0d-46ae-a9b6-1477573be82f">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <entry name="serial">9b189bbf-2581-4656-83da-12707f48dccc</entry>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <entry name="uuid">9b189bbf-2581-4656-83da-12707f48dccc</entry>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9b189bbf-2581-4656-83da-12707f48dccc_disk">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9b189bbf-2581-4656-83da-12707f48dccc_disk.config">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f4:8d:f7"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <target dev="tape7e60738-4c"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/console.log" append="off"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:48:58 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:48:58 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:48:58 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:48:58 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.447 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Preparing to wait for external event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.447 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.448 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.448 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.449 254096 DEBUG nova.virt.libvirt.vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1282242960',display_name='tempest-ServerActionsTestOtherB-server-1282242960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1282242960',id=99,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-ao13046e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:52Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=9b189bbf-2581-4656-83da-12707f48dccc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.449 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.450 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.450 254096 DEBUG os_vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.451 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.452 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.454 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7e60738-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7e60738-4c, col_values=(('external_ids', {'iface-id': 'e7e60738-4c0d-46ae-a9b6-1477573be82f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:8d:f7', 'vm-uuid': '9b189bbf-2581-4656-83da-12707f48dccc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:58 np0005535469 NetworkManager[48891]: <info>  [1764089338.4576] manager: (tape7e60738-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.466 254096 INFO os_vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c')#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.524 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.525 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.525 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:f4:8d:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.526 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Using config drive#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.553 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:58 np0005535469 nova_compute[254092]: 2025-11-25 16:48:58.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.003 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Creating config drive at /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.008 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6au5smbz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.167 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6au5smbz" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.195 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.200 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config 9b189bbf-2581-4656-83da-12707f48dccc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.248 254096 DEBUG nova.network.neutron [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updated VIF entry in instance network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.250 254096 DEBUG nova.network.neutron [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.263 254096 DEBUG oslo_concurrency.lockutils [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.394 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config 9b189bbf-2581-4656-83da-12707f48dccc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.396 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Deleting local config drive /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config because it was imported into RBD.#033[00m
Nov 25 11:48:59 np0005535469 kernel: tape7e60738-4c: entered promiscuous mode
Nov 25 11:48:59 np0005535469 NetworkManager[48891]: <info>  [1764089339.4423] manager: (tape7e60738-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Nov 25 11:48:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:59Z|00934|binding|INFO|Claiming lport e7e60738-4c0d-46ae-a9b6-1477573be82f for this chassis.
Nov 25 11:48:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:59Z|00935|binding|INFO|e7e60738-4c0d-46ae-a9b6-1477573be82f: Claiming fa:16:3e:f4:8d:f7 10.100.0.10
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.453 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:8d:f7 10.100.0.10'], port_security=['fa:16:3e:f4:8d:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b189bbf-2581-4656-83da-12707f48dccc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7e60738-4c0d-46ae-a9b6-1477573be82f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7e60738-4c0d-46ae-a9b6-1477573be82f in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.456 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:48:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:59Z|00936|binding|INFO|Setting lport e7e60738-4c0d-46ae-a9b6-1477573be82f ovn-installed in OVS
Nov 25 11:48:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:48:59Z|00937|binding|INFO|Setting lport e7e60738-4c0d-46ae-a9b6-1477573be82f up in Southbound
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.475 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f10007b0-c016-469a-90e9-bc60c2a2758c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:59 np0005535469 systemd-udevd[352974]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:48:59 np0005535469 systemd-machined[216343]: New machine qemu-121-instance-00000063.
Nov 25 11:48:59 np0005535469 NetworkManager[48891]: <info>  [1764089339.4962] device (tape7e60738-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:48:59 np0005535469 NetworkManager[48891]: <info>  [1764089339.4970] device (tape7e60738-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:48:59 np0005535469 systemd[1]: Started Virtual Machine qemu-121-instance-00000063.
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.508 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e3c497-76a8-4f2c-a45d-0f0db1454a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.512 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[204c05c4-51ac-4910-bdf6-fee952e6705e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da2a4c4b-516c-4c34-88b4-ad10c44a6989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6015a2a-f7b5-48e1-a57f-a295a437bec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352984, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.571 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5de19d5d-4cea-4d2c-a4fe-fd25e15fa083]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352987, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352987, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.572 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:48:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:48:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 489 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.0 MiB/s wr, 303 op/s
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.950 254096 DEBUG nova.compute.manager [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.951 254096 DEBUG oslo_concurrency.lockutils [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.951 254096 DEBUG oslo_concurrency.lockutils [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.952 254096 DEBUG oslo_concurrency.lockutils [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:48:59 np0005535469 nova_compute[254092]: 2025-11-25 16:48:59.952 254096 DEBUG nova.compute.manager [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Processing event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.016 254096 INFO nova.compute.manager [None req-8dd18f63-bd90-48a8-abad-43109304b6de 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Get console output#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.023 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.055 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089340.0546904, 9b189bbf-2581-4656-83da-12707f48dccc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.056 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Started (Lifecycle Event)#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.058 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.083 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.086 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.093 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance spawned successfully.#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.094 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.120 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.121 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089340.0559328, 9b189bbf-2581-4656-83da-12707f48dccc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.121 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.128 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.128 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.128 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.129 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.129 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.129 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.136 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.139 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089340.0819197, 9b189bbf-2581-4656-83da-12707f48dccc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.139 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.165 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.169 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.192 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.230 254096 INFO nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Took 7.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.231 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.309 254096 INFO nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Took 9.48 seconds to build instance.#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.326 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.589 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.589 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.589 254096 INFO nova.compute.manager [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Rebooting instance#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.602 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.602 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:00 np0005535469 nova_compute[254092]: 2025-11-25 16:49:00.602 254096 DEBUG nova.network.neutron [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:49:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 500 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 340 op/s
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.181 254096 DEBUG nova.compute.manager [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.181 254096 DEBUG oslo_concurrency.lockutils [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 DEBUG oslo_concurrency.lockutils [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 DEBUG oslo_concurrency.lockutils [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 DEBUG nova.compute.manager [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] No waiting events found dispatching network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 WARNING nova.compute.manager [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received unexpected event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.502 254096 INFO nova.compute.manager [None req-1d5dcb97-81d5-4a32-92d8-10a29cd5ec87 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Pausing#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.503 254096 DEBUG nova.objects.instance [None req-1d5dcb97-81d5-4a32-92d8-10a29cd5ec87 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.526 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089342.526359, 9b189bbf-2581-4656-83da-12707f48dccc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.526 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.528 254096 DEBUG nova.compute.manager [None req-1d5dcb97-81d5-4a32-92d8-10a29cd5ec87 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.549 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.552 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:02 np0005535469 nova_compute[254092]: 2025-11-25 16:49:02.574 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 25 11:49:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:03Z|00938|binding|INFO|Releasing lport 82c4ad4d-388e-4238-98b3-8d58946e7829 from this chassis (sb_readonly=0)
Nov 25 11:49:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:03Z|00939|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.381 254096 DEBUG nova.network.neutron [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.394 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.395 254096 DEBUG nova.compute.manager [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 500 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 255 op/s
Nov 25 11:49:03 np0005535469 nova_compute[254092]: 2025-11-25 16:49:03.806 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:49:05 np0005535469 kernel: tap5a3f34de-d3 (unregistering): left promiscuous mode
Nov 25 11:49:05 np0005535469 NetworkManager[48891]: <info>  [1764089345.7940] device (tap5a3f34de-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 517 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.4 MiB/s wr, 336 op/s
Nov 25 11:49:05 np0005535469 nova_compute[254092]: 2025-11-25 16:49:05.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:05Z|00940|binding|INFO|Releasing lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 from this chassis (sb_readonly=0)
Nov 25 11:49:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:05Z|00941|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 down in Southbound
Nov 25 11:49:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:05Z|00942|binding|INFO|Removing iface tap5a3f34de-d3 ovn-installed in OVS
Nov 25 11:49:05 np0005535469 nova_compute[254092]: 2025-11-25 16:49:05.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.815 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.816 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 unbound from our chassis#033[00m
Nov 25 11:49:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.817 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa283c2c-b597-4970-842d-f5f2b621b5f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:49:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.818 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a04b7832-016e-4096-8130-5af89298e006]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.818 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace which is not needed anymore#033[00m
Nov 25 11:49:05 np0005535469 nova_compute[254092]: 2025-11-25 16:49:05.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:05 np0005535469 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 25 11:49:05 np0005535469 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 16.991s CPU time.
Nov 25 11:49:05 np0005535469 systemd-machined[216343]: Machine qemu-118-instance-00000060 terminated.
Nov 25 11:49:05 np0005535469 nova_compute[254092]: 2025-11-25 16:49:05.956 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:05 np0005535469 nova_compute[254092]: 2025-11-25 16:49:05.957 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:05 np0005535469 nova_compute[254092]: 2025-11-25 16:49:05.957 254096 INFO nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Shelving#033[00m
Nov 25 11:49:05 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : haproxy version is 2.8.14-c23fe91
Nov 25 11:49:05 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : path to executable is /usr/sbin/haproxy
Nov 25 11:49:05 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [WARNING]  (351439) : Exiting Master process...
Nov 25 11:49:05 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [ALERT]    (351439) : Current worker (351441) exited with code 143 (Terminated)
Nov 25 11:49:05 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [WARNING]  (351439) : All workers exited. Exiting... (0)
Nov 25 11:49:05 np0005535469 systemd[1]: libpod-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope: Deactivated successfully.
Nov 25 11:49:05 np0005535469 conmon[351428]: conmon c66a1eef8323cd955df0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope/container/memory.events
Nov 25 11:49:05 np0005535469 podman[353057]: 2025-11-25 16:49:05.971960486 +0000 UTC m=+0.049498986 container died c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 11:49:06 np0005535469 kernel: tape7e60738-4c (unregistering): left promiscuous mode
Nov 25 11:49:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431-userdata-shm.mount: Deactivated successfully.
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.0377] device (tape7e60738-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9f65b33bb2cefdb9521aad3458f01f2c8320441c773aab270729c09b9835d7b2-merged.mount: Deactivated successfully.
Nov 25 11:49:06 np0005535469 podman[353057]: 2025-11-25 16:49:06.053912523 +0000 UTC m=+0.131450993 container cleanup c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00943|binding|INFO|Releasing lport e7e60738-4c0d-46ae-a9b6-1477573be82f from this chassis (sb_readonly=0)
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00944|binding|INFO|Setting lport e7e60738-4c0d-46ae-a9b6-1477573be82f down in Southbound
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00945|binding|INFO|Removing iface tape7e60738-4c ovn-installed in OVS
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.068 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:8d:f7 10.100.0.10'], port_security=['fa:16:3e:f4:8d:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b189bbf-2581-4656-83da-12707f48dccc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7e60738-4c0d-46ae-a9b6-1477573be82f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:06 np0005535469 systemd[1]: libpod-conmon-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope: Deactivated successfully.
Nov 25 11:49:06 np0005535469 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 25 11:49:06 np0005535469 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Consumed 3.013s CPU time.
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.089 254096 DEBUG nova.compute.manager [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.090 254096 DEBUG oslo_concurrency.lockutils [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.090 254096 DEBUG oslo_concurrency.lockutils [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:06 np0005535469 systemd-machined[216343]: Machine qemu-121-instance-00000063 terminated.
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.091 254096 DEBUG oslo_concurrency.lockutils [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.091 254096 DEBUG nova.compute.manager [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.091 254096 WARNING nova.compute.manager [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state reboot_started.#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.092 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 podman[353101]: 2025-11-25 16:49:06.132738426 +0000 UTC m=+0.047116452 container remove c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.139 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[908dd548-6edf-4b8b-b782-8074be0486b3]: (4, ('Tue Nov 25 04:49:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431)\nc66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431\nTue Nov 25 04:49:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431)\nc66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.141 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38dd46c2-94e9-4f4f-b1fc-c2cbcc944cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.142 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 kernel: tapaa283c2c-b0: left promiscuous mode
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.172 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.176 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52292048-6ae0-405f-b6a5-9a6745d1ef28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[099b889f-c811-499f-bf64-8a1eaeb175e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3cebd893-1b5d-41bd-bfa9-d7963f1735a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1db8266b-66c3-46f5-a1b0-385547f8b555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577088, 'reachable_time': 41301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353125, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 systemd[1]: run-netns-ovnmeta\x2daa283c2c\x2db597\x2d4970\x2d842d\x2df5f2b621b5f0.mount: Deactivated successfully.
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.216 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.216 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c39e737c-7511-47ab-b61e-9431b832e89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.220 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7e60738-4c0d-46ae-a9b6-1477573be82f in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.224 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.221 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance destroyed successfully.#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.221 254096 DEBUG nova.objects.instance [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.245 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad52e2b7-9486-44d9-8bed-0274a73d4012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.276 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2b91ee-c226-4019-a25b-e22bbb55c319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.280 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba055905-299c-48f6-ab56-4db6e35a3ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[658c1efa-8273-4bab-b281-6326aaf0baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.331 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abf15020-966b-45bf-bbfe-15337430124a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353138, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.348 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8fd1fc-56bb-4690-970f-ee84ae85ba2b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353139, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353139, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.350 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.360 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.362 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.541 254096 INFO nova.virt.libvirt.driver [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance shutdown successfully.#033[00m
Nov 25 11:49:06 np0005535469 kernel: tap5a3f34de-d3: entered promiscuous mode
Nov 25 11:49:06 np0005535469 systemd-udevd[353036]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.6041] manager: (tap5a3f34de-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00946|binding|INFO|Claiming lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 for this chassis.
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00947|binding|INFO|5a3f34de-d3de-439b-ac8f-baabc77892b4: Claiming fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.615 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.616 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 bound to our chassis#033[00m
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.6172] device (tap5a3f34de-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.618 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa283c2c-b597-4970-842d-f5f2b621b5f0#033[00m
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.6188] device (tap5a3f34de-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00948|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 ovn-installed in OVS
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00949|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 up in Southbound
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.630 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[669f9e97-076d-48ac-b47d-0284b86d3431]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.631 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa283c2c-b1 in ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.635 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa283c2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.636 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[514cce67-2d5a-4010-8c47-83045e49760b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 systemd-machined[216343]: New machine qemu-122-instance-00000060.
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.638 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62ff4d35-d1eb-4095-8399-a457d24f46c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.650 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2480afcb-6563-4fba-b68f-34d36632e7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 systemd[1]: Started Virtual Machine qemu-122-instance-00000060.
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.652 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Beginning cold snapshot process#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.673 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0a22af-ebb4-4e77-947d-fe959eeb1f2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.700 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a28c156f-4da0-4d51-a07a-bdb8168f28e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.705 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86ce96e7-d68b-4475-8def-5cfe88d396ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.7069] manager: (tapaa283c2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.737 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29477321-c854-4dd9-b463-85306684ad05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.740 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dcdcbc88-87ab-44f0-82bf-0250adaa6501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.7635] device (tapaa283c2c-b0): carrier: link connected
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.769 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c4969bd4-2362-4fc4-9d09-3b6d28c5ada4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b859f69-e3f0-4679-bf68-764bce827b42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580435, 'reachable_time': 42442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353222, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.796 254096 DEBUG nova.virt.libvirt.imagebackend [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.803 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94c17118-2f8c-47b5-b9bc-db4fd6f24a70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:7d28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580435, 'tstamp': 580435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353226, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.821 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab8e76a-8868-4524-96ee-4d9d8c3ad104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580435, 'reachable_time': 42442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353227, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.849 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c34f56bc-7d9d-4c11-b830-0312468f37aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe4919e-b0c4-4116-b756-f87cefa5c359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.918 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.918 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.918 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa283c2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 NetworkManager[48891]: <info>  [1764089346.9237] manager: (tapaa283c2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 kernel: tapaa283c2c-b0: entered promiscuous mode
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.929 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa283c2c-b0, col_values=(('external_ids', {'iface-id': '82c4ad4d-388e-4238-98b3-8d58946e7829'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:06Z|00950|binding|INFO|Releasing lport 82c4ad4d-388e-4238-98b3-8d58946e7829 from this chassis (sb_readonly=0)
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 nova_compute[254092]: 2025-11-25 16:49:06.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.957 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[391de876-7fc1-43c8-848a-efc6a8084776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.959 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:49:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.959 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'env', 'PROCESS_TAG=haproxy-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa283c2c-b597-4970-842d-f5f2b621b5f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.102 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(9c5fa938ca29422babcbba9eafcabdc9) on rbd image(9b189bbf-2581-4656-83da-12707f48dccc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.151 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 2e848add-8417-4307-8b01-f0d1c1a76cea due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.153 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089347.1076672, 2e848add-8417-4307-8b01-f0d1c1a76cea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.153 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.158 254096 INFO nova.virt.libvirt.driver [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance running successfully.#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.158 254096 INFO nova.virt.libvirt.driver [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance soft rebooted successfully.#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.158 254096 DEBUG nova.compute.manager [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.175 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.178 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.194 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.195 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089347.1096506, 2e848add-8417-4307-8b01-f0d1c1a76cea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.195 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Started (Lifecycle Event)#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.218 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.221 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:07 np0005535469 nova_compute[254092]: 2025-11-25 16:49:07.223 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:07 np0005535469 podman[353324]: 2025-11-25 16:49:07.349918342 +0000 UTC m=+0.048793436 container create ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:49:07 np0005535469 systemd[1]: Started libpod-conmon-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9.scope.
Nov 25 11:49:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:07 np0005535469 podman[353324]: 2025-11-25 16:49:07.325393186 +0000 UTC m=+0.024268300 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:49:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b2008feb8517d6fb5f144bab4ee20658f8fd3f00332a77040521f977c12c63/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:07 np0005535469 podman[353324]: 2025-11-25 16:49:07.442696444 +0000 UTC m=+0.141571568 container init ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:49:07 np0005535469 podman[353324]: 2025-11-25 16:49:07.44844532 +0000 UTC m=+0.147320414 container start ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:49:07 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : New worker (353347) forked
Nov 25 11:49:07 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : Loading success.
Nov 25 11:49:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 522 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 211 op/s
Nov 25 11:49:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 25 11:49:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 25 11:49:07 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.011 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/9b189bbf-2581-4656-83da-12707f48dccc_disk@9c5fa938ca29422babcbba9eafcabdc9 to images/e2fc087f-e2ce-46f4-acb0-4d7a135a78a9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.091 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/e2fc087f-e2ce-46f4-acb0-4d7a135a78a9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.230 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.231 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.231 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.231 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.232 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.232 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.233 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-unplugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.233 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.233 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] No waiting events found dispatching network-vif-unplugged-e7e60738-4c0d-46ae-a9b6-1477573be82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received unexpected event network-vif-unplugged-e7e60738-4c0d-46ae-a9b6-1477573be82f for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.235 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.235 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.235 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.236 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] No waiting events found dispatching network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.236 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received unexpected event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.236 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.238 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.238 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.238 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.291 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(9c5fa938ca29422babcbba9eafcabdc9) on rbd image(9b189bbf-2581-4656-83da-12707f48dccc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.368 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089333.3668997, 8b20d119-17cb-4742-9223-90e5020f93a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.369 254096 INFO nova.compute.manager [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.387 254096 DEBUG nova.compute.manager [None req-422fe19d-dec5-4646-a6fe-f9f60d422d60 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 25 11:49:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 25 11:49:08 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 25 11:49:08 np0005535469 nova_compute[254092]: 2025-11-25 16:49:08.996 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(e2fc087f-e2ce-46f4-acb0-4d7a135a78a9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:49:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 522 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 163 op/s
Nov 25 11:49:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 25 11:49:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 25 11:49:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 25 11:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:49:10 np0005535469 nova_compute[254092]: 2025-11-25 16:49:10.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.207 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Snapshot image upload complete#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.208 254096 DEBUG nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.276 254096 INFO nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Shelve offloading#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.284 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance destroyed successfully.#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.284 254096 DEBUG nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.286 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.286 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.286 254096 DEBUG nova.network.neutron [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:49:11 np0005535469 nova_compute[254092]: 2025-11-25 16:49:11.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.1 MiB/s wr, 332 op/s
Nov 25 11:49:12 np0005535469 nova_compute[254092]: 2025-11-25 16:49:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:13 np0005535469 nova_compute[254092]: 2025-11-25 16:49:13.368 254096 DEBUG nova.network.neutron [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:13 np0005535469 nova_compute[254092]: 2025-11-25 16:49:13.388 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:13 np0005535469 nova_compute[254092]: 2025-11-25 16:49:13.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:13.626 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:13 np0005535469 nova_compute[254092]: 2025-11-25 16:49:13.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.7 MiB/s wr, 278 op/s
Nov 25 11:49:14 np0005535469 nova_compute[254092]: 2025-11-25 16:49:14.849 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.129 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance destroyed successfully.#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.130 254096 DEBUG nova.objects.instance [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.141 254096 DEBUG nova.virt.libvirt.vif [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1282242960',display_name='tempest-ServerActionsTestOtherB-server-1282242960',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1282242960',id=99,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-ao13046e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:11.208042',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e2fc087f-e2ce-46f4-acb0-4d7a135a78a9'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:06Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=9b189bbf-2581-4656-83da-12707f48dccc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.141 254096 DEBUG nova.network.os_vif_util [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.142 254096 DEBUG nova.network.os_vif_util [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.143 254096 DEBUG os_vif [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7e60738-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.152 254096 INFO os_vif [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c')#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.228 254096 DEBUG nova.compute.manager [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.229 254096 DEBUG nova.compute.manager [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing instance network info cache due to event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.229 254096 DEBUG oslo_concurrency.lockutils [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.229 254096 DEBUG oslo_concurrency.lockutils [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.230 254096 DEBUG nova.network.neutron [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.544 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Deleting instance files /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc_del#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.545 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Deletion of /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc_del complete#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.776 254096 INFO nova.scheduler.client.report [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance 9b189bbf-2581-4656-83da-12707f48dccc#033[00m
Nov 25 11:49:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 232 op/s
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.821 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:15 np0005535469 nova_compute[254092]: 2025-11-25 16:49:15.822 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.107 254096 DEBUG oslo_concurrency.processutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 25 11:49:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 25 11:49:16 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:49:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869621011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.543 254096 DEBUG oslo_concurrency.processutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.549 254096 DEBUG nova.compute.provider_tree [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.563 254096 DEBUG nova.scheduler.client.report [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.581 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.610 254096 DEBUG nova.network.neutron [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updated VIF entry in instance network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.612 254096 DEBUG nova.network.neutron [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tape7e60738-4c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.630 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 10.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:16 np0005535469 nova_compute[254092]: 2025-11-25 16:49:16.637 254096 DEBUG oslo_concurrency.lockutils [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:17 np0005535469 kernel: tapacb7d65c-02 (unregistering): left promiscuous mode
Nov 25 11:49:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:17Z|00951|binding|INFO|Releasing lport acb7d65c-0259-4a39-94f8-d7f64637a340 from this chassis (sb_readonly=0)
Nov 25 11:49:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:17Z|00952|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 down in Southbound
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:17 np0005535469 NetworkManager[48891]: <info>  [1764089357.1895] device (tapacb7d65c-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:17Z|00953|binding|INFO|Removing iface tapacb7d65c-02 ovn-installed in OVS
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.202 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.204 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis#033[00m
Nov 25 11:49:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.205 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:49:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.207 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9f8006-d86a-4b40-9471-f1cf8de558de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:17 np0005535469 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 11:49:17 np0005535469 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 16.476s CPU time.
Nov 25 11:49:17 np0005535469 systemd-machined[216343]: Machine qemu-120-instance-00000062 terminated.
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 574 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 253 op/s
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.864 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance shutdown successfully after 24 seconds.#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.869 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.870 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.889 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Attempting rescue#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.890 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.896 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.896 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating image(s)#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.923 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.927 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.959 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.983 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:17 np0005535469 nova_compute[254092]: 2025-11-25 16:49:17.986 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.060 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.061 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.062 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.062 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.086 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.089 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.346 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.348 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.359 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.360 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start _get_guest_xml network_info=[{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:b2:d7:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.360 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.376 254096 WARNING nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.381 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.382 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.385 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.386 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.386 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.386 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.387 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.387 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.405 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:18 np0005535469 podman[353621]: 2025-11-25 16:49:18.664547239 +0000 UTC m=+0.068548053 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:49:18 np0005535469 podman[353611]: 2025-11-25 16:49:18.668319632 +0000 UTC m=+0.068820641 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:18 np0005535469 podman[353622]: 2025-11-25 16:49:18.71056307 +0000 UTC m=+0.113312140 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:49:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796629910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.899 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:18 np0005535469 nova_compute[254092]: 2025-11-25 16:49:18.900 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349042337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.398 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.399 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.665 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.666 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.666 254096 INFO nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Shelving#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.683 254096 DEBUG nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:49:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 574 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 206 op/s
Nov 25 11:49:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954827956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.890 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.891 254096 DEBUG nova.virt.libvirt.vif [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:50Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:b2:d7:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.892 254096 DEBUG nova.network.os_vif_util [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:b2:d7:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.892 254096 DEBUG nova.network.os_vif_util [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.893 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.911 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <uuid>6b74b880-45f6-4f10-b09f-2696629a42e9</uuid>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <name>instance-00000062</name>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueTestJSON-server-1318564784</nova:name>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:49:18</nova:creationTime>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <nova:port uuid="acb7d65c-0259-4a39-94f8-d7f64637a340">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <entry name="serial">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <entry name="uuid">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <target dev="vdb" bus="virtio"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b2:d7:96"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <target dev="tapacb7d65c-02"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/console.log" append="off"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:49:19 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:49:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:49:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:49:19 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.921 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.#033[00m
Nov 25 11:49:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326653674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.966 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.966 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.967 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.967 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:b2:d7:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.967 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Using config drive#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.991 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:19 np0005535469 nova_compute[254092]: 2025-11-25 16:49:19.996 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.010 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.041 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'keypairs' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.091 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.091 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.091 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.103 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.104 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.107 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:20 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:20Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.336 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.337 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2968MB free_disk=59.71924591064453GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.338 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.338 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.404 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73301044-3bad-4401-9e30-f009d417f662 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.404 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 435ae693-6844-49ae-977b-ec3aa89cfe70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.405 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2e848add-8417-4307-8b01-f0d1c1a76cea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.406 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e0098976-026f-43d8-b686-b2658f9aded9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6b74b880-45f6-4f10-b09f-2696629a42e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.408 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.490 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227010931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.949 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.954 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.968 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.989 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:49:20 np0005535469 nova_compute[254092]: 2025-11-25 16:49:20.990 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.085 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating config drive at /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.091 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_s7xgsjx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.212 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089346.2106142, 9b189bbf-2581-4656-83da-12707f48dccc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.212 254096 INFO nova.compute.manager [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.230 254096 DEBUG nova.compute.manager [None req-1ac10283-0f07-4e4b-a1fb-7fbea56a3f32 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.234 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_s7xgsjx" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.258 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.261 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.298 254096 DEBUG nova.compute.manager [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG oslo_concurrency.lockutils [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG oslo_concurrency.lockutils [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG oslo_concurrency.lockutils [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG nova.compute.manager [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 WARNING nova.compute.manager [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.423 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.424 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deleting local config drive /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue because it was imported into RBD.#033[00m
Nov 25 11:49:21 np0005535469 kernel: tapacb7d65c-02: entered promiscuous mode
Nov 25 11:49:21 np0005535469 NetworkManager[48891]: <info>  [1764089361.4926] manager: (tapacb7d65c-02): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00954|binding|INFO|Claiming lport acb7d65c-0259-4a39-94f8-d7f64637a340 for this chassis.
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00955|binding|INFO|acb7d65c-0259-4a39-94f8-d7f64637a340: Claiming fa:16:3e:b2:d7:96 10.100.0.2
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.532 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.533 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.534 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.535 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eff8398c-2fe6-433a-a5cb-a388e7c1b727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00956|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 ovn-installed in OVS
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00957|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 up in Southbound
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:21 np0005535469 systemd-machined[216343]: New machine qemu-123-instance-00000062.
Nov 25 11:49:21 np0005535469 systemd-udevd[353845]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:49:21 np0005535469 systemd[1]: Started Virtual Machine qemu-123-instance-00000062.
Nov 25 11:49:21 np0005535469 NetworkManager[48891]: <info>  [1764089361.5725] device (tapacb7d65c-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:49:21 np0005535469 NetworkManager[48891]: <info>  [1764089361.5736] device (tapacb7d65c-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:49:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 579 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Nov 25 11:49:21 np0005535469 kernel: tap792a5867-7e (unregistering): left promiscuous mode
Nov 25 11:49:21 np0005535469 NetworkManager[48891]: <info>  [1764089361.9435] device (tap792a5867-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00958|binding|INFO|Releasing lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 from this chassis (sb_readonly=0)
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00959|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 down in Southbound
Nov 25 11:49:21 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:21Z|00960|binding|INFO|Removing iface tap792a5867-7e ovn-installed in OVS
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.960 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.962 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.964 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.984 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57d0d611-d6a5-402e-8434-293fd583f386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.988 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.989 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.989 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.992 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 6b74b880-45f6-4f10-b09f-2696629a42e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.992 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089361.992157, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.993 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:49:21 np0005535469 nova_compute[254092]: 2025-11-25 16:49:21.996 254096 DEBUG nova.compute.manager [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:21 np0005535469 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 25 11:49:22 np0005535469 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 18.793s CPU time.
Nov 25 11:49:22 np0005535469 systemd-machined[216343]: Machine qemu-110-instance-00000059 terminated.
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.015 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[31b60150-2274-4a3c-a787-0992dd3ef747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.018 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c99faf4d-1865-42fe-882f-b50e67f2a233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.046 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2373f5b0-1ab5-4bb2-a657-481b75b09ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.051 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.055 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.055 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.056 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.057 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.068 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54e297e8-e832-4662-b495-e2ae154e30ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353922, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.080 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.081 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089361.994401, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.081 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Started (Lifecycle Event)#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.088 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[392d1cc6-5b26-4f4b-93f1-ffd0edbf7598]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353923, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353923, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.090 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.099 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.102 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.199 254096 DEBUG nova.compute.manager [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.199 254096 DEBUG oslo_concurrency.lockutils [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 DEBUG oslo_concurrency.lockutils [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 DEBUG oslo_concurrency.lockutils [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 DEBUG nova.compute.manager [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 WARNING nova.compute.manager [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state shelving.#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.701 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.707 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.#033[00m
Nov 25 11:49:22 np0005535469 nova_compute[254092]: 2025-11-25 16:49:22.707 254096 DEBUG nova.objects.instance [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'numa_topology' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.009 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning cold snapshot process#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.162 254096 DEBUG nova.virt.libvirt.imagebackend [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.507 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.507 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.507 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 WARNING nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 WARNING nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 WARNING nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.592 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(6afd30d4955a4854892e4c2ef0b91fab) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:49:23 np0005535469 nova_compute[254092]: 2025-11-25 16:49:23.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 579 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.217 254096 INFO nova.compute.manager [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Unrescuing#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.217 254096 DEBUG oslo_concurrency.lockutils [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.218 254096 DEBUG oslo_concurrency.lockutils [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.218 254096 DEBUG nova.network.neutron [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:49:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 25 11:49:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 25 11:49:24 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.291 254096 DEBUG nova.compute.manager [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.291 254096 DEBUG oslo_concurrency.lockutils [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 DEBUG oslo_concurrency.lockutils [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 DEBUG oslo_concurrency.lockutils [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 DEBUG nova.compute.manager [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 WARNING nova.compute.manager [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.313 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@6afd30d4955a4854892e4c2ef0b91fab to images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.428 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.710 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.752 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.754 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.759 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:24 np0005535469 nova_compute[254092]: 2025-11-25 16:49:24.832 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(6afd30d4955a4854892e4c2ef0b91fab) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:49:25 np0005535469 nova_compute[254092]: 2025-11-25 16:49:25.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 25 11:49:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 25 11:49:25 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 25 11:49:25 np0005535469 nova_compute[254092]: 2025-11-25 16:49:25.322 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(3855d8b5-0ce2-4690-ac71-e43d7c3e5764) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:49:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 640 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Nov 25 11:49:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.260 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:49:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 25 11:49:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 25 11:49:26 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.372 254096 INFO nova.compute.manager [None req-bf000a48-4c8d-4ec7-9285-6d1f000d0061 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Get console output#033[00m
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.379 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.387 254096 DEBUG nova.network.neutron [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.401 254096 DEBUG oslo_concurrency.lockutils [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.402 254096 DEBUG nova.objects.instance [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'flavor' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:26 np0005535469 kernel: tapacb7d65c-02 (unregistering): left promiscuous mode
Nov 25 11:49:26 np0005535469 NetworkManager[48891]: <info>  [1764089366.4768] device (tapacb7d65c-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00961|binding|INFO|Releasing lport acb7d65c-0259-4a39-94f8-d7f64637a340 from this chassis (sb_readonly=0)
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00962|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 down in Southbound
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00963|binding|INFO|Removing iface tapacb7d65c-02 ovn-installed in OVS
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:26 np0005535469 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:26 np0005535469 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000062.scope: Consumed 4.839s CPU time.
Nov 25 11:49:26 np0005535469 systemd-machined[216343]: Machine qemu-123-instance-00000062 terminated.
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.566 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.569 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis#033[00m
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.570 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1469518d-3ee8-44e0-a4ee-6e53be32dd82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.666 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.#033[00m
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.666 254096 DEBUG nova.objects.instance [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:26 np0005535469 kernel: tapacb7d65c-02: entered promiscuous mode
Nov 25 11:49:26 np0005535469 systemd-udevd[354081]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:49:26 np0005535469 NetworkManager[48891]: <info>  [1764089366.7790] manager: (tapacb7d65c-02): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00964|binding|INFO|Claiming lport acb7d65c-0259-4a39-94f8-d7f64637a340 for this chassis.
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00965|binding|INFO|acb7d65c-0259-4a39-94f8-d7f64637a340: Claiming fa:16:3e:b2:d7:96 10.100.0.2
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.790 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.791 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis#033[00m
Nov 25 11:49:26 np0005535469 NetworkManager[48891]: <info>  [1764089366.7940] device (tapacb7d65c-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.794 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:49:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.795 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[819b359c-941b-4f5b-961c-30950c5b768b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:26 np0005535469 NetworkManager[48891]: <info>  [1764089366.7963] device (tapacb7d65c-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00966|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 ovn-installed in OVS
Nov 25 11:49:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:26Z|00967|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 up in Southbound
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:26 np0005535469 systemd-machined[216343]: New machine qemu-124-instance-00000062.
Nov 25 11:49:26 np0005535469 nova_compute[254092]: 2025-11-25 16:49:26.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:26 np0005535469 systemd[1]: Started Virtual Machine qemu-124-instance-00000062.
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.227 254096 DEBUG nova.compute.manager [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.228 254096 DEBUG oslo_concurrency.lockutils [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.228 254096 DEBUG oslo_concurrency.lockutils [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.229 254096 DEBUG oslo_concurrency.lockutils [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.229 254096 DEBUG nova.compute.manager [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.229 254096 WARNING nova.compute.manager [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.369 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 6b74b880-45f6-4f10-b09f-2696629a42e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.370 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089367.3682847, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.370 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.393 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.398 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.422 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.422 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089367.3733737, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.422 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Started (Lifecycle Event)#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.443 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.453 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.477 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.740 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.741 254096 DEBUG nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 660 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.816 254096 DEBUG nova.compute.manager [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.938 254096 INFO nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Shelve offloading#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.946 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.946 254096 DEBUG nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.949 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.949 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:27 np0005535469 nova_compute[254092]: 2025-11-25 16:49:27.949 254096 DEBUG nova.network.neutron [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.361 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.362 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.362 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.362 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.363 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.364 254096 INFO nova.compute.manager [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Terminating instance#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.365 254096 DEBUG nova.compute.manager [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:49:28 np0005535469 kernel: tap5a3f34de-d3 (unregistering): left promiscuous mode
Nov 25 11:49:28 np0005535469 NetworkManager[48891]: <info>  [1764089368.4163] device (tap5a3f34de-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:28Z|00968|binding|INFO|Releasing lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 from this chassis (sb_readonly=0)
Nov 25 11:49:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:28Z|00969|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 down in Southbound
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:28Z|00970|binding|INFO|Removing iface tap5a3f34de-d3 ovn-installed in OVS
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.435 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.437 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 unbound from our chassis#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.448 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa283c2c-b597-4970-842d-f5f2b621b5f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.450 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2ea185-d7ac-4355-959c-ce88493ba9ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.451 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace which is not needed anymore#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 25 11:49:28 np0005535469 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000060.scope: Consumed 12.921s CPU time.
Nov 25 11:49:28 np0005535469 systemd-machined[216343]: Machine qemu-122-instance-00000060 terminated.
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.603 254096 INFO nova.virt.libvirt.driver [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance destroyed successfully.#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.603 254096 DEBUG nova.objects.instance [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 2e848add-8417-4307-8b01-f0d1c1a76cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:28 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : haproxy version is 2.8.14-c23fe91
Nov 25 11:49:28 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : path to executable is /usr/sbin/haproxy
Nov 25 11:49:28 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [WARNING]  (353345) : Exiting Master process...
Nov 25 11:49:28 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [ALERT]    (353345) : Current worker (353347) exited with code 143 (Terminated)
Nov 25 11:49:28 np0005535469 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [WARNING]  (353345) : All workers exited. Exiting... (0)
Nov 25 11:49:28 np0005535469 systemd[1]: libpod-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9.scope: Deactivated successfully.
Nov 25 11:49:28 np0005535469 podman[354208]: 2025-11-25 16:49:28.631729619 +0000 UTC m=+0.073791916 container died ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:49:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9-userdata-shm.mount: Deactivated successfully.
Nov 25 11:49:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a0b2008feb8517d6fb5f144bab4ee20658f8fd3f00332a77040521f977c12c63-merged.mount: Deactivated successfully.
Nov 25 11:49:28 np0005535469 podman[354208]: 2025-11-25 16:49:28.684340179 +0000 UTC m=+0.126402446 container cleanup ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:49:28 np0005535469 systemd[1]: libpod-conmon-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9.scope: Deactivated successfully.
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 podman[354248]: 2025-11-25 16:49:28.760780806 +0000 UTC m=+0.049809085 container remove ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.768 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae80180c-0adc-49e2-8f83-5b2936a32961]: (4, ('Tue Nov 25 04:49:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9)\nef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9\nTue Nov 25 04:49:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9)\nef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.768 254096 DEBUG nova.virt.libvirt.vif [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-721299492',display_name='tempest-TestNetworkAdvancedServerOps-server-721299492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-721299492',id=96,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIuS2h4G62fbKa1D8fQiWbH65PFkkRVLBed4wrkEeUlM++S4qN/mZDJoxB0We0lR2SolGZ26Txk6Ir9O+1WqdMaVC9PS7NmiU/+hEPFN6YieX+/K6w93NwRm1fHYEK0fbg==',key_name='tempest-TestNetworkAdvancedServerOps-1463569288',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-oidm56b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:07Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2e848add-8417-4307-8b01-f0d1c1a76cea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.769 254096 DEBUG nova.network.os_vif_util [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.769 254096 DEBUG nova.network.os_vif_util [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.770 254096 DEBUG os_vif [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92171fbe-66f3-4b5a-acda-df25dece9b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.771 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.772 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a3f34de-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:28 np0005535469 kernel: tapaa283c2c-b0: left promiscuous mode
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.797 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.800 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa515e8c-c7e4-4cb6-9ac7-547bad3146b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:28 np0005535469 nova_compute[254092]: 2025-11-25 16:49:28.804 254096 INFO os_vif [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3')#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.818 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[233e6c89-0b3c-4d1c-aa4d-59962aacb5e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.820 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[89795bc5-89bf-45cb-b02e-e315432f154f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.835 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a04c50a-19bb-4651-bd85-2a631b27ffc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580428, 'reachable_time': 20487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354283, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:28 np0005535469 systemd[1]: run-netns-ovnmeta\x2daa283c2c\x2db597\x2d4970\x2d842d\x2df5f2b621b5f0.mount: Deactivated successfully.
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.841 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:49:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.841 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb6faca-16b0-4df8-ad91-49d3ac4bd0b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.153 254096 INFO nova.virt.libvirt.driver [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deleting instance files /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea_del#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.153 254096 INFO nova.virt.libvirt.driver [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deletion of /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea_del complete#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.223 254096 INFO nova.compute.manager [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.224 254096 DEBUG oslo.service.loopingcall [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.224 254096 DEBUG nova.compute.manager [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.224 254096 DEBUG nova.network.neutron [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.354 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.355 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.355 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.355 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 WARNING nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 WARNING nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing instance network info cache due to event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.358 254096 DEBUG nova.network.neutron [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.749 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.751 254096 INFO nova.compute.manager [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Terminating instance#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.752 254096 DEBUG nova.compute.manager [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:49:29 np0005535469 kernel: tapacb7d65c-02 (unregistering): left promiscuous mode
Nov 25 11:49:29 np0005535469 NetworkManager[48891]: <info>  [1764089369.7950] device (tapacb7d65c-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:29Z|00971|binding|INFO|Releasing lport acb7d65c-0259-4a39-94f8-d7f64637a340 from this chassis (sb_readonly=0)
Nov 25 11:49:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:29Z|00972|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 down in Southbound
Nov 25 11:49:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:29Z|00973|binding|INFO|Removing iface tapacb7d65c-02 ovn-installed in OVS
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 660 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.826 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.828 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis#033[00m
Nov 25 11:49:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.829 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:49:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.830 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab2e61f-1fd5-43e8-8442-28932bfb88c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:29 np0005535469 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 11:49:29 np0005535469 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Consumed 2.948s CPU time.
Nov 25 11:49:29 np0005535469 systemd-machined[216343]: Machine qemu-124-instance-00000062 terminated.
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.948 254096 DEBUG nova.network.neutron [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.972 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.994 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.#033[00m
Nov 25 11:49:29 np0005535469 nova_compute[254092]: 2025-11-25 16:49:29.995 254096 DEBUG nova.objects.instance [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.008 254096 DEBUG nova.virt.libvirt.vif [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:27Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.009 254096 DEBUG nova.network.os_vif_util [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.010 254096 DEBUG nova.network.os_vif_util [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.010 254096 DEBUG os_vif [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.012 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacb7d65c-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.020 254096 INFO os_vif [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02')#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.307 254096 INFO nova.virt.libvirt.driver [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deleting instance files /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9_del#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.308 254096 INFO nova.virt.libvirt.driver [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deletion of /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9_del complete#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.354 254096 INFO nova.compute.manager [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.355 254096 DEBUG oslo.service.loopingcall [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.355 254096 DEBUG nova.compute.manager [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.355 254096 DEBUG nova.network.neutron [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.665 254096 DEBUG nova.network.neutron [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.688 254096 INFO nova.compute.manager [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 1.46 seconds to deallocate network for instance.#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.731 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.731 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:30 np0005535469 nova_compute[254092]: 2025-11-25 16:49:30.934 254096 DEBUG oslo_concurrency.processutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732144333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.402 254096 DEBUG oslo_concurrency.processutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.410 254096 DEBUG nova.compute.provider_tree [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.423 254096 DEBUG nova.scheduler.client.report [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.447 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.451 254096 DEBUG nova.network.neutron [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updated VIF entry in instance network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.451 254096 DEBUG nova.network.neutron [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.476 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.477 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.478 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.478 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.478 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.479 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.479 254096 WARNING nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.479 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.480 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.480 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.481 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.481 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.481 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.483 254096 INFO nova.scheduler.client.report [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 2e848add-8417-4307-8b01-f0d1c1a76cea#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.556 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.647 254096 DEBUG nova.network.neutron [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.668 254096 INFO nova.compute.manager [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 1.31 seconds to deallocate network for instance.#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.689 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.690 254096 DEBUG nova.objects.instance [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.713 254096 DEBUG nova.virt.libvirt.vif [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:27.741538',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3855d8b5-0ce2-4690-ac71-e43d7c3e5764'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.714 254096 DEBUG nova.network.os_vif_util [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.715 254096 DEBUG nova.network.os_vif_util [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.715 254096 DEBUG os_vif [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.718 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap792a5867-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.721 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.721 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.727 254096 INFO os_vif [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.801 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.803 254096 WARNING nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.803 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.805 254096 WARNING nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.805 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-deleted-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.805 254096 INFO nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Neutron deleted interface 5a3f34de-d3de-439b-ac8f-baabc77892b4; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.806 254096 DEBUG nova.network.neutron [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 25 11:49:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 487 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 488 op/s
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Detach interface failed, port_id=5a3f34de-d3de-439b-ac8f-baabc77892b4, reason: Instance 2e848add-8417-4307-8b01-f0d1c1a76cea could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.810 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.810 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.810 254096 WARNING nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state deleted and task_state None.
Nov 25 11:49:31 np0005535469 nova_compute[254092]: 2025-11-25 16:49:31.853 254096 DEBUG oslo_concurrency.processutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.051 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting instance files /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.052 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deletion of /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del complete
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.151 254096 INFO nova.scheduler.client.report [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance 73301044-3bad-4401-9e30-f009d417f662
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.202 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168623913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.312 254096 DEBUG oslo_concurrency.processutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.318 254096 DEBUG nova.compute.provider_tree [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.333 254096 DEBUG nova.scheduler.client.report [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.350 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.352 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.377 254096 INFO nova.scheduler.client.report [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Deleted allocations for instance 6b74b880-45f6-4f10-b09f-2696629a42e9
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.434 254096 DEBUG oslo_concurrency.processutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.476 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.715 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.716 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.729 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.800 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711582819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.914 254096 DEBUG oslo_concurrency.processutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.920 254096 DEBUG nova.compute.provider_tree [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.933 254096 DEBUG nova.scheduler.client.report [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.949 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.952 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.963 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.963 254096 INFO nova.compute.claims [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.983 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.983 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.983 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.984 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.984 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.985 254096 INFO nova.compute.manager [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Terminating instance
Nov 25 11:49:32 np0005535469 nova_compute[254092]: 2025-11-25 16:49:32.986 254096 DEBUG nova.compute.manager [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.019 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:33 np0005535469 kernel: tap431770e1-47 (unregistering): left promiscuous mode
Nov 25 11:49:33 np0005535469 NetworkManager[48891]: <info>  [1764089373.0401] device (tap431770e1-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:33Z|00974|binding|INFO|Releasing lport 431770e1-476d-40b3-8477-419b69aa4fe9 from this chassis (sb_readonly=0)
Nov 25 11:49:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:33Z|00975|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 down in Southbound
Nov 25 11:49:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:33Z|00976|binding|INFO|Removing iface tap431770e1-47 ovn-installed in OVS
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.099 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.105 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.107 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis#033[00m
Nov 25 11:49:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.108 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:49:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.109 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e96aaa9d-5ea6-422f-8948-84b09357f29c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.129 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:33 np0005535469 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 25 11:49:33 np0005535469 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Consumed 17.719s CPU time.
Nov 25 11:49:33 np0005535469 systemd-machined[216343]: Machine qemu-117-instance-0000005e terminated.
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.224 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance destroyed successfully.#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.225 254096 DEBUG nova.objects.instance [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.238 254096 DEBUG nova.virt.libvirt.vif [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:31Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.238 254096 DEBUG nova.network.os_vif_util [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.239 254096 DEBUG nova.network.os_vif_util [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.240 254096 DEBUG os_vif [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.242 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431770e1-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.248 254096 INFO os_vif [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47')#033[00m
Nov 25 11:49:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142638453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.615 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.620 254096 DEBUG nova.compute.provider_tree [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.633 254096 DEBUG nova.scheduler.client.report [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.655 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.656 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.704 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.705 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.728 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.742 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:49:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 487 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 310 op/s
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.830 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.832 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.832 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Creating image(s)#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.854 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.886 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.918 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.923 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.985 254096 INFO nova.virt.libvirt.driver [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deleting instance files /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70_del#033[00m
Nov 25 11:49:33 np0005535469 nova_compute[254092]: 2025-11-25 16:49:33.986 254096 INFO nova.virt.libvirt.driver [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deletion of /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70_del complete#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.015 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.017 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.017 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.018 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.043 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.048 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.094 254096 DEBUG nova.policy [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5aefdc701af340eba9e8201f5065511e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6ce5017d19f45bcb3b13bf55faa9493', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.101 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-deleted-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.101 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.101 254096 DEBUG oslo_concurrency.lockutils [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.102 254096 DEBUG oslo_concurrency.lockutils [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.102 254096 DEBUG oslo_concurrency.lockutils [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.102 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.103 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.109 254096 INFO nova.compute.manager [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 1.12 seconds to destroy the instance on the hypervisor.
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.109 254096 DEBUG oslo.service.loopingcall [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.110 254096 DEBUG nova.compute.manager [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.110 254096 DEBUG nova.network.neutron [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.351 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.416 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] resizing rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.505 254096 DEBUG nova.objects.instance [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f5465d3-64cd-46fb-af8f-3b29aef5123d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.516 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.517 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Ensure instance console log exists: /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.517 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.517 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.518 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:34Z|00977|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:49:34 np0005535469 nova_compute[254092]: 2025-11-25 16:49:34.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:49:35 np0005535469 nova_compute[254092]: 2025-11-25 16:49:35.531 254096 DEBUG nova.network.neutron [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:49:35 np0005535469 nova_compute[254092]: 2025-11-25 16:49:35.543 254096 INFO nova.compute.manager [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 1.43 seconds to deallocate network for instance.
Nov 25 11:49:35 np0005535469 nova_compute[254092]: 2025-11-25 16:49:35.576 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:35 np0005535469 nova_compute[254092]: 2025-11-25 16:49:35.576 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:35 np0005535469 nova_compute[254092]: 2025-11-25 16:49:35.644 254096 DEBUG oslo_concurrency.processutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:49:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 346 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 356 op/s
Nov 25 11:49:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476113371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.109 254096 DEBUG oslo_concurrency.processutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.121 254096 DEBUG nova.compute.provider_tree [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.139 254096 DEBUG nova.scheduler.client.report [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.163 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.194 254096 DEBUG nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.194 254096 DEBUG oslo_concurrency.lockutils [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 DEBUG oslo_concurrency.lockutils [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 DEBUG oslo_concurrency.lockutils [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 DEBUG nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 WARNING nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state deleted and task_state None.
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.196 254096 DEBUG nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-deleted-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:49:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 25 11:49:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 25 11:49:36 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.212 254096 INFO nova.scheduler.client.report [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Deleted allocations for instance 435ae693-6844-49ae-977b-ec3aa89cfe70
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.273 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Successfully created port: 9fefcfde-9e55-4ed2-8521-ee26704af28c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:49:36 np0005535469 nova_compute[254092]: 2025-11-25 16:49:36.303 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.204 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089362.2036006, 73301044-3bad-4401-9e30-f009d417f662 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.205 254096 INFO nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Stopped (Lifecycle Event)
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.229 254096 DEBUG nova.compute.manager [None req-8992db9e-1535-44aa-9f94-440498fe4e51 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.416 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Successfully updated port: 9fefcfde-9e55-4ed2-8521-ee26704af28c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.429 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.429 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquired lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.429 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:49:37 np0005535469 nova_compute[254092]: 2025-11-25 16:49:37.658 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:49:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 331 op/s
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.289 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.289 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.289 254096 INFO nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Unshelving
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.296 254096 DEBUG nova.compute.manager [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-changed-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.297 254096 DEBUG nova.compute.manager [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Refreshing instance network info cache due to event network-changed-9fefcfde-9e55-4ed2-8521-ee26704af28c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.297 254096 DEBUG oslo_concurrency.lockutils [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.361 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.362 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.366 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_requests' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.378 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'numa_topology' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.385 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.385 254096 INFO nova.compute.claims [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.537 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.856 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updating instance_info_cache with network_info: [{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.890 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Releasing lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.891 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance network_info: |[{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.892 254096 DEBUG oslo_concurrency.lockutils [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.892 254096 DEBUG nova.network.neutron [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Refreshing network info cache for port 9fefcfde-9e55-4ed2-8521-ee26704af28c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.896 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start _get_guest_xml network_info=[{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.902 254096 WARNING nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.910 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.911 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.918 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.920 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.920 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.921 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.923 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.923 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.926 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.927 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.927 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.928 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.928 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.929 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.930 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.930 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.936 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532709748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:38 np0005535469 nova_compute[254092]: 2025-11-25 16:49:38.996 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.004 254096 DEBUG nova.compute.provider_tree [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.025 254096 DEBUG nova.scheduler.client.report [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.048 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858223094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.397 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.425 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.431 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.487 254096 INFO nova.network.neutron [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:49:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d7eaa06a-174b-45e5-878a-d99ca07c3208 does not exist
Nov 25 11:49:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5375b7fb-238a-4959-b9ee-72f5004738fd does not exist
Nov 25 11:49:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8115eac4-eb4f-4556-9fdc-df210229f695 does not exist
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:49:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 331 op/s
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710776305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.895 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.897 254096 DEBUG nova.virt.libvirt.vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-2033799726',id=100,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6ce5017d19f45bcb3b13bf55faa9493',ramdisk_id='',reservation_id='r-ey2dhuzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-207
4703932',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:33Z,user_data=None,user_id='5aefdc701af340eba9e8201f5065511e',uuid=6f5465d3-64cd-46fb-af8f-3b29aef5123d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.898 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converting VIF {"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.899 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.900 254096 DEBUG nova.objects.instance [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f5465d3-64cd-46fb-af8f-3b29aef5123d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.911 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <uuid>6f5465d3-64cd-46fb-af8f-3b29aef5123d</uuid>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <name>instance-00000064</name>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-2033799726</nova:name>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:49:38</nova:creationTime>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:user uuid="5aefdc701af340eba9e8201f5065511e">tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member</nova:user>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:project uuid="b6ce5017d19f45bcb3b13bf55faa9493">tempest-ServersNegativeTestMultiTenantJSON-2074703932</nova:project>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <nova:port uuid="9fefcfde-9e55-4ed2-8521-ee26704af28c">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <entry name="serial">6f5465d3-64cd-46fb-af8f-3b29aef5123d</entry>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <entry name="uuid">6f5465d3-64cd-46fb-af8f-3b29aef5123d</entry>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ab:07:67"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <target dev="tap9fefcfde-9e"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/console.log" append="off"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:49:39 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:49:39 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:49:39 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:49:39 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.912 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Preparing to wait for external event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.912 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.913 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.913 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.914 254096 DEBUG nova.virt.libvirt.vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-2033799726',id=100,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6ce5017d19f45bcb3b13bf55faa9493',ramdisk_id='',reservation_id='r-ey2dhuzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:33Z,user_data=None,user_id='5aefdc701af340eba9e8201f5065511e',uuid=6f5465d3-64cd-46fb-af8f-3b29aef5123d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.914 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converting VIF {"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.915 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.915 254096 DEBUG os_vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.916 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.916 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fefcfde-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fefcfde-9e, col_values=(('external_ids', {'iface-id': '9fefcfde-9e55-4ed2-8521-ee26704af28c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:07:67', 'vm-uuid': '6f5465d3-64cd-46fb-af8f-3b29aef5123d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:39 np0005535469 NetworkManager[48891]: <info>  [1764089379.9237] manager: (tap9fefcfde-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.932 254096 INFO os_vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e')#033[00m
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:49:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:49:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:39.974 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:39.975 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.991 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.992 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.992 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] No VIF found with MAC fa:16:3e:ab:07:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:49:39 np0005535469 nova_compute[254092]: 2025-11-25 16:49:39.993 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Using config drive#033[00m
Nov 25 11:49:40 np0005535469 nova_compute[254092]: 2025-11-25 16:49:40.015 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:49:40
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root']
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:49:40 np0005535469 podman[355038]: 2025-11-25 16:49:40.351546337 +0000 UTC m=+0.049431534 container create b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 11:49:40 np0005535469 systemd[1]: Started libpod-conmon-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope.
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:49:40 np0005535469 podman[355038]: 2025-11-25 16:49:40.328724846 +0000 UTC m=+0.026609853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:49:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:40 np0005535469 podman[355038]: 2025-11-25 16:49:40.467919169 +0000 UTC m=+0.165804156 container init b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:49:40 np0005535469 podman[355038]: 2025-11-25 16:49:40.476380239 +0000 UTC m=+0.174265226 container start b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:49:40 np0005535469 podman[355038]: 2025-11-25 16:49:40.479177215 +0000 UTC m=+0.177062192 container attach b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:49:40 np0005535469 systemd[1]: libpod-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope: Deactivated successfully.
Nov 25 11:49:40 np0005535469 kind_jang[355054]: 167 167
Nov 25 11:49:40 np0005535469 conmon[355054]: conmon b181c1198f75f26cd4a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope/container/memory.events
Nov 25 11:49:40 np0005535469 podman[355059]: 2025-11-25 16:49:40.532580556 +0000 UTC m=+0.028486244 container died b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:49:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-240233f781cae97ed622517824bd205b119899a8ef95fe8e4f74da7328f38683-merged.mount: Deactivated successfully.
Nov 25 11:49:40 np0005535469 podman[355059]: 2025-11-25 16:49:40.570299652 +0000 UTC m=+0.066205330 container remove b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:49:40 np0005535469 systemd[1]: libpod-conmon-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope: Deactivated successfully.
Nov 25 11:49:40 np0005535469 podman[355081]: 2025-11-25 16:49:40.769100175 +0000 UTC m=+0.045415766 container create 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:49:40 np0005535469 systemd[1]: Started libpod-conmon-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope.
Nov 25 11:49:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:40 np0005535469 podman[355081]: 2025-11-25 16:49:40.751765123 +0000 UTC m=+0.028080734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:49:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:40 np0005535469 podman[355081]: 2025-11-25 16:49:40.869327437 +0000 UTC m=+0.145643048 container init 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:49:40 np0005535469 podman[355081]: 2025-11-25 16:49:40.880507671 +0000 UTC m=+0.156823262 container start 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:49:40 np0005535469 podman[355081]: 2025-11-25 16:49:40.885267211 +0000 UTC m=+0.161582842 container attach 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.064 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Creating config drive at /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.074 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbytd9blv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.239 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbytd9blv" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.281 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.288 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.467 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.469 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deleting local config drive /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config because it was imported into RBD.#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.480 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.481 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.481 254096 DEBUG nova.network.neutron [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:49:41 np0005535469 kernel: tap9fefcfde-9e: entered promiscuous mode
Nov 25 11:49:41 np0005535469 NetworkManager[48891]: <info>  [1764089381.5273] manager: (tap9fefcfde-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:41Z|00978|binding|INFO|Claiming lport 9fefcfde-9e55-4ed2-8521-ee26704af28c for this chassis.
Nov 25 11:49:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:41Z|00979|binding|INFO|9fefcfde-9e55-4ed2-8521-ee26704af28c: Claiming fa:16:3e:ab:07:67 10.100.0.7
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.572 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:07:67 10.100.0.7'], port_security=['fa:16:3e:ab:07:67 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f5465d3-64cd-46fb-af8f-3b29aef5123d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6ce5017d19f45bcb3b13bf55faa9493', 'neutron:revision_number': '2', 'neutron:security_group_ids': '25d23027-5b7a-4134-9db5-2818d483fb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65852f31-5e85-4656-9a53-3d977e20f573, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fefcfde-9e55-4ed2-8521-ee26704af28c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.573 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fefcfde-9e55-4ed2-8521-ee26704af28c in datapath e3082221-dfbe-4119-bc6f-940f05f1b99c bound to our chassis#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.575 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3082221-dfbe-4119-bc6f-940f05f1b99c#033[00m
Nov 25 11:49:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:41Z|00980|binding|INFO|Setting lport 9fefcfde-9e55-4ed2-8521-ee26704af28c ovn-installed in OVS
Nov 25 11:49:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:41Z|00981|binding|INFO|Setting lport 9fefcfde-9e55-4ed2-8521-ee26704af28c up in Southbound
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:41 np0005535469 systemd-udevd[355152]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[523db5e3-4893-4f66-8fe5-c1b874fc9757]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.592 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3082221-d1 in ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.597 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3082221-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4116c432-f6a3-4456-bce4-c026ef69081c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb906349-2f75-4b29-b446-b9345095d474]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 NetworkManager[48891]: <info>  [1764089381.6104] device (tap9fefcfde-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:49:41 np0005535469 NetworkManager[48891]: <info>  [1764089381.6114] device (tap9fefcfde-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:49:41 np0005535469 systemd-machined[216343]: New machine qemu-125-instance-00000064.
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.611 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd62473-578e-4887-8e38-783113b0cb1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 systemd[1]: Started Virtual Machine qemu-125-instance-00000064.
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1a9bb8-d573-4248-acb4-7ef2d6b20030]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.676 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[98375983-0a8f-49d7-a67c-53dc7dcbfce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.684 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bb64c1b4-d96f-44f4-be5f-64195145f487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 NetworkManager[48891]: <info>  [1764089381.6861] manager: (tape3082221-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.723 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5c048a-3028-4e7d-be9d-7574c0fa6cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.729 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5c6db5-69e8-4ff5-88de-4ed64664bcdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 NetworkManager[48891]: <info>  [1764089381.7574] device (tape3082221-d0): carrier: link connected
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.769 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[309a8f14-a3d1-4082-819d-2e35f10fc8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.791 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0f4748-b972-4997-843d-b07222225f77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3082221-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:f3:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583934, 'reachable_time': 16757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355199, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 292 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.813 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8692047a-c59c-48d0-99ee-980826675e97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:f3c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 583934, 'tstamp': 583934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355201, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.835 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c996469f-e842-4919-9f10-534142fb7481]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3082221-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:f3:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583934, 'reachable_time': 16757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355204, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.864 254096 DEBUG nova.compute.manager [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.864 254096 DEBUG nova.compute.manager [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing instance network info cache due to event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.865 254096 DEBUG oslo_concurrency.lockutils [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0342ae91-e180-40ba-b908-f4d9e4bc7423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.887 254096 DEBUG nova.network.neutron [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updated VIF entry in instance network info cache for port 9fefcfde-9e55-4ed2-8521-ee26704af28c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.887 254096 DEBUG nova.network.neutron [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updating instance_info_cache with network_info: [{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.898 254096 DEBUG oslo_concurrency.lockutils [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[029f02e5-178e-4319-9876-776a322ad663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.954 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3082221-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.954 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.954 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3082221-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:41 np0005535469 NetworkManager[48891]: <info>  [1764089381.9576] manager: (tape3082221-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 25 11:49:41 np0005535469 kernel: tape3082221-d0: entered promiscuous mode
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.960 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3082221-d0, col_values=(('external_ids', {'iface-id': '00283bd6-1ec2-4a8e-b502-76396999cb36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:41 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:41Z|00982|binding|INFO|Releasing lport 00283bd6-1ec2-4a8e-b502-76396999cb36 from this chassis (sb_readonly=0)
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.980 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3082221-dfbe-4119-bc6f-940f05f1b99c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3082221-dfbe-4119-bc6f-940f05f1b99c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:49:41 np0005535469 nova_compute[254092]: 2025-11-25 16:49:41.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2d1575-a535-4ffa-9296-2ddb6d996317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.984 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e3082221-dfbe-4119-bc6f-940f05f1b99c
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e3082221-dfbe-4119-bc6f-940f05f1b99c.pid.haproxy
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e3082221-dfbe-4119-bc6f-940f05f1b99c
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:49:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.986 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'env', 'PROCESS_TAG=haproxy-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3082221-dfbe-4119-bc6f-940f05f1b99c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:49:42 np0005535469 upbeat_hypatia[355097]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:49:42 np0005535469 upbeat_hypatia[355097]: --> relative data size: 1.0
Nov 25 11:49:42 np0005535469 upbeat_hypatia[355097]: --> All data devices are unavailable
Nov 25 11:49:42 np0005535469 systemd[1]: libpod-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope: Deactivated successfully.
Nov 25 11:49:42 np0005535469 systemd[1]: libpod-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope: Consumed 1.060s CPU time.
Nov 25 11:49:42 np0005535469 podman[355081]: 2025-11-25 16:49:42.059705656 +0000 UTC m=+1.336021247 container died 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:49:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075-merged.mount: Deactivated successfully.
Nov 25 11:49:42 np0005535469 podman[355081]: 2025-11-25 16:49:42.124733283 +0000 UTC m=+1.401048874 container remove 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:49:42 np0005535469 systemd[1]: libpod-conmon-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope: Deactivated successfully.
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.341 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089382.341018, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Started (Lifecycle Event)#033[00m
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.359 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.368 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089382.3436253, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.368 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.386 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.390 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:42 np0005535469 podman[355354]: 2025-11-25 16:49:42.391603166 +0000 UTC m=+0.058416549 container create 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:49:42 np0005535469 systemd[1]: Started libpod-conmon-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2.scope.
Nov 25 11:49:42 np0005535469 podman[355354]: 2025-11-25 16:49:42.362934147 +0000 UTC m=+0.029747560 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:49:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb944fbef87fb94ba68f50ced07af8fad100a0cb07b8d51f22d49a8cd9a98b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:42 np0005535469 podman[355354]: 2025-11-25 16:49:42.482201718 +0000 UTC m=+0.149015171 container init 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:49:42 np0005535469 podman[355354]: 2025-11-25 16:49:42.492313353 +0000 UTC m=+0.159126766 container start 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:49:42 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : New worker (355420) forked
Nov 25 11:49:42 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : Loading success.
Nov 25 11:49:42 np0005535469 podman[355469]: 2025-11-25 16:49:42.812312339 +0000 UTC m=+0.042717332 container create 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:49:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:42Z|00983|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:49:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:42Z|00984|binding|INFO|Releasing lport 00283bd6-1ec2-4a8e-b502-76396999cb36 from this chassis (sb_readonly=0)
Nov 25 11:49:42 np0005535469 systemd[1]: Started libpod-conmon-080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc.scope.
Nov 25 11:49:42 np0005535469 podman[355469]: 2025-11-25 16:49:42.794061622 +0000 UTC m=+0.024466645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:49:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:42 np0005535469 podman[355469]: 2025-11-25 16:49:42.959085327 +0000 UTC m=+0.189490360 container init 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 11:49:42 np0005535469 nova_compute[254092]: 2025-11-25 16:49:42.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:42 np0005535469 podman[355469]: 2025-11-25 16:49:42.974588529 +0000 UTC m=+0.204993532 container start 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:49:42 np0005535469 podman[355469]: 2025-11-25 16:49:42.977654442 +0000 UTC m=+0.208059465 container attach 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:49:42 np0005535469 charming_kare[355485]: 167 167
Nov 25 11:49:42 np0005535469 systemd[1]: libpod-080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc.scope: Deactivated successfully.
Nov 25 11:49:42 np0005535469 podman[355469]: 2025-11-25 16:49:42.983872661 +0000 UTC m=+0.214277694 container died 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:49:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c8225f822df926bf9ccf7cb5ef2f5397d4ae2c2c19543fa44c592a739fb5bd16-merged.mount: Deactivated successfully.
Nov 25 11:49:43 np0005535469 podman[355469]: 2025-11-25 16:49:43.019819888 +0000 UTC m=+0.250224891 container remove 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:49:43 np0005535469 systemd[1]: libpod-conmon-080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc.scope: Deactivated successfully.
Nov 25 11:49:43 np0005535469 podman[355509]: 2025-11-25 16:49:43.208872455 +0000 UTC m=+0.046339430 container create 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:49:43 np0005535469 systemd[1]: Started libpod-conmon-2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3.scope.
Nov 25 11:49:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:43 np0005535469 podman[355509]: 2025-11-25 16:49:43.18809091 +0000 UTC m=+0.025557645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:49:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:43 np0005535469 podman[355509]: 2025-11-25 16:49:43.296870097 +0000 UTC m=+0.134336842 container init 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:49:43 np0005535469 podman[355509]: 2025-11-25 16:49:43.304524485 +0000 UTC m=+0.141991210 container start 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 11:49:43 np0005535469 podman[355509]: 2025-11-25 16:49:43.30729591 +0000 UTC m=+0.144762635 container attach 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.598 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089368.5971568, 2e848add-8417-4307-8b01-f0d1c1a76cea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.599 254096 INFO nova.compute.manager [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.627 254096 DEBUG nova.compute.manager [None req-55078cc3-f84b-4eb5-a926-7339d9dd4e9c - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.799 254096 DEBUG nova.network.neutron [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.811 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.812 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:49:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 292 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.814 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating image(s)#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.836 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.840 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.842 254096 DEBUG oslo_concurrency.lockutils [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.842 254096 DEBUG nova.network.neutron [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.873 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.898 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.902 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "ac65f45ff36d3fc6c00b94e1164f55245d2a4ddb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.903 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "ac65f45ff36d3fc6c00b94e1164f55245d2a4ddb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.949 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.950 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.950 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.950 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Processing event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.952 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] No waiting events found dispatching network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.952 254096 WARNING nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received unexpected event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.952 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.957 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089383.9569314, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.959 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.974 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.980 254096 INFO nova.virt.libvirt.driver [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance spawned successfully.#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.981 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:49:43 np0005535469 nova_compute[254092]: 2025-11-25 16:49:43.984 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.005 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.007 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.007 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.063 254096 INFO nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 10.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.063 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.113 254096 INFO nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 11.34 seconds to build instance.#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.125 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:44 np0005535469 eager_tu[355525]: {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:    "0": [
Nov 25 11:49:44 np0005535469 eager_tu[355525]:        {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "devices": [
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "/dev/loop3"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            ],
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_name": "ceph_lv0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_size": "21470642176",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "name": "ceph_lv0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "tags": {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cluster_name": "ceph",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.crush_device_class": "",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.encrypted": "0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osd_id": "0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.type": "block",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.vdo": "0"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            },
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "type": "block",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "vg_name": "ceph_vg0"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:        }
Nov 25 11:49:44 np0005535469 eager_tu[355525]:    ],
Nov 25 11:49:44 np0005535469 eager_tu[355525]:    "1": [
Nov 25 11:49:44 np0005535469 eager_tu[355525]:        {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "devices": [
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "/dev/loop4"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            ],
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_name": "ceph_lv1",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_size": "21470642176",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "name": "ceph_lv1",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "tags": {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cluster_name": "ceph",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.crush_device_class": "",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.encrypted": "0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osd_id": "1",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.type": "block",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.vdo": "0"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            },
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "type": "block",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "vg_name": "ceph_vg1"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:        }
Nov 25 11:49:44 np0005535469 eager_tu[355525]:    ],
Nov 25 11:49:44 np0005535469 eager_tu[355525]:    "2": [
Nov 25 11:49:44 np0005535469 eager_tu[355525]:        {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "devices": [
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "/dev/loop5"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            ],
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_name": "ceph_lv2",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_size": "21470642176",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "name": "ceph_lv2",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "tags": {
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.cluster_name": "ceph",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.crush_device_class": "",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.encrypted": "0",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osd_id": "2",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.type": "block",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:                "ceph.vdo": "0"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            },
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "type": "block",
Nov 25 11:49:44 np0005535469 eager_tu[355525]:            "vg_name": "ceph_vg2"
Nov 25 11:49:44 np0005535469 eager_tu[355525]:        }
Nov 25 11:49:44 np0005535469 eager_tu[355525]:    ]
Nov 25 11:49:44 np0005535469 eager_tu[355525]: }
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.151 254096 DEBUG nova.virt.libvirt.imagebackend [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 11:49:44 np0005535469 systemd[1]: libpod-2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3.scope: Deactivated successfully.
Nov 25 11:49:44 np0005535469 podman[355509]: 2025-11-25 16:49:44.155920262 +0000 UTC m=+0.993386987 container died 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:49:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e-merged.mount: Deactivated successfully.
Nov 25 11:49:44 np0005535469 podman[355509]: 2025-11-25 16:49:44.210731341 +0000 UTC m=+1.048198066 container remove 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:49:44 np0005535469 systemd[1]: libpod-conmon-2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3.scope: Deactivated successfully.
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.234 254096 DEBUG nova.virt.libvirt.imagebackend [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.235 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764@snap to None/73301044-3bad-4401-9e30-f009d417f662_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.377 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "ac65f45ff36d3fc6c00b94e1164f55245d2a4ddb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.528 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.581 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening vms/73301044-3bad-4401-9e30-f009d417f662_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.895 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Image rbd:vms/73301044-3bad-4401-9e30-f009d417f662_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.896 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.896 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Ensure instance console log exists: /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.896 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.897 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.897 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.899 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start _get_guest_xml network_info=[{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:49:19Z,direct_url=<?>,disk_format='raw',id=3855d8b5-0ce2-4690-ac71-e43d7c3e5764,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-932750089-shelved',owner='fbf763b31dad40d6b0d7285dc017dd89',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:49:44 np0005535469 podman[355898]: 2025-11-25 16:49:44.903362643 +0000 UTC m=+0.054935903 container create 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.903 254096 WARNING nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.914 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.915 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.919 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.920 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.920 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.920 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:49:19Z,direct_url=<?>,disk_format='raw',id=3855d8b5-0ce2-4690-ac71-e43d7c3e5764,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-932750089-shelved',owner='fbf763b31dad40d6b0d7285dc017dd89',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.935 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:44 np0005535469 systemd[1]: Started libpod-conmon-08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583.scope.
Nov 25 11:49:44 np0005535469 podman[355898]: 2025-11-25 16:49:44.877964923 +0000 UTC m=+0.029538203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:49:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.992 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089369.99103, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:44 np0005535469 nova_compute[254092]: 2025-11-25 16:49:44.994 254096 INFO nova.compute.manager [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:49:44 np0005535469 podman[355898]: 2025-11-25 16:49:44.996615428 +0000 UTC m=+0.148188708 container init 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:49:45 np0005535469 podman[355898]: 2025-11-25 16:49:45.005986822 +0000 UTC m=+0.157560082 container start 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:49:45 np0005535469 podman[355898]: 2025-11-25 16:49:45.010550077 +0000 UTC m=+0.162123347 container attach 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:49:45 np0005535469 jolly_knuth[355914]: 167 167
Nov 25 11:49:45 np0005535469 systemd[1]: libpod-08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583.scope: Deactivated successfully.
Nov 25 11:49:45 np0005535469 podman[355898]: 2025-11-25 16:49:45.015674545 +0000 UTC m=+0.167247805 container died 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 11:49:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a8b85034cc7da8c625c9755656ef539a478399e18f377315b9587f4509e0e4b-merged.mount: Deactivated successfully.
Nov 25 11:49:45 np0005535469 podman[355898]: 2025-11-25 16:49:45.058336935 +0000 UTC m=+0.209910195 container remove 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:49:45 np0005535469 systemd[1]: libpod-conmon-08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583.scope: Deactivated successfully.
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.082 254096 DEBUG nova.compute.manager [None req-095c2373-d5af-4036-82a8-c7930bec053c - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:45 np0005535469 podman[355958]: 2025-11-25 16:49:45.277447269 +0000 UTC m=+0.057117843 container create 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:49:45 np0005535469 systemd[1]: Started libpod-conmon-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope.
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:45 np0005535469 podman[355958]: 2025-11-25 16:49:45.254434214 +0000 UTC m=+0.034104818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:49:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:49:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:49:45 np0005535469 podman[355958]: 2025-11-25 16:49:45.378415863 +0000 UTC m=+0.158086467 container init 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:49:45 np0005535469 podman[355958]: 2025-11-25 16:49:45.389851774 +0000 UTC m=+0.169522348 container start 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:49:45 np0005535469 podman[355958]: 2025-11-25 16:49:45.394760147 +0000 UTC m=+0.174430751 container attach 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:49:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774064259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.457 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.478 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.482 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 351 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 133 op/s
Nov 25 11:49:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:49:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376739314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.910 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.913 254096 DEBUG nova.virt.libvirt.vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='3855d8b5-0ce2-4690-ac71-e43d7c3e5764',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:27.741538',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3855d8b5-0ce2-4690-ac71-e43d7c3e5764'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.913 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.914 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.915 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.927 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <uuid>73301044-3bad-4401-9e30-f009d417f662</uuid>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <name>instance-00000059</name>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerActionsTestOtherB-server-932750089</nova:name>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:49:44</nova:creationTime>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="3855d8b5-0ce2-4690-ac71-e43d7c3e5764"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <nova:port uuid="792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <entry name="serial">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <entry name="uuid">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk.config">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:c4:5c:49"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <target dev="tap792a5867-7e"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log" append="off"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:49:45 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:49:45 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:49:45 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:49:45 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.929 254096 DEBUG nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Preparing to wait for external event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.929 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.929 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.930 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.930 254096 DEBUG nova.virt.libvirt.vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='3855d8b5-0ce2-4690-ac71-e43d7c3e5764',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='v
irtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:27.741538',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3855d8b5-0ce2-4690-ac71-e43d7c3e5764'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.931 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.931 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.932 254096 DEBUG os_vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.933 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.933 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.937 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap792a5867-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.938 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap792a5867-7e, col_values=(('external_ids', {'iface-id': '792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:5c:49', 'vm-uuid': '73301044-3bad-4401-9e30-f009d417f662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:45 np0005535469 NetworkManager[48891]: <info>  [1764089385.9407] manager: (tap792a5867-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:45 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.950 254096 INFO os_vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:45.999 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.000 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.000 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:c4:5c:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.001 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Using config drive#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.023 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.047 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.098 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'keypairs' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]: {
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "osd_id": 1,
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "type": "bluestore"
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:    },
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "osd_id": 2,
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "type": "bluestore"
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:    },
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "osd_id": 0,
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:        "type": "bluestore"
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]:    }
Nov 25 11:49:46 np0005535469 goofy_driscoll[355973]: }
Nov 25 11:49:46 np0005535469 systemd[1]: libpod-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope: Deactivated successfully.
Nov 25 11:49:46 np0005535469 systemd[1]: libpod-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope: Consumed 1.007s CPU time.
Nov 25 11:49:46 np0005535469 podman[355958]: 2025-11-25 16:49:46.409782351 +0000 UTC m=+1.189452955 container died 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:49:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397-merged.mount: Deactivated successfully.
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.468 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating config drive at /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config#033[00m
Nov 25 11:49:46 np0005535469 podman[355958]: 2025-11-25 16:49:46.472511275 +0000 UTC m=+1.252181849 container remove 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:49:46 np0005535469 systemd[1]: libpod-conmon-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope: Deactivated successfully.
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.480 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxj3y20r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.516 254096 DEBUG nova.network.neutron [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated VIF entry in instance network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.517 254096 DEBUG nova.network.neutron [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:49:46 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 955e7524-0de7-4467-b8ef-ac5be120e29d does not exist
Nov 25 11:49:46 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 17b4b376-758c-4c31-8589-fda4a3e6a081 does not exist
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.543 254096 DEBUG oslo_concurrency.lockutils [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.623 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxj3y20r" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.656 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.660 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.799 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.800 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting local config drive /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config because it was imported into RBD.#033[00m
Nov 25 11:49:46 np0005535469 NetworkManager[48891]: <info>  [1764089386.8496] manager: (tap792a5867-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Nov 25 11:49:46 np0005535469 kernel: tap792a5867-7e: entered promiscuous mode
Nov 25 11:49:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:46Z|00985|binding|INFO|Claiming lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for this chassis.
Nov 25 11:49:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:46Z|00986|binding|INFO|792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3: Claiming fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.860 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.862 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis#033[00m
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.865 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:49:46 np0005535469 systemd-udevd[356181]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:49:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:46Z|00987|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 ovn-installed in OVS
Nov 25 11:49:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:46Z|00988|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 up in Southbound
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:46 np0005535469 nova_compute[254092]: 2025-11-25 16:49:46.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.887 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c631a6c3-a12f-459f-9318-e45e40de64ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:46 np0005535469 NetworkManager[48891]: <info>  [1764089386.8890] device (tap792a5867-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:49:46 np0005535469 NetworkManager[48891]: <info>  [1764089386.8948] device (tap792a5867-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:49:46 np0005535469 systemd-machined[216343]: New machine qemu-126-instance-00000059.
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.918 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[83d1fc75-b1a0-4ddf-990c-51196b42fc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:46 np0005535469 systemd[1]: Started Virtual Machine qemu-126-instance-00000059.
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.920 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6514bc2-b38b-4b37-b169-11325f0687fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.945 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c13d4fe4-959f-449d-accc-331e41303632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:49:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.963 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c87dde-c483-4bf9-b119-e8e4d55663f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356191, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a455f8fc-5793-4710-9476-5a5644a1f56c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356195, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356195, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.011 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.428 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089387.4271107, 73301044-3bad-4401-9e30-f009d417f662 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.429 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Started (Lifecycle Event)#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.456 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.462 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089387.427582, 73301044-3bad-4401-9e30-f009d417f662 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.481 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.486 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.510 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.551 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.552 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.553 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.553 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.554 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.555 254096 INFO nova.compute.manager [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Terminating instance#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.556 254096 DEBUG nova.compute.manager [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:49:47 np0005535469 kernel: tap9fefcfde-9e (unregistering): left promiscuous mode
Nov 25 11:49:47 np0005535469 NetworkManager[48891]: <info>  [1764089387.5967] device (tap9fefcfde-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:47Z|00989|binding|INFO|Releasing lport 9fefcfde-9e55-4ed2-8521-ee26704af28c from this chassis (sb_readonly=0)
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:47Z|00990|binding|INFO|Setting lport 9fefcfde-9e55-4ed2-8521-ee26704af28c down in Southbound
Nov 25 11:49:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:47Z|00991|binding|INFO|Removing iface tap9fefcfde-9e ovn-installed in OVS
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.612 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:07:67 10.100.0.7'], port_security=['fa:16:3e:ab:07:67 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f5465d3-64cd-46fb-af8f-3b29aef5123d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6ce5017d19f45bcb3b13bf55faa9493', 'neutron:revision_number': '4', 'neutron:security_group_ids': '25d23027-5b7a-4134-9db5-2818d483fb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65852f31-5e85-4656-9a53-3d977e20f573, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fefcfde-9e55-4ed2-8521-ee26704af28c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.613 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fefcfde-9e55-4ed2-8521-ee26704af28c in datapath e3082221-dfbe-4119-bc6f-940f05f1b99c unbound from our chassis#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.614 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3082221-dfbe-4119-bc6f-940f05f1b99c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c78639d6-970a-4360-bb8b-7155d06ddfce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.616 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c namespace which is not needed anymore#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.625 254096 DEBUG nova.compute.manager [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.625 254096 DEBUG oslo_concurrency.lockutils [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.626 254096 DEBUG oslo_concurrency.lockutils [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.626 254096 DEBUG oslo_concurrency.lockutils [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.626 254096 DEBUG nova.compute.manager [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Processing event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.627 254096 DEBUG nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089387.6315713, 73301044-3bad-4401-9e30-f009d417f662 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.632 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.633 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.636 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance spawned successfully.#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.647 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.651 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:49:47 np0005535469 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 25 11:49:47 np0005535469 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Consumed 4.307s CPU time.
Nov 25 11:49:47 np0005535469 systemd-machined[216343]: Machine qemu-125-instance-00000064 terminated.
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.673 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:49:47 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : haproxy version is 2.8.14-c23fe91
Nov 25 11:49:47 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : path to executable is /usr/sbin/haproxy
Nov 25 11:49:47 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [WARNING]  (355418) : Exiting Master process...
Nov 25 11:49:47 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [ALERT]    (355418) : Current worker (355420) exited with code 143 (Terminated)
Nov 25 11:49:47 np0005535469 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [WARNING]  (355418) : All workers exited. Exiting... (0)
Nov 25 11:49:47 np0005535469 systemd[1]: libpod-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2.scope: Deactivated successfully.
Nov 25 11:49:47 np0005535469 podman[356261]: 2025-11-25 16:49:47.769249434 +0000 UTC m=+0.051722716 container died 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.797 254096 INFO nova.virt.libvirt.driver [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance destroyed successfully.#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.799 254096 DEBUG nova.objects.instance [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lazy-loading 'resources' on Instance uuid 6f5465d3-64cd-46fb-af8f-3b29aef5123d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2-userdata-shm.mount: Deactivated successfully.
Nov 25 11:49:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-19bb944fbef87fb94ba68f50ced07af8fad100a0cb07b8d51f22d49a8cd9a98b-merged.mount: Deactivated successfully.
Nov 25 11:49:47 np0005535469 podman[356261]: 2025-11-25 16:49:47.808999535 +0000 UTC m=+0.091472807 container cleanup 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.814 254096 DEBUG nova.virt.libvirt.vif [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-2033799726',id=100,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6ce5017d19f45bcb3b13bf55faa9493',ramdisk_id='',reservation_id='r-ey2dhuzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vi
rtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:44Z,user_data=None,user_id='5aefdc701af340eba9e8201f5065511e',uuid=6f5465d3-64cd-46fb-af8f-3b29aef5123d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.816 254096 DEBUG nova.network.os_vif_util [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converting VIF {"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.818 254096 DEBUG nova.network.os_vif_util [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.818 254096 DEBUG os_vif [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.833 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fefcfde-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.840 254096 INFO os_vif [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e')#033[00m
Nov 25 11:49:47 np0005535469 systemd[1]: libpod-conmon-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2.scope: Deactivated successfully.
Nov 25 11:49:47 np0005535469 podman[356301]: 2025-11-25 16:49:47.906882515 +0000 UTC m=+0.053275289 container remove 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.913 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae89306a-dea0-48c5-b349-9ad9f25abcba]: (4, ('Tue Nov 25 04:49:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c (2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2)\n2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2\nTue Nov 25 04:49:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c (2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2)\n2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.915 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14c8e8bb-0471-4585-b96c-e3add2821fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3082221-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 kernel: tape3082221-d0: left promiscuous mode
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 nova_compute[254092]: 2025-11-25 16:49:47.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.943 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d765e31-e03a-463b-8597-611fe807247e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88523aa4-da2e-4053-aaf4-bf60c78623c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06709875-1403-49f0-a0f6-0bc75d816858]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.976 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[819854e3-2ae3-4a67-9d4b-76e8bf7b21a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583925, 'reachable_time': 26434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356334, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:47 np0005535469 systemd[1]: run-netns-ovnmeta\x2de3082221\x2ddfbe\x2d4119\x2dbc6f\x2d940f05f1b99c.mount: Deactivated successfully.
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.982 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:49:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.982 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[53561cb7-4df5-4e8a-9172-53c7ec9cb760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.218 254096 DEBUG nova.compute.manager [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-unplugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG oslo_concurrency.lockutils [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG oslo_concurrency.lockutils [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG oslo_concurrency.lockutils [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG nova.compute.manager [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] No waiting events found dispatching network-vif-unplugged-9fefcfde-9e55-4ed2-8521-ee26704af28c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG nova.compute.manager [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-unplugged-9fefcfde-9e55-4ed2-8521-ee26704af28c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.223 254096 INFO nova.virt.libvirt.driver [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deleting instance files /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d_del#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.224 254096 INFO nova.virt.libvirt.driver [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deletion of /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d_del complete#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.226 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089373.221139, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.226 254096 INFO nova.compute.manager [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.254 254096 DEBUG nova.compute.manager [None req-380c019b-0bc3-42c4-8e02-82b241fdb8bb - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.289 254096 INFO nova.compute.manager [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.290 254096 DEBUG oslo.service.loopingcall [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.293 254096 DEBUG nova.compute.manager [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.293 254096 DEBUG nova.network.neutron [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:49:48 np0005535469 nova_compute[254092]: 2025-11-25 16:49:48.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:48.977 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 25 11:49:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 25 11:49:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.618 254096 DEBUG nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:49:49 np0005535469 podman[356337]: 2025-11-25 16:49:49.663750548 +0000 UTC m=+0.064295708 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:49:49 np0005535469 podman[356336]: 2025-11-25 16:49:49.683166676 +0000 UTC m=+0.093202965 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:49:49 np0005535469 podman[356338]: 2025-11-25 16:49:49.697712601 +0000 UTC m=+0.102300182 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.739 254096 DEBUG nova.compute.manager [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.739 254096 DEBUG oslo_concurrency.lockutils [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 DEBUG oslo_concurrency.lockutils [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 DEBUG oslo_concurrency.lockutils [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 DEBUG nova.compute.manager [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 WARNING nova.compute.manager [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.763 254096 DEBUG nova.network.neutron [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.798 254096 INFO nova.compute.manager [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 1.50 seconds to deallocate network for instance.#033[00m
Nov 25 11:49:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 165 op/s
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.860 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.861 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:49 np0005535469 nova_compute[254092]: 2025-11-25 16:49:49.982 254096 DEBUG oslo_concurrency.processutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.372 254096 DEBUG nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.373 254096 DEBUG oslo_concurrency.lockutils [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.374 254096 DEBUG oslo_concurrency.lockutils [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.375 254096 DEBUG oslo_concurrency.lockutils [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.375 254096 DEBUG nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] No waiting events found dispatching network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.376 254096 WARNING nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received unexpected event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.376 254096 DEBUG nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-deleted-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041399877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.455 254096 DEBUG oslo_concurrency.processutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.462 254096 DEBUG nova.compute.provider_tree [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.483 254096 DEBUG nova.scheduler.client.report [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.514 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.549 254096 INFO nova.scheduler.client.report [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Deleted allocations for instance 6f5465d3-64cd-46fb-af8f-3b29aef5123d#033[00m
Nov 25 11:49:50 np0005535469 nova_compute[254092]: 2025-11-25 16:49:50.630 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001864924688252914 of space, bias 1.0, pg target 0.5594774064758742 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001770365305723774 of space, bias 1.0, pg target 0.5311095917171322 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:49:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 246 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 329 op/s
Nov 25 11:49:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 25 11:49:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 25 11:49:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 25 11:49:52 np0005535469 nova_compute[254092]: 2025-11-25 16:49:52.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:52 np0005535469 nova_compute[254092]: 2025-11-25 16:49:52.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:53 np0005535469 nova_compute[254092]: 2025-11-25 16:49:53.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 246 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 1.5 MiB/s wr, 304 op/s
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.618 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.620 254096 INFO nova.compute.manager [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Terminating instance#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.621 254096 DEBUG nova.compute.manager [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:49:54 np0005535469 kernel: tap591e580e-30 (unregistering): left promiscuous mode
Nov 25 11:49:54 np0005535469 NetworkManager[48891]: <info>  [1764089394.6721] device (tap591e580e-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:54Z|00992|binding|INFO|Releasing lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 from this chassis (sb_readonly=0)
Nov 25 11:49:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:54Z|00993|binding|INFO|Setting lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 down in Southbound
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:54Z|00994|binding|INFO|Removing iface tap591e580e-30 ovn-installed in OVS
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.693 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:85:c3 10.100.0.14'], port_security=['fa:16:3e:7d:85:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e0098976-026f-43d8-b686-b2658f9aded9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=591e580e-30bb-4c0d-b1fb-96d45eca5626) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 591e580e-30bb-4c0d-b1fb-96d45eca5626 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.695 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.715 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecc9c52-f9d0-49c5-ae2f-3cb9adc729e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.745 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[daf3174b-b7e4-4339-ba55-3b54e99a530a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:54 np0005535469 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.749 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf01dbae-8a24-468e-b45e-c803499c0635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:54 np0005535469 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 17.624s CPU time.
Nov 25 11:49:54 np0005535469 systemd-machined[216343]: Machine qemu-119-instance-00000061 terminated.
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.778 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5f95a154-3137-41fe-83f3-cc2a0a2cac98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.798 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef065ac-1a49-4b0f-bd40-2882f63708ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356433, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.816 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91a36401-97fd-41cb-8dd4-3fc89a661022]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356434, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356434, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.817 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.824 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.824 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.858 254096 INFO nova.virt.libvirt.driver [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance destroyed successfully.#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.859 254096 DEBUG nova.objects.instance [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid e0098976-026f-43d8-b686-b2658f9aded9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.874 254096 DEBUG nova.virt.libvirt.vif [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-738000202',display_name='tempest-ServerActionsTestOtherB-server-738000202',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-738000202',id=97,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-cz0mxxg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:40Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=e0098976-026f-43d8-b686-b2658f9aded9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.875 254096 DEBUG nova.network.os_vif_util [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.876 254096 DEBUG nova.network.os_vif_util [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.877 254096 DEBUG os_vif [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.879 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap591e580e-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:54 np0005535469 nova_compute[254092]: 2025-11-25 16:49:54.886 254096 INFO os_vif [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30')#033[00m
Nov 25 11:49:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:49:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543115138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:49:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:49:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543115138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.267 254096 INFO nova.virt.libvirt.driver [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deleting instance files /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9_del#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.268 254096 INFO nova.virt.libvirt.driver [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deletion of /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9_del complete#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.538 254096 INFO nova.compute.manager [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.539 254096 DEBUG oslo.service.loopingcall [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.540 254096 DEBUG nova.compute.manager [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.540 254096 DEBUG nova.network.neutron [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.644 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.645 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.671 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.758 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.759 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.775 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.775 254096 INFO nova.compute.claims [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:49:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 150 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 28 KiB/s wr, 269 op/s
Nov 25 11:49:55 np0005535469 nova_compute[254092]: 2025-11-25 16:49:55.964 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:49:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 25 11:49:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 25 11:49:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 25 11:49:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803006270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.395 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.401 254096 DEBUG nova.compute.provider_tree [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.416 254096 DEBUG nova.scheduler.client.report [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.432 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.432 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.471 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.472 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.490 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.505 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:49:56 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:56Z|00995|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.600 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.601 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.602 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating image(s)#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.638 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.669 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.699 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.704 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.770 254096 DEBUG nova.network.neutron [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.789 254096 INFO nova.compute.manager [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.795 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.799 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.799 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.800 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.834 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.840 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.904 254096 DEBUG nova.policy [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.911 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.911 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:56 np0005535469 nova_compute[254092]: 2025-11-25 16:49:56.914 254096 DEBUG nova.compute.manager [req-1720b1c4-a30e-4d9a-a4f8-56e6928a8c2f req-4b1a7d78-bdc3-4ad5-8ca3-9aec2c1d902c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-vif-deleted-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.020 254096 DEBUG oslo_concurrency.processutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.130 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.191 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.275 254096 DEBUG nova.objects.instance [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.293 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Ensure instance console log exists: /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:49:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139327757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.512 254096 DEBUG oslo_concurrency.processutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.520 254096 DEBUG nova.compute.provider_tree [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.533 254096 DEBUG nova.scheduler.client.report [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.555 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.594 254096 INFO nova.scheduler.client.report [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance e0098976-026f-43d8-b686-b2658f9aded9#033[00m
Nov 25 11:49:57 np0005535469 nova_compute[254092]: 2025-11-25 16:49:57.686 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 121 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 29 KiB/s wr, 277 op/s
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.677 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.678 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.678 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.678 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.679 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.679 254096 INFO nova.compute.manager [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Terminating instance#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.680 254096 DEBUG nova.compute.manager [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:49:58 np0005535469 kernel: tap792a5867-7e (unregistering): left promiscuous mode
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.723 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Successfully created port: 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:49:58 np0005535469 NetworkManager[48891]: <info>  [1764089398.7252] device (tap792a5867-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:49:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:58Z|00996|binding|INFO|Releasing lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 from this chassis (sb_readonly=0)
Nov 25 11:49:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:58Z|00997|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 down in Southbound
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.734 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 ovn_controller[153477]: 2025-11-25T16:49:58Z|00998|binding|INFO|Removing iface tap792a5867-7e ovn-installed in OVS
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.741 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:49:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.742 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis#033[00m
Nov 25 11:49:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.744 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b8c77e-8369-4eab-a81e-0825e5fa2919, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:49:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf9bdbc-6b0b-4e0b-bc5e-a9cec7112ab7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.746 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 namespace which is not needed anymore#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.758 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 25 11:49:58 np0005535469 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000059.scope: Consumed 11.929s CPU time.
Nov 25 11:49:58 np0005535469 systemd-machined[216343]: Machine qemu-126-instance-00000059 terminated.
Nov 25 11:49:58 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : haproxy version is 2.8.14-c23fe91
Nov 25 11:49:58 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : path to executable is /usr/sbin/haproxy
Nov 25 11:49:58 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [WARNING]  (345858) : Exiting Master process...
Nov 25 11:49:58 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [ALERT]    (345858) : Current worker (345862) exited with code 143 (Terminated)
Nov 25 11:49:58 np0005535469 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [WARNING]  (345858) : All workers exited. Exiting... (0)
Nov 25 11:49:58 np0005535469 systemd[1]: libpod-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a.scope: Deactivated successfully.
Nov 25 11:49:58 np0005535469 podman[356701]: 2025-11-25 16:49:58.880608657 +0000 UTC m=+0.043188805 container died 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.916 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.916 254096 DEBUG nova.objects.instance [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.933 254096 DEBUG nova.virt.libvirt.vif [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.933 254096 DEBUG nova.network.os_vif_util [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.934 254096 DEBUG nova.network.os_vif_util [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.934 254096 DEBUG os_vif [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.936 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap792a5867-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a-userdata-shm.mount: Deactivated successfully.
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-94e7daa1a78ba94c079dcbea794e084a9276b64e1bfa6e1a2b7fa4bec0c3a08d-merged.mount: Deactivated successfully.
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:49:58 np0005535469 nova_compute[254092]: 2025-11-25 16:49:58.945 254096 INFO os_vif [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')#033[00m
Nov 25 11:49:58 np0005535469 podman[356701]: 2025-11-25 16:49:58.951975276 +0000 UTC m=+0.114555414 container cleanup 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:49:58 np0005535469 systemd[1]: libpod-conmon-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a.scope: Deactivated successfully.
Nov 25 11:49:59 np0005535469 podman[356751]: 2025-11-25 16:49:59.016425688 +0000 UTC m=+0.042969249 container remove 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.022 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85068c00-16cb-410c-a68a-3cc3774c1bd5]: (4, ('Tue Nov 25 04:49:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 (85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a)\n85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a\nTue Nov 25 04:49:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 (85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a)\n85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eac1db0f-54ee-4065-b31a-835aa30f4dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.024 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:59 np0005535469 kernel: tap34b8c77e-80: left promiscuous mode
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.033 254096 DEBUG nova.compute.manager [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.033 254096 DEBUG oslo_concurrency.lockutils [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.033 254096 DEBUG oslo_concurrency.lockutils [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.034 254096 DEBUG oslo_concurrency.lockutils [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.034 254096 DEBUG nova.compute.manager [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.034 254096 DEBUG nova.compute.manager [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6166128-596a-4dfd-9b88-91c0629df42a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.061 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec47f9f-3f58-46cb-9b04-9c17ea3c8598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fb03e1-edfc-4f59-a2db-01939dc9e0c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.076 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85451b47-6371-415a-830f-37550e7eb757]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570025, 'reachable_time': 31964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356769, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.078 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:49:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.078 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[45b6122a-df29-4dad-9edc-c7e42bd758b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:49:59 np0005535469 systemd[1]: run-netns-ovnmeta\x2d34b8c77e\x2d8369\x2d4eab\x2da81e\x2d0825e5fa2919.mount: Deactivated successfully.
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.277 254096 INFO nova.virt.libvirt.driver [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting instance files /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.278 254096 INFO nova.virt.libvirt.driver [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deletion of /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del complete#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.335 254096 INFO nova.compute.manager [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.335 254096 DEBUG oslo.service.loopingcall [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.336 254096 DEBUG nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:49:59 np0005535469 nova_compute[254092]: 2025-11-25 16:49:59.336 254096 DEBUG nova.network.neutron [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:49:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 121 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 72 op/s
Nov 25 11:50:00 np0005535469 nova_compute[254092]: 2025-11-25 16:50:00.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.118 254096 DEBUG nova.compute.manager [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.119 254096 DEBUG oslo_concurrency.lockutils [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.119 254096 DEBUG oslo_concurrency.lockutils [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.119 254096 DEBUG oslo_concurrency.lockutils [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.120 254096 DEBUG nova.compute.manager [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.120 254096 WARNING nova.compute.manager [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:50:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 25 11:50:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 25 11:50:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.568 254096 DEBUG nova.network.neutron [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.597 254096 INFO nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 2.26 seconds to deallocate network for instance.#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.611 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Successfully updated port: 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.623 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.623 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.623 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.676 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.677 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.736 254096 DEBUG oslo_concurrency.processutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 2.7 MiB/s wr, 157 op/s
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.883 254096 DEBUG nova.compute.manager [req-353bbaea-bb40-49aa-8207-5d7db89643c4 req-64f59ce9-cbc3-4e75-9730-5668d190cd48 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-deleted-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:01 np0005535469 nova_compute[254092]: 2025-11-25 16:50:01.898 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:50:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737371638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.183 254096 DEBUG oslo_concurrency.processutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.189 254096 DEBUG nova.compute.provider_tree [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.203 254096 DEBUG nova.scheduler.client.report [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.245 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.291 254096 INFO nova.scheduler.client.report [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance 73301044-3bad-4401-9e30-f009d417f662#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.361 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.793 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089387.791985, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.794 254096 INFO nova.compute.manager [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:50:02 np0005535469 nova_compute[254092]: 2025-11-25 16:50:02.813 254096 DEBUG nova.compute.manager [None req-825b68e2-0e59-4ca3-ac76-64f5ab19d7b1 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.210 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.226 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.226 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance network_info: |[{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.229 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start _get_guest_xml network_info=[{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.233 254096 WARNING nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.243 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.244 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.247 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.248 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.248 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.249 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.250 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.250 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.251 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.251 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.251 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.252 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.252 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.253 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.253 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.253 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.259 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191750307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.722 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.743 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.748 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.998 254096 DEBUG nova.compute.manager [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.998 254096 DEBUG nova.compute.manager [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing instance network info cache due to event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.999 254096 DEBUG oslo_concurrency.lockutils [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.999 254096 DEBUG oslo_concurrency.lockutils [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:03 np0005535469 nova_compute[254092]: 2025-11-25 16:50:03.999 254096 DEBUG nova.network.neutron [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.084 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.084 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.098 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:50:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970103562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.199 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.201 254096 DEBUG nova.virt.libvirt.vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:56Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.201 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.202 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.203 254096 DEBUG nova.objects.instance [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.213 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.213 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.217 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <uuid>1ae1094f-81aa-490c-80ca-4eba95f46cac</uuid>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <name>instance-00000065</name>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-922142806</nova:name>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:50:03</nova:creationTime>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <nova:port uuid="1d6ef4a2-8289-4c88-b3f3-481435a4dab0">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <entry name="serial">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <entry name="uuid">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f8:6f:25"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <target dev="tap1d6ef4a2-82"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log" append="off"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:50:04 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:50:04 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:50:04 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:50:04 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.218 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Preparing to wait for external event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.218 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.218 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.219 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.219 254096 DEBUG nova.virt.libvirt.vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:56Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.220 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.220 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.221 254096 DEBUG os_vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.222 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.222 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.226 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d6ef4a2-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.227 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d6ef4a2-82, col_values=(('external_ids', {'iface-id': '1d6ef4a2-8289-4c88-b3f3-481435a4dab0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:6f:25', 'vm-uuid': '1ae1094f-81aa-490c-80ca-4eba95f46cac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:04 np0005535469 NetworkManager[48891]: <info>  [1764089404.2292] manager: (tap1d6ef4a2-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.232 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.232 254096 INFO nova.compute.claims [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.235 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.236 254096 INFO os_vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.293 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.294 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.294 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:f8:6f:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.295 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Using config drive#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.315 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.372 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17999023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.814 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.820 254096 DEBUG nova.compute.provider_tree [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.833 254096 DEBUG nova.scheduler.client.report [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.852 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.852 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.891 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.891 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.918 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:50:04 np0005535469 nova_compute[254092]: 2025-11-25 16:50:04.940 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.054 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.056 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.056 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating image(s)#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.075 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.096 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.116 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.118 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.192 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.193 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.193 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.194 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.213 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.217 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.263 254096 DEBUG nova.policy [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '734253d3f2e84904968d9db3044df1c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.464 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.529 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] resizing rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.564 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating config drive at /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.569 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8prsh3i4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.647 254096 DEBUG nova.objects.instance [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.667 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.668 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Ensure instance console log exists: /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.668 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.669 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.669 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.708 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8prsh3i4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.729 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.732 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.879 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.880 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting local config drive /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config because it was imported into RBD.#033[00m
Nov 25 11:50:05 np0005535469 kernel: tap1d6ef4a2-82: entered promiscuous mode
Nov 25 11:50:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:05Z|00999|binding|INFO|Claiming lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for this chassis.
Nov 25 11:50:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:05Z|01000|binding|INFO|1d6ef4a2-8289-4c88-b3f3-481435a4dab0: Claiming fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:05 np0005535469 NetworkManager[48891]: <info>  [1764089405.9253] manager: (tap1d6ef4a2-82): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 25 11:50:05 np0005535469 nova_compute[254092]: 2025-11-25 16:50:05.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:05 np0005535469 systemd-udevd[357114]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:50:05 np0005535469 NetworkManager[48891]: <info>  [1764089405.9639] device (tap1d6ef4a2-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:50:05 np0005535469 NetworkManager[48891]: <info>  [1764089405.9649] device (tap1d6ef4a2-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:50:05 np0005535469 systemd-machined[216343]: New machine qemu-127-instance-00000065.
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:06 np0005535469 systemd[1]: Started Virtual Machine qemu-127-instance-00000065.
Nov 25 11:50:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:06Z|01001|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 ovn-installed in OVS
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.008 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:06Z|01002|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 up in Southbound
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.066 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.067 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee bound to our chassis#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.068 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9840ff40-ec43-46f9-ab52-3d9495f203ee#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6c42e2-ad03-417d-92d0-2534f3639dd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.083 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9840ff40-e1 in ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.085 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9840ff40-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd48acf2-119c-4b30-9433-49e1cbfccf17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7db2a5e7-9359-466e-9189-1f38e2335c72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.096 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[089d0317-57c5-41df-91be-9b220759b0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.111 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef81785e-a5e2-44fd-bd88-db8c3764bdfb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.139 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2cdf75-56e0-482e-87b9-716fd7ab65e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 NetworkManager[48891]: <info>  [1764089406.1467] manager: (tap9840ff40-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.145 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fef9b6d9-6833-4a10-ac76-8ddc2d854fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.174 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a20f1d0b-9c9a-4907-9bf9-47b38bee5f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.177 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff4e8f1-953a-4686-bf16-fc4d6d897c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 NetworkManager[48891]: <info>  [1764089406.1993] device (tap9840ff40-e0): carrier: link connected
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.205 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e701594a-482f-42c4-9306-fa66c4d1d4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.221 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd9ff3e-6abf-477c-9394-8a633777500a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586379, 'reachable_time': 16720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357150, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.235 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07aa20f8-52d8-4297-ae5b-ccaeea4fea74]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:4dad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586379, 'tstamp': 586379}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357151, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.255 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[23751494-8258-4012-bf17-c3048e494e61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586379, 'reachable_time': 16720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357160, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.291 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a37f8486-f5b6-4c1d-8e30-b376ff2f709c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec1702e-da6c-4a1d-a27e-5ca927379b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.353 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.353 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.354 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9840ff40-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:06 np0005535469 NetworkManager[48891]: <info>  [1764089406.3562] manager: (tap9840ff40-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 25 11:50:06 np0005535469 kernel: tap9840ff40-e0: entered promiscuous mode
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.359 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9840ff40-e0, col_values=(('external_ids', {'iface-id': '217facd0-6092-44c8-9430-efb8d36c211a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:06 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:06Z|01003|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.376 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.377 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce535d1-becb-4809-95a7-a8d1a6738634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.377 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:50:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.378 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'env', 'PROCESS_TAG=haproxy-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9840ff40-ec43-46f9-ab52-3d9495f203ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.399 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089406.3993363, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.400 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Started (Lifecycle Event)#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.418 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.422 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089406.3995037, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.422 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.439 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.455 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.728 254096 DEBUG nova.compute.manager [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.728 254096 DEBUG oslo_concurrency.lockutils [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.729 254096 DEBUG oslo_concurrency.lockutils [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.729 254096 DEBUG oslo_concurrency.lockutils [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.729 254096 DEBUG nova.compute.manager [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Processing event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.730 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.733 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089406.7331688, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.733 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.735 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.738 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance spawned successfully.#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.738 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:50:06 np0005535469 podman[357226]: 2025-11-25 16:50:06.741495428 +0000 UTC m=+0.051343746 container create 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.751 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.754 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.764 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.764 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.765 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.765 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.765 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.766 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.774 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:50:06 np0005535469 systemd[1]: Started libpod-conmon-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14.scope.
Nov 25 11:50:06 np0005535469 podman[357226]: 2025-11-25 16:50:06.715246905 +0000 UTC m=+0.025095233 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:50:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.816 254096 INFO nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 10.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.817 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb50cdc7de6082c53fe78a6bea78f3cc609faf5a93f7a225931635c4a0532c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:06 np0005535469 podman[357226]: 2025-11-25 16:50:06.834855515 +0000 UTC m=+0.144703843 container init 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:50:06 np0005535469 podman[357226]: 2025-11-25 16:50:06.840679493 +0000 UTC m=+0.150527801 container start 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:50:06 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : New worker (357247) forked
Nov 25 11:50:06 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : Loading success.
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.899 254096 INFO nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 11.18 seconds to build instance.#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.907 254096 DEBUG nova.network.neutron [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updated VIF entry in instance network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.907 254096 DEBUG nova.network.neutron [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.917 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:06 np0005535469 nova_compute[254092]: 2025-11-25 16:50:06.919 254096 DEBUG oslo_concurrency.lockutils [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:07 np0005535469 nova_compute[254092]: 2025-11-25 16:50:07.011 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Successfully created port: d6e67173-6a72-4200-9963-90668ed663e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:50:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 103 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.849 254096 DEBUG nova.compute.manager [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.850 254096 DEBUG oslo_concurrency.lockutils [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.850 254096 DEBUG oslo_concurrency.lockutils [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.850 254096 DEBUG oslo_concurrency.lockutils [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.851 254096 DEBUG nova.compute.manager [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:08 np0005535469 nova_compute[254092]: 2025-11-25 16:50:08.851 254096 WARNING nova.compute.manager [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.120 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Successfully updated port: d6e67173-6a72-4200-9963-90668ed663e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.139 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.140 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.140 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.229 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 103 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.842 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.857 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089394.8559422, e0098976-026f-43d8-b686-b2658f9aded9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.857 254096 INFO nova.compute.manager [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:50:09 np0005535469 nova_compute[254092]: 2025-11-25 16:50:09.875 254096 DEBUG nova.compute.manager [None req-ee93c43a-f676-4987-a4ea-7ab2ef0eaaad - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:50:10 np0005535469 nova_compute[254092]: 2025-11-25 16:50:10.942 254096 DEBUG nova.compute.manager [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:10 np0005535469 nova_compute[254092]: 2025-11-25 16:50:10.943 254096 DEBUG nova.compute.manager [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:50:10 np0005535469 nova_compute[254092]: 2025-11-25 16:50:10.943 254096 DEBUG oslo_concurrency.lockutils [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.548 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.582 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.583 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance network_info: |[{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.583 254096 DEBUG oslo_concurrency.lockutils [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.584 254096 DEBUG nova.network.neutron [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.588 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start _get_guest_xml network_info=[{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.592 254096 WARNING nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.597 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.598 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.608 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.608 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.609 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.609 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.614 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:11 np0005535469 NetworkManager[48891]: <info>  [1764089411.7021] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:11 np0005535469 NetworkManager[48891]: <info>  [1764089411.7030] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:11Z|01004|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 11:50:11 np0005535469 nova_compute[254092]: 2025-11-25 16:50:11.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Nov 25 11:50:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102446208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.077 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.110 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.117 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075876687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.576 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.579 254096 DEBUG nova.virt.libvirt.vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:04Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.580 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.581 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.583 254096 DEBUG nova.objects.instance [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.596 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <uuid>8e8f0fb8-4b3c-40dd-9317-94bedc736376</uuid>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <name>instance-00000066</name>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1379098021</nova:name>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:50:11</nova:creationTime>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:user uuid="734253d3f2e84904968d9db3044df1c8">tempest-ServerRescueTestJSONUnderV235-1568478678-project-member</nova:user>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:project uuid="423cb78fb5f54c46b9867a6f07d0cf95">tempest-ServerRescueTestJSONUnderV235-1568478678</nova:project>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <nova:port uuid="d6e67173-6a72-4200-9963-90668ed663e4">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <entry name="serial">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <entry name="uuid">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fa:b3:74"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <target dev="tapd6e67173-6a"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/console.log" append="off"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:50:12 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:50:12 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:50:12 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:50:12 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.602 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Preparing to wait for external event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.602 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.602 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.603 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.603 254096 DEBUG nova.virt.libvirt.vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:04Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.604 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.605 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.605 254096 DEBUG os_vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.606 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.607 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.613 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6e67173-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.613 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6e67173-6a, col_values=(('external_ids', {'iface-id': 'd6e67173-6a72-4200-9963-90668ed663e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:b3:74', 'vm-uuid': '8e8f0fb8-4b3c-40dd-9317-94bedc736376'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:50:12 np0005535469 NetworkManager[48891]: <info>  [1764089412.6158] manager: (tapd6e67173-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.623 254096 INFO os_vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a')
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.665 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.666 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.666 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No VIF found with MAC fa:16:3e:fa:b3:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.667 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Using config drive
Nov 25 11:50:12 np0005535469 nova_compute[254092]: 2025-11-25 16:50:12.689 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.075 254096 DEBUG nova.compute.manager [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.075 254096 DEBUG nova.compute.manager [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing instance network info cache due to event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.076 254096 DEBUG oslo_concurrency.lockutils [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.076 254096 DEBUG oslo_concurrency.lockutils [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.076 254096 DEBUG nova.network.neutron [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.296 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating config drive at /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.300 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24vw2lix execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.436 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24vw2lix" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.464 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.468 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.574 254096 DEBUG nova.network.neutron [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.576 254096 DEBUG nova.network.neutron [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.595 254096 DEBUG oslo_concurrency.lockutils [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.627 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.628 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deleting local config drive /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config because it was imported into RBD.
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.629 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:50:13 np0005535469 kernel: tapd6e67173-6a: entered promiscuous mode
Nov 25 11:50:13 np0005535469 NetworkManager[48891]: <info>  [1764089413.6830] manager: (tapd6e67173-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 25 11:50:13 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:13Z|01005|binding|INFO|Claiming lport d6e67173-6a72-4200-9963-90668ed663e4 for this chassis.
Nov 25 11:50:13 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:13Z|01006|binding|INFO|d6e67173-6a72-4200-9963-90668ed663e4: Claiming fa:16:3e:fa:b3:74 10.100.0.13
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.692 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 bound to our chassis
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.695 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 11:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d848db-0bc6-4281-9d4b-ed8d211413aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:50:13 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:13Z|01007|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 ovn-installed in OVS
Nov 25 11:50:13 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:13Z|01008|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 up in Southbound
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:13 np0005535469 systemd-udevd[357391]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:50:13 np0005535469 systemd-machined[216343]: New machine qemu-128-instance-00000066.
Nov 25 11:50:13 np0005535469 NetworkManager[48891]: <info>  [1764089413.7455] device (tapd6e67173-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:50:13 np0005535469 NetworkManager[48891]: <info>  [1764089413.7467] device (tapd6e67173-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:50:13 np0005535469 systemd[1]: Started Virtual Machine qemu-128-instance-00000066.
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.915 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089398.9143293, 73301044-3bad-4401-9e30-f009d417f662 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.916 254096 INFO nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Stopped (Lifecycle Event)
Nov 25 11:50:13 np0005535469 nova_compute[254092]: 2025-11-25 16:50:13.934 254096 DEBUG nova.compute.manager [None req-af2c918e-6be4-4018-a420-efec636f530c - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.141 254096 DEBUG nova.compute.manager [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.141 254096 DEBUG oslo_concurrency.lockutils [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.142 254096 DEBUG oslo_concurrency.lockutils [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.142 254096 DEBUG oslo_concurrency.lockutils [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.142 254096 DEBUG nova.compute.manager [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Processing event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.312 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089414.3121905, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.313 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Started (Lifecycle Event)
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.315 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.318 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.325 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance spawned successfully.
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.326 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.330 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.334 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.354 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.355 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.356 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.357 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.358 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.358 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.366 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.367 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089414.3146741, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.367 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Paused (Lifecycle Event)
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.396 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.402 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089414.3175724, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.403 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Resumed (Lifecycle Event)
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.421 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.425 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.430 254096 INFO nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 9.38 seconds to spawn the instance on the hypervisor.
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.431 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.456 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.495 254096 INFO nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 10.31 seconds to build instance.
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.529 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.741 254096 DEBUG nova.network.neutron [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updated VIF entry in instance network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.741 254096 DEBUG nova.network.neutron [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:14 np0005535469 nova_compute[254092]: 2025-11-25 16:50:14.755 254096 DEBUG oslo_concurrency.lockutils [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:15 np0005535469 nova_compute[254092]: 2025-11-25 16:50:15.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 11:50:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.234 254096 DEBUG nova.compute.manager [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG oslo_concurrency.lockutils [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG oslo_concurrency.lockutils [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG oslo_concurrency.lockutils [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG nova.compute.manager [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.236 254096 WARNING nova.compute.manager [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.602 254096 INFO nova.compute.manager [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Rescuing#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.602 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.603 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:16 np0005535469 nova_compute[254092]: 2025-11-25 16:50:16.603 254096 DEBUG nova.network.neutron [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:50:17 np0005535469 nova_compute[254092]: 2025-11-25 16:50:17.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:17 np0005535469 nova_compute[254092]: 2025-11-25 16:50:17.705 254096 DEBUG nova.network.neutron [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:17 np0005535469 nova_compute[254092]: 2025-11-25 16:50:17.723 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 11:50:18 np0005535469 nova_compute[254092]: 2025-11-25 16:50:18.072 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:50:18 np0005535469 nova_compute[254092]: 2025-11-25 16:50:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:18 np0005535469 nova_compute[254092]: 2025-11-25 16:50:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:18 np0005535469 nova_compute[254092]: 2025-11-25 16:50:18.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:50:18 np0005535469 nova_compute[254092]: 2025-11-25 16:50:18.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:18 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:18Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:18 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:18Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:19 np0005535469 nova_compute[254092]: 2025-11-25 16:50:19.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:20 np0005535469 podman[357446]: 2025-11-25 16:50:20.64174961 +0000 UTC m=+0.056968260 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 11:50:20 np0005535469 podman[357445]: 2025-11-25 16:50:20.662006361 +0000 UTC m=+0.076698885 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:50:20 np0005535469 podman[357447]: 2025-11-25 16:50:20.681148611 +0000 UTC m=+0.094031627 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 11:50:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310197540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:20 np0005535469 nova_compute[254092]: 2025-11-25 16:50:20.990 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.071 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:50:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.241 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.243 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3460MB free_disk=59.94648361206055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.243 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.244 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.333 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1ae1094f-81aa-490c-80ca-4eba95f46cac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.334 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8e8f0fb8-4b3c-40dd-9317-94bedc736376 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.334 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.334 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.352 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.393 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.394 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.411 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.435 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 222 op/s
Nov 25 11:50:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215157792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:21 np0005535469 nova_compute[254092]: 2025-11-25 16:50:21.987 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:50:22 np0005535469 nova_compute[254092]: 2025-11-25 16:50:22.001 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:50:22 np0005535469 nova_compute[254092]: 2025-11-25 16:50:22.020 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:50:22 np0005535469 nova_compute[254092]: 2025-11-25 16:50:22.021 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:22 np0005535469 nova_compute[254092]: 2025-11-25 16:50:22.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:22 np0005535469 nova_compute[254092]: 2025-11-25 16:50:22.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:23 np0005535469 nova_compute[254092]: 2025-11-25 16:50:23.016 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:23 np0005535469 nova_compute[254092]: 2025-11-25 16:50:23.017 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:50:23 np0005535469 nova_compute[254092]: 2025-11-25 16:50:23.017 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:50:23 np0005535469 nova_compute[254092]: 2025-11-25 16:50:23.052 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:50:23 np0005535469 nova_compute[254092]: 2025-11-25 16:50:23.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 11:50:25 np0005535469 nova_compute[254092]: 2025-11-25 16:50:25.667 254096 INFO nova.compute.manager [None req-f8855c07-08ce-4e4f-98da-d3d5c7b6ff36 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Get console output#033[00m
Nov 25 11:50:25 np0005535469 nova_compute[254092]: 2025-11-25 16:50:25.674 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:50:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 11:50:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.738 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.739 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.747 254096 INFO nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Rebuilding instance#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.763 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.876 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.876 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.881 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:50:26 np0005535469 nova_compute[254092]: 2025-11-25 16:50:26.882 254096 INFO nova.compute.claims [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.015 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.363 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.382 254096 DEBUG nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.433 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_requests' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796676940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.444 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.454 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.460 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.463 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.467 254096 DEBUG nova.compute.provider_tree [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.479 254096 DEBUG nova.scheduler.client.report [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.484 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.488 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.501 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.502 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.550 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.550 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.579 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.595 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.696 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.698 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.698 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Creating image(s)#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.721 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.743 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.766 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.772 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.839 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.841 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.841 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.842 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.865 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.869 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:27 np0005535469 nova_compute[254092]: 2025-11-25 16:50:27.937 254096 DEBUG nova.policy [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91f64879f99b40f69cdf49bceea9af2b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5ef0cd3e375456d9e1b561f7929fc4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.114 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.199 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.275 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] resizing rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.570 254096 DEBUG nova.objects.instance [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lazy-loading 'migration_context' on Instance uuid 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.589 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Successfully created port: 0127bd66-2e67-465a-8205-164198287c55 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.592 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Ensure instance console log exists: /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:28 np0005535469 nova_compute[254092]: 2025-11-25 16:50:28.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.504 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Successfully updated port: 0127bd66-2e67-465a-8205-164198287c55 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.520 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.520 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquired lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.520 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.773 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:50:29 np0005535469 kernel: tap1d6ef4a2-82 (unregistering): left promiscuous mode
Nov 25 11:50:29 np0005535469 NetworkManager[48891]: <info>  [1764089429.8027] device (tap1d6ef4a2-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:29Z|01009|binding|INFO|Releasing lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 from this chassis (sb_readonly=0)
Nov 25 11:50:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:29Z|01010|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 down in Southbound
Nov 25 11:50:29 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:29Z|01011|binding|INFO|Removing iface tap1d6ef4a2-82 ovn-installed in OVS
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.819 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.821 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis#033[00m
Nov 25 11:50:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.822 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:50:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eab52cb8-0a0e-4176-b120-aebd470236e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.824 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace which is not needed anymore#033[00m
Nov 25 11:50:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 92 op/s
Nov 25 11:50:29 np0005535469 nova_compute[254092]: 2025-11-25 16:50:29.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:29 np0005535469 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 25 11:50:29 np0005535469 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Consumed 12.896s CPU time.
Nov 25 11:50:29 np0005535469 systemd-machined[216343]: Machine qemu-127-instance-00000065 terminated.
Nov 25 11:50:29 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : haproxy version is 2.8.14-c23fe91
Nov 25 11:50:29 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : path to executable is /usr/sbin/haproxy
Nov 25 11:50:29 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [WARNING]  (357245) : Exiting Master process...
Nov 25 11:50:29 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [ALERT]    (357245) : Current worker (357247) exited with code 143 (Terminated)
Nov 25 11:50:29 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [WARNING]  (357245) : All workers exited. Exiting... (0)
Nov 25 11:50:29 np0005535469 systemd[1]: libpod-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14.scope: Deactivated successfully.
Nov 25 11:50:29 np0005535469 podman[357759]: 2025-11-25 16:50:29.995592742 +0000 UTC m=+0.054329128 container died 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:50:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14-userdata-shm.mount: Deactivated successfully.
Nov 25 11:50:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cfb50cdc7de6082c53fe78a6bea78f3cc609faf5a93f7a225931635c4a0532c2-merged.mount: Deactivated successfully.
Nov 25 11:50:30 np0005535469 kernel: tap1d6ef4a2-82: entered promiscuous mode
Nov 25 11:50:30 np0005535469 kernel: tap1d6ef4a2-82 (unregistering): left promiscuous mode
Nov 25 11:50:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:30Z|01012|binding|INFO|Claiming lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for this chassis.
Nov 25 11:50:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:30Z|01013|binding|INFO|1d6ef4a2-8289-4c88-b3f3-481435a4dab0: Claiming fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 podman[357759]: 2025-11-25 16:50:30.051963324 +0000 UTC m=+0.110699710 container cleanup 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.058 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:30Z|01014|binding|INFO|Releasing lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 from this chassis (sb_readonly=0)
Nov 25 11:50:30 np0005535469 systemd[1]: libpod-conmon-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14.scope: Deactivated successfully.
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.078 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:30 np0005535469 podman[357796]: 2025-11-25 16:50:30.129424779 +0000 UTC m=+0.049105935 container remove 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0186c583-335f-44ca-b4c3-2b61e66c4487]: (4, ('Tue Nov 25 04:50:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14)\n4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14\nTue Nov 25 04:50:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14)\n4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d55e1b56-789c-4d56-826c-7ac0db176700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:30 np0005535469 kernel: tap9840ff40-e0: left promiscuous mode
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.161 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7238efd1-1ee8-4447-adc7-310da9094d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.175 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddabd908-e55e-43f8-a525-6414d4e20bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da1543ee-11e7-463f-9ee0-338f683afc82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.194 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04a62cb1-88af-4134-b6a5-bedb7a814804]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586372, 'reachable_time': 22213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357815, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.199 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.199 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fc51324b-2052-470e-aa78-fee352494bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 systemd[1]: run-netns-ovnmeta\x2d9840ff40\x2dec43\x2d46f9\x2dab52\x2d3d9495f203ee.mount: Deactivated successfully.
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.200 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.201 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.202 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51e6d0ad-4304-4fd5-8b28-ad8770ebe522]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.203 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.204 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.205 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80065cf7-6cca-41e6-b7d0-133822e43831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.228 254096 DEBUG nova.compute.manager [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-changed-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.229 254096 DEBUG nova.compute.manager [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Refreshing instance network info cache due to event network-changed-0127bd66-2e67-465a-8205-164198287c55. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.229 254096 DEBUG oslo_concurrency.lockutils [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:30 np0005535469 kernel: tapd6e67173-6a (unregistering): left promiscuous mode
Nov 25 11:50:30 np0005535469 NetworkManager[48891]: <info>  [1764089430.4503] device (tapd6e67173-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.463 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:30Z|01015|binding|INFO|Releasing lport d6e67173-6a72-4200-9963-90668ed663e4 from this chassis (sb_readonly=0)
Nov 25 11:50:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:30Z|01016|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 down in Southbound
Nov 25 11:50:30 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:30Z|01017|binding|INFO|Removing iface tapd6e67173-6a ovn-installed in OVS
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.474 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.476 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 unbound from our chassis#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.477 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:50:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.478 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92f5edd6-8f94-4629-9398-0ea4efea81f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.519 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 11:50:30 np0005535469 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 25 11:50:30 np0005535469 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Consumed 13.778s CPU time.
Nov 25 11:50:30 np0005535469 systemd-machined[216343]: Machine qemu-128-instance-00000066 terminated.
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.529 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance destroyed successfully.#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.536 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance destroyed successfully.#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.537 254096 DEBUG nova.virt.libvirt.vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:26Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.538 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.539 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.540 254096 DEBUG os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.545 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d6ef4a2-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.556 254096 INFO os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')#033[00m
Nov 25 11:50:30 np0005535469 NetworkManager[48891]: <info>  [1764089430.6851] manager: (tapd6e67173-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.905 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting instance files /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.908 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deletion of /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del complete#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.929 254096 DEBUG nova.compute.manager [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.929 254096 DEBUG oslo_concurrency.lockutils [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.930 254096 DEBUG oslo_concurrency.lockutils [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.930 254096 DEBUG oslo_concurrency.lockutils [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.930 254096 DEBUG nova.compute.manager [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.931 254096 WARNING nova.compute.manager [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.963 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updating instance_info_cache with network_info: [{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.990 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Releasing lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.991 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance network_info: |[{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.991 254096 DEBUG oslo_concurrency.lockutils [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.992 254096 DEBUG nova.network.neutron [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Refreshing network info cache for port 0127bd66-2e67-465a-8205-164198287c55 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:50:30 np0005535469 nova_compute[254092]: 2025-11-25 16:50:30.996 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start _get_guest_xml network_info=[{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.003 254096 WARNING nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.021 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.022 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.031 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.032 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.032 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.032 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.033 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.033 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.035 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.035 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.035 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.036 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.039 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.089 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.090 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating image(s)#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.113 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.136 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.161 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.167 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.205 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.213 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance destroyed successfully.#033[00m
Nov 25 11:50:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.216 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.236 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Attempting rescue#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.237 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.242 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.243 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating image(s)#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.263 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.268 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.269 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.271 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.290 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.294 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.355 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.385 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.390 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.484 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.487 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.490 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.490 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.516 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.519 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906346555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.553 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.554 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.599 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.603 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.685 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.791 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.792 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.819 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.820 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start _get_guest_xml network_info=[{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "vif_mac": "fa:16:3e:fa:b3:74"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.820 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'resources' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.829 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.830 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Ensure instance console log exists: /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.831 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.831 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.831 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.833 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start _get_guest_xml network_info=[{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:50:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 219 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.840 254096 WARNING nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.842 254096 WARNING nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.845 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.845 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.846 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.846 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.849 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.849 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.850 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.850 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.853 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.853 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.853 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.854 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.854 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.856 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.856 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.856 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.858 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.874 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:31 np0005535469 nova_compute[254092]: 2025-11-25 16:50:31.919 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559605262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.109 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.111 254096 DEBUG nova.virt.libvirt.vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1145656902',display_name='tempest-ServerAddressesTestJSON-server-1145656902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1145656902',id=103,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5ef0cd3e375456d9e1b561f7929fc4f',ramdisk_id='',reservation_id='r-s9s0yj68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1476412269',owner_user_name='tempest-ServerAddressesTestJSON-1476412269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:27Z,user_data=None,user_id='91f64879f99b40f69cdf49bceea9af2b',uuid=7e50c80e-03fd-47ec-854f-f4e5d45c1c82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.111 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converting VIF {"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.112 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.113 254096 DEBUG nova.objects.instance [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.137 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <uuid>7e50c80e-03fd-47ec-854f-f4e5d45c1c82</uuid>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <name>instance-00000067</name>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerAddressesTestJSON-server-1145656902</nova:name>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:50:31</nova:creationTime>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:user uuid="91f64879f99b40f69cdf49bceea9af2b">tempest-ServerAddressesTestJSON-1476412269-project-member</nova:user>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:project uuid="b5ef0cd3e375456d9e1b561f7929fc4f">tempest-ServerAddressesTestJSON-1476412269</nova:project>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:port uuid="0127bd66-2e67-465a-8205-164198287c55">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="serial">7e50c80e-03fd-47ec-854f-f4e5d45c1c82</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="uuid">7e50c80e-03fd-47ec-854f-f4e5d45c1c82</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:9a:60:a7"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <target dev="tap0127bd66-2e"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/console.log" append="off"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:50:32 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:50:32 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.138 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Preparing to wait for external event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.143 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.143 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.143 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.144 254096 DEBUG nova.virt.libvirt.vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1145656902',display_name='tempest-ServerAddressesTestJSON-server-1145656902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1145656902',id=103,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5ef0cd3e375456d9e1b561f7929fc4f',ramdisk_id='',reservation_id='r-s9s0yj68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1476412269',owner_user_name='tempest-ServerAddressesTestJSON-1476412269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:27Z,user_data=None,user_id='91f64879f99b40f69cdf49bceea9af2b',uuid=7e50c80e-03fd-47ec-854f-f4e5d45c1c82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.144 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converting VIF {"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.145 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.146 254096 DEBUG os_vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.147 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.147 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.151 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0127bd66-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.151 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0127bd66-2e, col_values=(('external_ids', {'iface-id': '0127bd66-2e67-465a-8205-164198287c55', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:60:a7', 'vm-uuid': '7e50c80e-03fd-47ec-854f-f4e5d45c1c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 NetworkManager[48891]: <info>  [1764089432.1543] manager: (tap0127bd66-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.161 254096 INFO os_vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e')#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.222 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.222 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.223 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] No VIF found with MAC fa:16:3e:9a:60:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.224 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Using config drive#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.245 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966878006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766728261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.370 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.371 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.371 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.372 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.372 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.372 254096 WARNING nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.374 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.374 254096 WARNING nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.375 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.394 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.398 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.433 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.435 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643128134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.840 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.843 254096 DEBUG nova.virt.libvirt.vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:31Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.843 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.844 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.847 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <uuid>1ae1094f-81aa-490c-80ca-4eba95f46cac</uuid>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <name>instance-00000065</name>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:50:32 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:50:32 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-922142806</nova:name>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:50:31</nova:creationTime>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <nova:port uuid="1d6ef4a2-8289-4c88-b3f3-481435a4dab0">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="serial">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="uuid">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f8:6f:25"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <target dev="tap1d6ef4a2-82"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log" append="off"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:50:32 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:50:32 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:50:32 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:50:32 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.860 254096 DEBUG nova.virt.libvirt.vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:31Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.860 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.861 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.861 254096 DEBUG os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.862 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.862 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.863 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d6ef4a2-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d6ef4a2-82, col_values=(('external_ids', {'iface-id': '1d6ef4a2-8289-4c88-b3f3-481435a4dab0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:6f:25', 'vm-uuid': '1ae1094f-81aa-490c-80ca-4eba95f46cac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 NetworkManager[48891]: <info>  [1764089432.8694] manager: (tap1d6ef4a2-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.871 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.881 254096 INFO os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')#033[00m
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050487547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.906 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.907 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.975 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.976 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.976 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:f8:6f:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:50:32 np0005535469 nova_compute[254092]: 2025-11-25 16:50:32.977 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Using config drive#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.001 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.025 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.051 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'keypairs' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.092 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Creating config drive at /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.097 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7ilik5h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.129 254096 DEBUG nova.compute.manager [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.129 254096 DEBUG oslo_concurrency.lockutils [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.130 254096 DEBUG oslo_concurrency.lockutils [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.130 254096 DEBUG oslo_concurrency.lockutils [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.130 254096 DEBUG nova.compute.manager [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.131 254096 WARNING nova.compute.manager [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.234 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7ilik5h" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.257 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.260 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:50:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531098191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.363 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.365 254096 DEBUG nova.virt.libvirt.vif [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:14Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "vif_mac": "fa:16:3e:fa:b3:74"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.366 254096 DEBUG nova.network.os_vif_util [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "vif_mac": "fa:16:3e:fa:b3:74"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.367 254096 DEBUG nova.network.os_vif_util [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.368 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.379 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <uuid>8e8f0fb8-4b3c-40dd-9317-94bedc736376</uuid>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <name>instance-00000066</name>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1379098021</nova:name>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:50:31</nova:creationTime>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:user uuid="734253d3f2e84904968d9db3044df1c8">tempest-ServerRescueTestJSONUnderV235-1568478678-project-member</nova:user>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:project uuid="423cb78fb5f54c46b9867a6f07d0cf95">tempest-ServerRescueTestJSONUnderV235-1568478678</nova:project>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <nova:port uuid="d6e67173-6a72-4200-9963-90668ed663e4">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <entry name="serial">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <entry name="uuid">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <target dev="vdb" bus="virtio"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fa:b3:74"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <target dev="tapd6e67173-6a"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/console.log" append="off"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:50:33 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:50:33 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:50:33 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:50:33 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.394 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance destroyed successfully.#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.414 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.414 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deleting local config drive /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config because it was imported into RBD.#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.458 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No VIF found with MAC fa:16:3e:fa:b3:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.459 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Using config drive#033[00m
Nov 25 11:50:33 np0005535469 NetworkManager[48891]: <info>  [1764089433.4796] manager: (tap0127bd66-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Nov 25 11:50:33 np0005535469 kernel: tap0127bd66-2e: entered promiscuous mode
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.487 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:33Z|01018|binding|INFO|Claiming lport 0127bd66-2e67-465a-8205-164198287c55 for this chassis.
Nov 25 11:50:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:33Z|01019|binding|INFO|0127bd66-2e67-465a-8205-164198287c55: Claiming fa:16:3e:9a:60:a7 10.100.0.13
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.499 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.501 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:60:a7 10.100.0.13'], port_security=['fa:16:3e:9a:60:a7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7e50c80e-03fd-47ec-854f-f4e5d45c1c82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5ef0cd3e375456d9e1b561f7929fc4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ebd3670-30be-4da5-91ef-ad015b6cc911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3a36a72-9f53-4efe-835b-b2c470e0b4b1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0127bd66-2e67-465a-8205-164198287c55) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.503 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0127bd66-2e67-465a-8205-164198287c55 in datapath 4ca03ef0-bdbb-4378-ace4-e94a9e273068 bound to our chassis#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.505 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca03ef0-bdbb-4378-ace4-e94a9e273068#033[00m
Nov 25 11:50:33 np0005535469 systemd-udevd[358421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.517 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.518 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f17fedd-ba1d-4f79-a699-07c916a4a97f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:33Z|01020|binding|INFO|Setting lport 0127bd66-2e67-465a-8205-164198287c55 ovn-installed in OVS
Nov 25 11:50:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:33Z|01021|binding|INFO|Setting lport 0127bd66-2e67-465a-8205-164198287c55 up in Southbound
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.519 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca03ef0-b1 in ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:50:33 np0005535469 systemd-machined[216343]: New machine qemu-129-instance-00000067.
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.521 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca03ef0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f00af28-fe89-4a70-a8b3-657e9a8056a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.522 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92d21f32-eab8-4ea9-94ee-a02f10026ea5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:33 np0005535469 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Nov 25 11:50:33 np0005535469 NetworkManager[48891]: <info>  [1764089433.5338] device (tap0127bd66-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:50:33 np0005535469 NetworkManager[48891]: <info>  [1764089433.5344] device (tap0127bd66-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.541 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bd720e7f-ab48-42b5-8b68-98ff0b50f776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.557 254096 DEBUG nova.network.neutron [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updated VIF entry in instance network info cache for port 0127bd66-2e67-465a-8205-164198287c55. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.558 254096 DEBUG nova.network.neutron [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updating instance_info_cache with network_info: [{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.561 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'keypairs' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.561 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d88cffb-183f-42e6-9b1e-b9ddd1c4a740]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.585 254096 DEBUG oslo_concurrency.lockutils [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.595 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f69083-0b80-410e-b2b3-f0c3d759eb5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[550b68c3-9da2-4ed2-bc5a-2e93fb1a5ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 NetworkManager[48891]: <info>  [1764089433.6032] manager: (tap4ca03ef0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/417)
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.637 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc9b403-cda7-4a8d-bd64-ae3664ea361e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.641 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[558b759a-ae50-48c9-80b6-b80bf9aa9ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.656 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating config drive at /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.663 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgswrpu96 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:33 np0005535469 NetworkManager[48891]: <info>  [1764089433.6715] device (tap4ca03ef0-b0): carrier: link connected
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.679 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6726719e-9137-497f-a704-1e64ee245fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f112e4e5-2a0e-45c4-8758-2ed84fe061d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca03ef0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:e8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589126, 'reachable_time': 15131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358461, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.725 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1551cff-93a8-434f-abaf-3f9c790905d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:e8ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589126, 'tstamp': 589126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358463, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e24932d0-0148-4b2f-b21b-4563259e513a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca03ef0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:e8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589126, 'reachable_time': 15131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358466, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.780 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[411d49a1-79d1-4f87-a25e-7c249e898006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.823 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgswrpu96" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 219 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 4.0 MiB/s wr, 109 op/s
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53504e03-8dbc-4ad8-844c-461bdf019e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.855 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca03ef0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.855 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.856 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca03ef0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:33 np0005535469 NetworkManager[48891]: <info>  [1764089433.8587] manager: (tap4ca03ef0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.863 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:33 np0005535469 kernel: tap4ca03ef0-b0: entered promiscuous mode
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.869 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca03ef0-b0, col_values=(('external_ids', {'iface-id': '2a505731-bbaa-45d2-aebd-3f227fca1251'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:33 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:33Z|01022|binding|INFO|Releasing lport 2a505731-bbaa-45d2-aebd-3f227fca1251 from this chassis (sb_readonly=0)
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.874 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.912 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca03ef0-bdbb-4378-ace4-e94a9e273068.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca03ef0-bdbb-4378-ace4-e94a9e273068.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b75a1980-c690-445b-ac1d-0799acc00c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.915 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-4ca03ef0-bdbb-4378-ace4-e94a9e273068
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/4ca03ef0-bdbb-4378-ace4-e94a9e273068.pid.haproxy
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 4ca03ef0-bdbb-4378-ace4-e94a9e273068
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:50:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.917 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'env', 'PROCESS_TAG=haproxy-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca03ef0-bdbb-4378-ace4-e94a9e273068.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:50:33 np0005535469 nova_compute[254092]: 2025-11-25 16:50:33.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.023 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.0229478, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.024 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Started (Lifecycle Event)#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.044 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.049 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.0271955, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.049 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.065 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.067 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.067 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting local config drive /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config because it was imported into RBD.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.071 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.089 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.1440] manager: (tap1d6ef4a2-82): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 25 11:50:34 np0005535469 systemd-udevd[358449]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:50:34 np0005535469 kernel: tap1d6ef4a2-82: entered promiscuous mode
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01023|binding|INFO|Claiming lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for this chassis.
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01024|binding|INFO|1d6ef4a2-8289-4c88-b3f3-481435a4dab0: Claiming fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.162 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.1676] device (tap1d6ef4a2-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.1690] device (tap1d6ef4a2-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01025|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 ovn-installed in OVS
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01026|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 up in Southbound
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 systemd-machined[216343]: New machine qemu-130-instance-00000065.
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.212 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating config drive at /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue#033[00m
Nov 25 11:50:34 np0005535469 systemd[1]: Started Virtual Machine qemu-130-instance-00000065.
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.217 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjp3658uz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:34 np0005535469 podman[358610]: 2025-11-25 16:50:34.378342564 +0000 UTC m=+0.050643147 container create 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.383 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjp3658uz" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:34 np0005535469 systemd[1]: Started libpod-conmon-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6.scope.
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.422 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.427 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:34 np0005535469 podman[358610]: 2025-11-25 16:50:34.352918853 +0000 UTC m=+0.025219456 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:50:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc33fcdae98ce1a9441422b4975d67e1938a46ed9b71c93f2ea2242b9f8bf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:34 np0005535469 podman[358610]: 2025-11-25 16:50:34.464875415 +0000 UTC m=+0.137176018 container init 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:50:34 np0005535469 podman[358610]: 2025-11-25 16:50:34.475768011 +0000 UTC m=+0.148068614 container start 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:50:34 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : New worker (358693) forked
Nov 25 11:50:34 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : Loading success.
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.557 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 1ae1094f-81aa-490c-80ca-4eba95f46cac due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.558 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.5569906, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.558 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.561 254096 DEBUG nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.561 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.566 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance spawned successfully.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.567 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.584 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.584 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.586 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9840ff40-ec43-46f9-ab52-3d9495f203ee#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.590 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.592 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.594 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.595 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.595 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.595 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.596 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.596 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7aed5562-b3ee-45d5-bd8c-ecd91f4f1f41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.597 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9840ff40-e1 in ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.601 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deleting local config drive /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue because it was imported into RBD.#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.602 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9840ff40-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd932ce5-a904-4935-955c-00ec28f26d43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d17d71d7-4eb6-4c38-bfea-6d41a74c260a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.617 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c01c15-1f64-4189-88c7-878b964680da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8136dd30-7f21-4ba8-8ba1-819660c73218]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.637 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.638 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.5606065, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.638 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Started (Lifecycle Event)#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.658 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.658 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Processing event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] No waiting events found dispatching network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 WARNING nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received unexpected event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 WARNING nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.663 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.668 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.668 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.676 254096 DEBUG nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.682 254096 INFO nova.virt.libvirt.driver [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance spawned successfully.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.682 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:50:34 np0005535469 kernel: tapd6e67173-6a: entered promiscuous mode
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.6849] manager: (tapd6e67173-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.685 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.6664329, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01027|binding|INFO|Claiming lport d6e67173-6a72-4200-9963-90668ed663e4 for this chassis.
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01028|binding|INFO|d6e67173-6a72-4200-9963-90668ed663e4: Claiming fa:16:3e:fa:b3:74 10.100.0.13
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.689 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a1006805-c235-41c3-82ef-b708771709d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.698 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '5', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.7020] manager: (tap9840ff40-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/421)
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b59b2d65-1172-4f08-935b-3b5ea9ff906f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.7107] device (tapd6e67173-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.7124] device (tapd6e67173-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.714 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01029|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 ovn-installed in OVS
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01030|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 up in Southbound
Nov 25 11:50:34 np0005535469 systemd-udevd[358454]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.720 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.721 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.721 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.722 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.722 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.723 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.730 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:34 np0005535469 systemd-machined[216343]: New machine qemu-131-instance-00000066.
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.753 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[968776fc-2c67-49d0-9f06-00c5da73e170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.756 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[611c0c40-f715-4c19-b0da-57929ae5dc66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 systemd[1]: Started Virtual Machine qemu-131-instance-00000066.
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.774 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.775 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.776 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.776 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.7863] device (tap9840ff40-e0): carrier: link connected
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.791 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a2425454-5ea4-4460-a8da-069b8524632c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.811 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55985556-6445-4b54-9b48-154144385409]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589237, 'reachable_time': 36550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358754, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.828 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[027a166a-c7cf-4f71-8850-52086ffe7ace]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:4dad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589237, 'tstamp': 589237}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358756, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.831 254096 INFO nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 7.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.831 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.846 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93f9adad-79ac-4b57-9bf5-c6024d0d4bf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589237, 'reachable_time': 36550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358757, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.886 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[270356fb-fde0-4a16-b33d-ee8c0c0dacca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.894 254096 INFO nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 8.05 seconds to build instance.#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.908 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.964 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[425b20a8-679a-4ec2-aecb-bab779e965f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.967 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9840ff40-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 kernel: tap9840ff40-e0: entered promiscuous mode
Nov 25 11:50:34 np0005535469 NetworkManager[48891]: <info>  [1764089434.9726] manager: (tap9840ff40-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9840ff40-e0, col_values=(('external_ids', {'iface-id': '217facd0-6092-44c8-9430-efb8d36c211a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:34Z|01031|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 11:50:34 np0005535469 nova_compute[254092]: 2025-11-25 16:50:34.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.979 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.984 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9edb053-b95e-4606-b2e9-c865171556a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.986 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:50:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.986 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'env', 'PROCESS_TAG=haproxy-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9840ff40-ec43-46f9-ab52-3d9495f203ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.214 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8e8f0fb8-4b3c-40dd-9317-94bedc736376 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.214 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089435.2140608, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.214 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.221 254096 DEBUG nova.compute.manager [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.242 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.245 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.274 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.275 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089435.2191, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Started (Lifecycle Event)#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.290 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.293 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.409 254096 DEBUG nova.compute.manager [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG oslo_concurrency.lockutils [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG oslo_concurrency.lockutils [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG oslo_concurrency.lockutils [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG nova.compute.manager [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:35 np0005535469 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 WARNING nova.compute.manager [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:50:35 np0005535469 podman[358849]: 2025-11-25 16:50:35.428986475 +0000 UTC m=+0.054855481 container create 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 11:50:35 np0005535469 systemd[1]: Started libpod-conmon-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e.scope.
Nov 25 11:50:35 np0005535469 podman[358849]: 2025-11-25 16:50:35.400937863 +0000 UTC m=+0.026806889 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:50:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad744a85431f20015a7676bfb11197180bce1ae809f0c271dcc2b50469a0e66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:35 np0005535469 podman[358849]: 2025-11-25 16:50:35.514041307 +0000 UTC m=+0.139910333 container init 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 11:50:35 np0005535469 podman[358849]: 2025-11-25 16:50:35.520733839 +0000 UTC m=+0.146602845 container start 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 11:50:35 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : New worker (358870) forked
Nov 25 11:50:35 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : Loading success.
Nov 25 11:50:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:35.626 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 unbound from our chassis#033[00m
Nov 25 11:50:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:35.627 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:50:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:35.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7118599f-2d6a-4c70-8f50-d3b634ef99c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 237 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 6.5 MiB/s wr, 168 op/s
Nov 25 11:50:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.152 254096 DEBUG nova.compute.manager [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.152 254096 DEBUG oslo_concurrency.lockutils [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.152 254096 DEBUG oslo_concurrency.lockutils [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.153 254096 DEBUG oslo_concurrency.lockutils [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.153 254096 DEBUG nova.compute.manager [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.153 254096 WARNING nova.compute.manager [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.528 254096 DEBUG nova.compute.manager [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.528 254096 DEBUG oslo_concurrency.lockutils [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 DEBUG oslo_concurrency.lockutils [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 DEBUG oslo_concurrency.lockutils [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 DEBUG nova.compute.manager [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 WARNING nova.compute.manager [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.664 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.665 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.665 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.666 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.666 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.667 254096 INFO nova.compute.manager [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Terminating instance#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.668 254096 DEBUG nova.compute.manager [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:50:37 np0005535469 kernel: tap0127bd66-2e (unregistering): left promiscuous mode
Nov 25 11:50:37 np0005535469 NetworkManager[48891]: <info>  [1764089437.7140] device (tap0127bd66-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:50:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:37Z|01032|binding|INFO|Releasing lport 0127bd66-2e67-465a-8205-164198287c55 from this chassis (sb_readonly=0)
Nov 25 11:50:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:37Z|01033|binding|INFO|Setting lport 0127bd66-2e67-465a-8205-164198287c55 down in Southbound
Nov 25 11:50:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:37Z|01034|binding|INFO|Removing iface tap0127bd66-2e ovn-installed in OVS
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.732 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:60:a7 10.100.0.13'], port_security=['fa:16:3e:9a:60:a7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7e50c80e-03fd-47ec-854f-f4e5d45c1c82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5ef0cd3e375456d9e1b561f7929fc4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ebd3670-30be-4da5-91ef-ad015b6cc911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3a36a72-9f53-4efe-835b-b2c470e0b4b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0127bd66-2e67-465a-8205-164198287c55) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.734 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0127bd66-2e67-465a-8205-164198287c55 in datapath 4ca03ef0-bdbb-4378-ace4-e94a9e273068 unbound from our chassis#033[00m
Nov 25 11:50:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.735 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca03ef0-bdbb-4378-ace4-e94a9e273068, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:50:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.736 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4ff995-4193-43dc-8c25-0e0c6eebd461]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.737 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 namespace which is not needed anymore#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 25 11:50:37 np0005535469 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 3.396s CPU time.
Nov 25 11:50:37 np0005535469 systemd-machined[216343]: Machine qemu-129-instance-00000067 terminated.
Nov 25 11:50:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 260 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.5 MiB/s wr, 239 op/s
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : haproxy version is 2.8.14-c23fe91
Nov 25 11:50:37 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : path to executable is /usr/sbin/haproxy
Nov 25 11:50:37 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [WARNING]  (358691) : Exiting Master process...
Nov 25 11:50:37 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [ALERT]    (358691) : Current worker (358693) exited with code 143 (Terminated)
Nov 25 11:50:37 np0005535469 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [WARNING]  (358691) : All workers exited. Exiting... (0)
Nov 25 11:50:37 np0005535469 systemd[1]: libpod-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6.scope: Deactivated successfully.
Nov 25 11:50:37 np0005535469 podman[358901]: 2025-11-25 16:50:37.886986232 +0000 UTC m=+0.051854240 container died 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.907 254096 INFO nova.virt.libvirt.driver [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance destroyed successfully.#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.908 254096 DEBUG nova.objects.instance [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lazy-loading 'resources' on Instance uuid 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.918 254096 DEBUG nova.virt.libvirt.vif [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:50:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1145656902',display_name='tempest-ServerAddressesTestJSON-server-1145656902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1145656902',id=103,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5ef0cd3e375456d9e1b561f7929fc4f',ramdisk_id='',reservation_id='r-s9s0yj68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1476412269',owner_user_name='tempest-ServerAddressesTestJSON-1476412269-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:50:34Z,user_data=None,user_id='91f64879f99b40f69cdf49bceea9af2b',uuid=7e50c80e-03fd-47ec-854f-f4e5d45c1c82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.918 254096 DEBUG nova.network.os_vif_util [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converting VIF {"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.919 254096 DEBUG nova.network.os_vif_util [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.920 254096 DEBUG os_vif [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.922 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0127bd66-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:37 np0005535469 nova_compute[254092]: 2025-11-25 16:50:37.926 254096 INFO os_vif [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e')#033[00m
Nov 25 11:50:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6-userdata-shm.mount: Deactivated successfully.
Nov 25 11:50:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e5cc33fcdae98ce1a9441422b4975d67e1938a46ed9b71c93f2ea2242b9f8bf5-merged.mount: Deactivated successfully.
Nov 25 11:50:37 np0005535469 podman[358901]: 2025-11-25 16:50:37.942232983 +0000 UTC m=+0.107100991 container cleanup 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 11:50:37 np0005535469 systemd[1]: libpod-conmon-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6.scope: Deactivated successfully.
Nov 25 11:50:38 np0005535469 podman[358956]: 2025-11-25 16:50:38.007108087 +0000 UTC m=+0.042866217 container remove 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[679d8805-b3e8-4f70-9d88-47d48c455f41]: (4, ('Tue Nov 25 04:50:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 (5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6)\n5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6\nTue Nov 25 04:50:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 (5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6)\n5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.015 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54f23304-81bc-4067-811c-ca1cb9e75ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca03ef0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:38 np0005535469 kernel: tap4ca03ef0-b0: left promiscuous mode
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.037 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c001e-8e18-4e6a-9bda-b6de3201c011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.051 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71b3adf1-1354-4e1c-a38f-2f6c85f950d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70f93300-81d2-4cfb-b7d1-ca913ff54dbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.073 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f745431-2ae6-4f09-8113-08073df1af82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589118, 'reachable_time': 24290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358974, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 systemd[1]: run-netns-ovnmeta\x2d4ca03ef0\x2dbdbb\x2d4378\x2dace4\x2de94a9e273068.mount: Deactivated successfully.
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.078 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:50:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.078 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[797fdee9-36c4-4345-a466-d51e072f0f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.273 254096 INFO nova.virt.libvirt.driver [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deleting instance files /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_del#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.273 254096 INFO nova.virt.libvirt.driver [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deletion of /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_del complete#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.333 254096 INFO nova.compute.manager [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.333 254096 DEBUG oslo.service.loopingcall [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.334 254096 DEBUG nova.compute.manager [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.334 254096 DEBUG nova.network.neutron [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.864 254096 DEBUG nova.network.neutron [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.888 254096 INFO nova.compute.manager [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 0.55 seconds to deallocate network for instance.#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.929 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:38 np0005535469 nova_compute[254092]: 2025-11-25 16:50:38.930 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.001 254096 DEBUG oslo_concurrency.processutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.268 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-unplugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] No waiting events found dispatching network-vif-unplugged-0127bd66-2e67-465a-8205-164198287c55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.270 254096 WARNING nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received unexpected event network-vif-unplugged-0127bd66-2e67-465a-8205-164198287c55 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.270 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.270 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] No waiting events found dispatching network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 WARNING nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received unexpected event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 for instance with vm_state deleted and task_state None.
Nov 25 11:50:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121778999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.444 254096 DEBUG oslo_concurrency.processutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.451 254096 DEBUG nova.compute.provider_tree [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.474 254096 DEBUG nova.scheduler.client.report [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.502 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.542 254096 INFO nova.scheduler.client.report [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Deleted allocations for instance 7e50c80e-03fd-47ec-854f-f4e5d45c1c82
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.608 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:50:39 np0005535469 nova_compute[254092]: 2025-11-25 16:50:39.699 254096 DEBUG nova.compute.manager [req-5f8e02dd-af29-4ae4-bcb7-24d0d6b25789 req-f755a63a-a787-47f7-a34c-ee8f38eef33f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-deleted-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 260 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.3 MiB/s wr, 236 op/s
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:50:40
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr']
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:50:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:40.725 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:50:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:40.726 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:50:40 np0005535469 nova_compute[254092]: 2025-11-25 16:50:40.727 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:41 np0005535469 nova_compute[254092]: 2025-11-25 16:50:41.373 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:41 np0005535469 nova_compute[254092]: 2025-11-25 16:50:41.374 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:50:41 np0005535469 nova_compute[254092]: 2025-11-25 16:50:41.374 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:50:41 np0005535469 nova_compute[254092]: 2025-11-25 16:50:41.374 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:50:41 np0005535469 nova_compute[254092]: 2025-11-25 16:50:41.375 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:50:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.3 MiB/s wr, 415 op/s
Nov 25 11:50:42 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:42Z|01035|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 11:50:42 np0005535469 nova_compute[254092]: 2025-11-25 16:50:42.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:42 np0005535469 nova_compute[254092]: 2025-11-25 16:50:42.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.031 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.032 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.044 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.045 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.045 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.046 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.046 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.046 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:50:43 np0005535469 nova_compute[254092]: 2025-11-25 16:50:43.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 309 op/s
Nov 25 11:50:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 309 op/s
Nov 25 11:50:45 np0005535469 nova_compute[254092]: 2025-11-25 16:50:45.873 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:50:45 np0005535469 nova_compute[254092]: 2025-11-25 16:50:45.874 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:50:45 np0005535469 nova_compute[254092]: 2025-11-25 16:50:45.888 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:50:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:46 np0005535469 nova_compute[254092]: 2025-11-25 16:50:46.502 254096 DEBUG nova.compute.manager [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:50:46 np0005535469 nova_compute[254092]: 2025-11-25 16:50:46.503 254096 DEBUG nova.compute.manager [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:50:46 np0005535469 nova_compute[254092]: 2025-11-25 16:50:46.503 254096 DEBUG oslo_concurrency.lockutils [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:50:46 np0005535469 nova_compute[254092]: 2025-11-25 16:50:46.504 254096 DEBUG oslo_concurrency.lockutils [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:50:46 np0005535469 nova_compute[254092]: 2025-11-25 16:50:46.504 254096 DEBUG nova.network.neutron [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:50:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 721711fc-174e-4807-937f-5c530133b518 does not exist
Nov 25 11:50:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 404e2cdc-65ee-4c36-a0f4-568cd9ad848d does not exist
Nov 25 11:50:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 815330fb-ca71-4cc0-a171-a5e7e865e457 does not exist
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:50:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:50:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:47.728 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:50:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:47Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:47 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:47Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 11:50:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1021 KiB/s wr, 251 op/s
Nov 25 11:50:47 np0005535469 nova_compute[254092]: 2025-11-25 16:50:47.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:48.01453647 +0000 UTC m=+0.038355414 container create 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:50:48 np0005535469 systemd[1]: Started libpod-conmon-8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1.scope.
Nov 25 11:50:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:50:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:50:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:50:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:48.087169784 +0000 UTC m=+0.110988758 container init 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:47.999847801 +0000 UTC m=+0.023666765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:48.096350763 +0000 UTC m=+0.120169707 container start 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:48.099502319 +0000 UTC m=+0.123321283 container attach 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:50:48 np0005535469 fervent_proskuriakova[359284]: 167 167
Nov 25 11:50:48 np0005535469 systemd[1]: libpod-8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1.scope: Deactivated successfully.
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:48.105044529 +0000 UTC m=+0.128863493 container died 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:50:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-09dbd5f94d0dc87e8310e86655ceae2c153a3fe0c9ced25ff6ff1b2b897eb1ad-merged.mount: Deactivated successfully.
Nov 25 11:50:48 np0005535469 podman[359267]: 2025-11-25 16:50:48.139352401 +0000 UTC m=+0.163171345 container remove 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:50:48 np0005535469 systemd[1]: libpod-conmon-8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1.scope: Deactivated successfully.
Nov 25 11:50:48 np0005535469 podman[359309]: 2025-11-25 16:50:48.296287886 +0000 UTC m=+0.036659437 container create fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 11:50:48 np0005535469 systemd[1]: Started libpod-conmon-fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a.scope.
Nov 25 11:50:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:48 np0005535469 podman[359309]: 2025-11-25 16:50:48.366379532 +0000 UTC m=+0.106751113 container init fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 11:50:48 np0005535469 podman[359309]: 2025-11-25 16:50:48.374962335 +0000 UTC m=+0.115333886 container start fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:50:48 np0005535469 podman[359309]: 2025-11-25 16:50:48.280717754 +0000 UTC m=+0.021089325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:50:48 np0005535469 podman[359309]: 2025-11-25 16:50:48.378077619 +0000 UTC m=+0.118449180 container attach fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:50:48 np0005535469 nova_compute[254092]: 2025-11-25 16:50:48.532 254096 DEBUG nova.network.neutron [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:50:48 np0005535469 nova_compute[254092]: 2025-11-25 16:50:48.533 254096 DEBUG nova.network.neutron [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:48 np0005535469 nova_compute[254092]: 2025-11-25 16:50:48.554 254096 DEBUG oslo_concurrency.lockutils [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:48 np0005535469 nova_compute[254092]: 2025-11-25 16:50:48.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:49 np0005535469 cool_mayer[359325]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:50:49 np0005535469 cool_mayer[359325]: --> relative data size: 1.0
Nov 25 11:50:49 np0005535469 cool_mayer[359325]: --> All data devices are unavailable
Nov 25 11:50:49 np0005535469 systemd[1]: libpod-fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a.scope: Deactivated successfully.
Nov 25 11:50:49 np0005535469 podman[359309]: 2025-11-25 16:50:49.383174382 +0000 UTC m=+1.123545953 container died fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:50:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896-merged.mount: Deactivated successfully.
Nov 25 11:50:49 np0005535469 podman[359309]: 2025-11-25 16:50:49.434508788 +0000 UTC m=+1.174880339 container remove fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:50:49 np0005535469 systemd[1]: libpod-conmon-fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a.scope: Deactivated successfully.
Nov 25 11:50:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.2 KiB/s wr, 179 op/s
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:50.005044522 +0000 UTC m=+0.043256206 container create 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:50:50 np0005535469 systemd[1]: Started libpod-conmon-5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b.scope.
Nov 25 11:50:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:49.981533343 +0000 UTC m=+0.019745047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:50.080611165 +0000 UTC m=+0.118822929 container init 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:50.087557565 +0000 UTC m=+0.125769249 container start 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:50.090805253 +0000 UTC m=+0.129017027 container attach 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 11:50:50 np0005535469 hungry_brown[359525]: 167 167
Nov 25 11:50:50 np0005535469 systemd[1]: libpod-5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b.scope: Deactivated successfully.
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:50.093785064 +0000 UTC m=+0.131996748 container died 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:50:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8e681784ef391bc1351498510fea6beec1fa39ec86e897cffcd84887bb034711-merged.mount: Deactivated successfully.
Nov 25 11:50:50 np0005535469 podman[359509]: 2025-11-25 16:50:50.133451191 +0000 UTC m=+0.171662875 container remove 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:50:50 np0005535469 systemd[1]: libpod-conmon-5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b.scope: Deactivated successfully.
Nov 25 11:50:50 np0005535469 podman[359549]: 2025-11-25 16:50:50.305725864 +0000 UTC m=+0.039135035 container create 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:50:50 np0005535469 systemd[1]: Started libpod-conmon-0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c.scope.
Nov 25 11:50:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:50 np0005535469 podman[359549]: 2025-11-25 16:50:50.36777519 +0000 UTC m=+0.101184381 container init 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:50:50 np0005535469 podman[359549]: 2025-11-25 16:50:50.383524718 +0000 UTC m=+0.116933899 container start 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:50:50 np0005535469 podman[359549]: 2025-11-25 16:50:50.290227022 +0000 UTC m=+0.023636213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:50:50 np0005535469 podman[359549]: 2025-11-25 16:50:50.387162867 +0000 UTC m=+0.120572038 container attach 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:50:51 np0005535469 nova_compute[254092]: 2025-11-25 16:50:51.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]: {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:    "0": [
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:        {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "devices": [
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "/dev/loop3"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            ],
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_name": "ceph_lv0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_size": "21470642176",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "name": "ceph_lv0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "tags": {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cluster_name": "ceph",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.crush_device_class": "",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.encrypted": "0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osd_id": "0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.type": "block",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.vdo": "0"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            },
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "type": "block",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "vg_name": "ceph_vg0"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:        }
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:    ],
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:    "1": [
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:        {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "devices": [
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "/dev/loop4"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            ],
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_name": "ceph_lv1",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_size": "21470642176",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "name": "ceph_lv1",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "tags": {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cluster_name": "ceph",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.crush_device_class": "",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.encrypted": "0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osd_id": "1",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.type": "block",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.vdo": "0"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            },
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "type": "block",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "vg_name": "ceph_vg1"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:        }
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:    ],
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:    "2": [
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:        {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "devices": [
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "/dev/loop5"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            ],
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_name": "ceph_lv2",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_size": "21470642176",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "name": "ceph_lv2",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "tags": {
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.cluster_name": "ceph",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.crush_device_class": "",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.encrypted": "0",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osd_id": "2",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.type": "block",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:                "ceph.vdo": "0"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            },
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "type": "block",
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:            "vg_name": "ceph_vg2"
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:        }
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]:    ]
Nov 25 11:50:51 np0005535469 amazing_mclean[359566]: }
Nov 25 11:50:51 np0005535469 systemd[1]: libpod-0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c.scope: Deactivated successfully.
Nov 25 11:50:51 np0005535469 podman[359549]: 2025-11-25 16:50:51.17001596 +0000 UTC m=+0.903425131 container died 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:50:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb-merged.mount: Deactivated successfully.
Nov 25 11:50:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:51 np0005535469 podman[359549]: 2025-11-25 16:50:51.228767467 +0000 UTC m=+0.962176638 container remove 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:50:51 np0005535469 systemd[1]: libpod-conmon-0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c.scope: Deactivated successfully.
Nov 25 11:50:51 np0005535469 podman[359578]: 2025-11-25 16:50:51.270736487 +0000 UTC m=+0.066358293 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 25 11:50:51 np0005535469 podman[359575]: 2025-11-25 16:50:51.275832236 +0000 UTC m=+0.073510418 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:50:51 np0005535469 podman[359580]: 2025-11-25 16:50:51.3057572 +0000 UTC m=+0.102190419 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014552733484730091 of space, bias 1.0, pg target 0.4365820045419027 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.784027617 +0000 UTC m=+0.035515177 container create 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:50:51 np0005535469 systemd[1]: Started libpod-conmon-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope.
Nov 25 11:50:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 246 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.1 MiB/s wr, 303 op/s
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.854720328 +0000 UTC m=+0.106207908 container init 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.861598374 +0000 UTC m=+0.113085934 container start 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.768491324 +0000 UTC m=+0.019978904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.864809251 +0000 UTC m=+0.116296831 container attach 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:50:51 np0005535469 compassionate_mclean[359808]: 167 167
Nov 25 11:50:51 np0005535469 systemd[1]: libpod-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope: Deactivated successfully.
Nov 25 11:50:51 np0005535469 conmon[359808]: conmon 75c434d134eda183cdfa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope/container/memory.events
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.867457683 +0000 UTC m=+0.118945233 container died 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:50:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f9f656753524d7bf23fb7bb5736800c824fe8613d6aa610aef4035c50da5cdbb-merged.mount: Deactivated successfully.
Nov 25 11:50:51 np0005535469 podman[359792]: 2025-11-25 16:50:51.898730973 +0000 UTC m=+0.150218533 container remove 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:50:51 np0005535469 systemd[1]: libpod-conmon-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope: Deactivated successfully.
Nov 25 11:50:52 np0005535469 podman[359833]: 2025-11-25 16:50:52.059755599 +0000 UTC m=+0.036837051 container create 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:50:52 np0005535469 systemd[1]: Started libpod-conmon-9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e.scope.
Nov 25 11:50:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:50:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:50:52 np0005535469 podman[359833]: 2025-11-25 16:50:52.134915272 +0000 UTC m=+0.111996744 container init 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:50:52 np0005535469 podman[359833]: 2025-11-25 16:50:52.044459483 +0000 UTC m=+0.021540935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:50:52 np0005535469 podman[359833]: 2025-11-25 16:50:52.144082801 +0000 UTC m=+0.121164253 container start 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 11:50:52 np0005535469 podman[359833]: 2025-11-25 16:50:52.147328249 +0000 UTC m=+0.124409731 container attach 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:50:52 np0005535469 nova_compute[254092]: 2025-11-25 16:50:52.903 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089437.9012377, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:50:52 np0005535469 nova_compute[254092]: 2025-11-25 16:50:52.905 254096 INFO nova.compute.manager [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:50:52 np0005535469 nova_compute[254092]: 2025-11-25 16:50:52.926 254096 DEBUG nova.compute.manager [None req-777fcc67-0921-4979-b407-d29dd0ddeeda - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:50:52 np0005535469 nova_compute[254092]: 2025-11-25 16:50:52.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]: {
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "osd_id": 1,
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "type": "bluestore"
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:    },
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "osd_id": 2,
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "type": "bluestore"
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:    },
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "osd_id": 0,
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:        "type": "bluestore"
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]:    }
Nov 25 11:50:53 np0005535469 vigilant_lalande[359850]: }
Nov 25 11:50:53 np0005535469 systemd[1]: libpod-9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e.scope: Deactivated successfully.
Nov 25 11:50:53 np0005535469 podman[359833]: 2025-11-25 16:50:53.124139594 +0000 UTC m=+1.101221046 container died 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 11:50:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8-merged.mount: Deactivated successfully.
Nov 25 11:50:53 np0005535469 podman[359833]: 2025-11-25 16:50:53.174223346 +0000 UTC m=+1.151304798 container remove 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:50:53 np0005535469 systemd[1]: libpod-conmon-9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e.scope: Deactivated successfully.
Nov 25 11:50:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:50:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:50:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:50:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:50:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 94ef5dfb-78a6-439f-a4ab-2d2a28fc0953 does not exist
Nov 25 11:50:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 62c071c4-feae-4b32-bc57-7722c6e5e6d5 does not exist
Nov 25 11:50:53 np0005535469 nova_compute[254092]: 2025-11-25 16:50:53.474 254096 DEBUG nova.compute.manager [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:53 np0005535469 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG nova.compute.manager [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:50:53 np0005535469 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG oslo_concurrency.lockutils [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:53 np0005535469 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG oslo_concurrency.lockutils [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:53 np0005535469 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG nova.network.neutron [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:50:53 np0005535469 nova_compute[254092]: 2025-11-25 16:50:53.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 246 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 936 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 11:50:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:50:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:50:54 np0005535469 nova_compute[254092]: 2025-11-25 16:50:54.774 254096 INFO nova.compute.manager [None req-cfc15dc1-eb5e-45e8-953e-11fc16cb86e3 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Get console output#033[00m
Nov 25 11:50:54 np0005535469 nova_compute[254092]: 2025-11-25 16:50:54.780 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:50:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:50:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634020956' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:50:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:50:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634020956' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.889 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.889 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.889 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.890 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.890 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.891 254096 INFO nova.compute.manager [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Terminating instance#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.892 254096 DEBUG nova.compute.manager [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:50:55 np0005535469 kernel: tap1d6ef4a2-82 (unregistering): left promiscuous mode
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.946 254096 DEBUG nova.network.neutron [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.947 254096 DEBUG nova.network.neutron [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:55 np0005535469 NetworkManager[48891]: <info>  [1764089455.9480] device (tap1d6ef4a2-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:55Z|01036|binding|INFO|Releasing lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 from this chassis (sb_readonly=0)
Nov 25 11:50:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:55Z|01037|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 down in Southbound
Nov 25 11:50:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:55Z|01038|binding|INFO|Removing iface tap1d6ef4a2-82 ovn-installed in OVS
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.966 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.967 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis#033[00m
Nov 25 11:50:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.968 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.968 254096 DEBUG oslo_concurrency.lockutils [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0010904a-76e9-4e6b-8de5-0695164be027]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.969 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace which is not needed anymore#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.986 254096 DEBUG nova.compute.manager [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.986 254096 DEBUG nova.compute.manager [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing instance network info cache due to event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.987 254096 DEBUG oslo_concurrency.lockutils [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.987 254096 DEBUG oslo_concurrency.lockutils [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.987 254096 DEBUG nova.network.neutron [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:50:55 np0005535469 nova_compute[254092]: 2025-11-25 16:50:55.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:56 np0005535469 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 25 11:50:56 np0005535469 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Consumed 13.280s CPU time.
Nov 25 11:50:56 np0005535469 systemd-machined[216343]: Machine qemu-130-instance-00000065 terminated.
Nov 25 11:50:56 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : haproxy version is 2.8.14-c23fe91
Nov 25 11:50:56 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : path to executable is /usr/sbin/haproxy
Nov 25 11:50:56 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [WARNING]  (358868) : Exiting Master process...
Nov 25 11:50:56 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [WARNING]  (358868) : Exiting Master process...
Nov 25 11:50:56 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [ALERT]    (358868) : Current worker (358870) exited with code 143 (Terminated)
Nov 25 11:50:56 np0005535469 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [WARNING]  (358868) : All workers exited. Exiting... (0)
Nov 25 11:50:56 np0005535469 systemd[1]: libpod-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e.scope: Deactivated successfully.
Nov 25 11:50:56 np0005535469 podman[359969]: 2025-11-25 16:50:56.100508957 +0000 UTC m=+0.042512326 container died 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:50:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e-userdata-shm.mount: Deactivated successfully.
Nov 25 11:50:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1ad744a85431f20015a7676bfb11197180bce1ae809f0c271dcc2b50469a0e66-merged.mount: Deactivated successfully.
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.131 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance destroyed successfully.#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.131 254096 DEBUG nova.objects.instance [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:56 np0005535469 podman[359969]: 2025-11-25 16:50:56.139549638 +0000 UTC m=+0.081552977 container cleanup 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:50:56 np0005535469 systemd[1]: libpod-conmon-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e.scope: Deactivated successfully.
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.152 254096 DEBUG nova.virt.libvirt.vif [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:50:34Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.153 254096 DEBUG nova.network.os_vif_util [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.154 254096 DEBUG nova.network.os_vif_util [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.154 254096 DEBUG os_vif [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.157 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d6ef4a2-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.161 254096 INFO os_vif [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')#033[00m
Nov 25 11:50:56 np0005535469 podman[360008]: 2025-11-25 16:50:56.196714312 +0000 UTC m=+0.037154541 container remove 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.203 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9ec949-78ac-4e85-9b7a-69dc1356fc0d]: (4, ('Tue Nov 25 04:50:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e)\n98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e\nTue Nov 25 04:50:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e)\n98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.205 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[89b9e397-39a1-479b-8e77-66d6002cc2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.206 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:56 np0005535469 kernel: tap9840ff40-e0: left promiscuous mode
Nov 25 11:50:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.227 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[611c6990-b575-4eee-b85a-ab51d9b12bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.250 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc97cff-720c-44bf-a637-b451dec0d5f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.252 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40069065-5614-4aa4-907f-d582c55106f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.273 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb365f9-e54b-4bbc-8b63-4cffe1e9d721]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589226, 'reachable_time': 27767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360042, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.275 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:50:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.275 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[150d4f9a-8d28-427d-a2b9-d5f3acf1c33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:56 np0005535469 systemd[1]: run-netns-ovnmeta\x2d9840ff40\x2dec43\x2d46f9\x2dab52\x2d3d9495f203ee.mount: Deactivated successfully.
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.538 254096 INFO nova.virt.libvirt.driver [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting instance files /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.539 254096 INFO nova.virt.libvirt.driver [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deletion of /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del complete#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.599 254096 INFO nova.compute.manager [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.599 254096 DEBUG oslo.service.loopingcall [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.600 254096 DEBUG nova.compute.manager [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:50:56 np0005535469 nova_compute[254092]: 2025-11-25 16:50:56.600 254096 DEBUG nova.network.neutron [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.583 254096 DEBUG nova.network.neutron [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updated VIF entry in instance network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.584 254096 DEBUG nova.network.neutron [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.605 254096 DEBUG oslo_concurrency.lockutils [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.640 254096 DEBUG nova.network.neutron [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.659 254096 INFO nova.compute.manager [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.704 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.704 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.742 254096 DEBUG nova.compute.manager [req-1796bac1-a391-4aa1-961f-1c355039a02d req-0c991cbc-ea7f-4ce1-8f18-42e8a26cc244 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-deleted-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:57 np0005535469 nova_compute[254092]: 2025-11-25 16:50:57.789 254096 DEBUG oslo_concurrency.processutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:50:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 11:50:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:50:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838783938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.256 254096 DEBUG oslo_concurrency.processutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.265 254096 DEBUG nova.compute.provider_tree [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.280 254096 DEBUG nova.scheduler.client.report [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.306 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.336 254096 INFO nova.scheduler.client.report [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 1ae1094f-81aa-490c-80ca-4eba95f46cac#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.415 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.512 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.512 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.513 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.513 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.513 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 WARNING nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.515 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.515 254096 WARNING nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.896 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.897 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.897 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.898 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.898 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.900 254096 INFO nova.compute.manager [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Terminating instance#033[00m
Nov 25 11:50:58 np0005535469 nova_compute[254092]: 2025-11-25 16:50:58.901 254096 DEBUG nova.compute.manager [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:50:58 np0005535469 kernel: tapd6e67173-6a (unregistering): left promiscuous mode
Nov 25 11:50:58 np0005535469 NetworkManager[48891]: <info>  [1764089458.9717] device (tapd6e67173-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:59Z|01039|binding|INFO|Releasing lport d6e67173-6a72-4200-9963-90668ed663e4 from this chassis (sb_readonly=0)
Nov 25 11:50:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:59Z|01040|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 down in Southbound
Nov 25 11:50:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:50:59Z|01041|binding|INFO|Removing iface tapd6e67173-6a ovn-installed in OVS
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.024 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '8', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:50:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.027 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 unbound from our chassis#033[00m
Nov 25 11:50:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.028 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 11:50:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.029 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54caa6dd-ae03-4e80-9942-df648d55d63e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:59 np0005535469 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 25 11:50:59 np0005535469 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Consumed 12.990s CPU time.
Nov 25 11:50:59 np0005535469 systemd-machined[216343]: Machine qemu-131-instance-00000066 terminated.
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.140 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance destroyed successfully.#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.141 254096 DEBUG nova.objects.instance [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'resources' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.155 254096 DEBUG nova.virt.libvirt.vif [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:50:35Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.156 254096 DEBUG nova.network.os_vif_util [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.157 254096 DEBUG nova.network.os_vif_util [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.157 254096 DEBUG os_vif [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.159 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6e67173-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.166 254096 INFO os_vif [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a')#033[00m
Nov 25 11:50:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 2.2 MiB/s wr, 125 op/s
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.864 254096 INFO nova.virt.libvirt.driver [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deleting instance files /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376_del#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.865 254096 INFO nova.virt.libvirt.driver [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deletion of /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376_del complete#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 INFO nova.compute.manager [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 DEBUG oslo.service.loopingcall [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 DEBUG nova.compute.manager [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:50:59 np0005535469 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 DEBUG nova.network.neutron [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:51:00 np0005535469 nova_compute[254092]: 2025-11-25 16:51:00.228 254096 DEBUG nova.compute.manager [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:00 np0005535469 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG oslo_concurrency.lockutils [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:00 np0005535469 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG oslo_concurrency.lockutils [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:00 np0005535469 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG oslo_concurrency.lockutils [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:00 np0005535469 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG nova.compute.manager [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:00 np0005535469 nova_compute[254092]: 2025-11-25 16:51:00.230 254096 DEBUG nova.compute.manager [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.281 254096 DEBUG nova.network.neutron [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.304 254096 INFO nova.compute.manager [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.366 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.367 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.426 254096 DEBUG oslo_concurrency.processutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.470 254096 DEBUG nova.compute.manager [req-01e9e385-d50c-4886-99fa-b455db9714df req-159b171a-c14d-4380-a4e3-315e5b25d418 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-deleted-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 41 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 991 KiB/s rd, 2.2 MiB/s wr, 206 op/s
Nov 25 11:51:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1199419544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.866 254096 DEBUG oslo_concurrency.processutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.871 254096 DEBUG nova.compute.provider_tree [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.891 254096 DEBUG nova.scheduler.client.report [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.911 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:01 np0005535469 nova_compute[254092]: 2025-11-25 16:51:01.961 254096 INFO nova.scheduler.client.report [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Deleted allocations for instance 8e8f0fb8-4b3c-40dd-9317-94bedc736376#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.037 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.379 254096 DEBUG nova.compute.manager [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.379 254096 DEBUG oslo_concurrency.lockutils [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.379 254096 DEBUG oslo_concurrency.lockutils [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.380 254096 DEBUG oslo_concurrency.lockutils [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.380 254096 DEBUG nova.compute.manager [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:02 np0005535469 nova_compute[254092]: 2025-11-25 16:51:02.380 254096 WARNING nova.compute.manager [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:51:03 np0005535469 nova_compute[254092]: 2025-11-25 16:51:03.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 41 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 25 KiB/s wr, 83 op/s
Nov 25 11:51:04 np0005535469 nova_compute[254092]: 2025-11-25 16:51:04.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 25 KiB/s wr, 83 op/s
Nov 25 11:51:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Nov 25 11:51:08 np0005535469 nova_compute[254092]: 2025-11-25 16:51:08.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:09 np0005535469 nova_compute[254092]: 2025-11-25 16:51:09.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 81 op/s
Nov 25 11:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:51:11 np0005535469 nova_compute[254092]: 2025-11-25 16:51:11.126 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089456.125063, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:11 np0005535469 nova_compute[254092]: 2025-11-25 16:51:11.127 254096 INFO nova.compute.manager [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:51:11 np0005535469 nova_compute[254092]: 2025-11-25 16:51:11.144 254096 DEBUG nova.compute.manager [None req-fd2afcbb-5fd2-4393-a3bf-3001c8b9254f - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 81 op/s
Nov 25 11:51:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:13.628 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:13.628 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:13 np0005535469 nova_compute[254092]: 2025-11-25 16:51:13.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:51:14 np0005535469 nova_compute[254092]: 2025-11-25 16:51:14.139 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089459.1373086, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:14 np0005535469 nova_compute[254092]: 2025-11-25 16:51:14.139 254096 INFO nova.compute.manager [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:51:14 np0005535469 nova_compute[254092]: 2025-11-25 16:51:14.155 254096 DEBUG nova.compute.manager [None req-3f205dbb-e7c0-46b4-9fff-5e27584ff0be - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:14 np0005535469 nova_compute[254092]: 2025-11-25 16:51:14.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:14 np0005535469 nova_compute[254092]: 2025-11-25 16:51:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:15 np0005535469 nova_compute[254092]: 2025-11-25 16:51:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:51:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:16 np0005535469 nova_compute[254092]: 2025-11-25 16:51:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:51:18 np0005535469 nova_compute[254092]: 2025-11-25 16:51:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:18 np0005535469 nova_compute[254092]: 2025-11-25 16:51:18.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:19 np0005535469 nova_compute[254092]: 2025-11-25 16:51:19.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:19 np0005535469 nova_compute[254092]: 2025-11-25 16:51:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:19 np0005535469 nova_compute[254092]: 2025-11-25 16:51:19.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:51:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:51:20 np0005535469 nova_compute[254092]: 2025-11-25 16:51:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.397 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.397 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.416 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.498 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.499 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.507 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.507 254096 INFO nova.compute.claims [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.620 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:21 np0005535469 podman[360128]: 2025-11-25 16:51:21.650630246 +0000 UTC m=+0.064186525 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 11:51:21 np0005535469 podman[360127]: 2025-11-25 16:51:21.711386747 +0000 UTC m=+0.125880172 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:51:21 np0005535469 podman[360129]: 2025-11-25 16:51:21.728562824 +0000 UTC m=+0.140020946 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.833 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.833 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.855 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:51:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:51:21 np0005535469 nova_compute[254092]: 2025-11-25 16:51:21.912 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314132498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.113 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.121 254096 DEBUG nova.compute.provider_tree [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.143 254096 DEBUG nova.scheduler.client.report [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.163 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.166 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.166 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.198 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.208 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.209 254096 INFO nova.compute.claims [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.269 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "26a03ea9-69ed-410a-b248-693f9abf1db2" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.270 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "26a03ea9-69ed-410a-b248-693f9abf1db2" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.280 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "26a03ea9-69ed-410a-b248-693f9abf1db2" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.280 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.329 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.329 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.346 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.361 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.372 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226227465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.596 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.599 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.601 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.601 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Creating image(s)#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.624 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.646 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.668 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.672 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.747 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.748 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.748 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.749 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.769 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.772 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2fa32ddb-072c-480c-9df3-a207412beb72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253117060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.805 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.811 254096 DEBUG nova.compute.provider_tree [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.828 254096 DEBUG nova.scheduler.client.report [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.850 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.851 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.857 254096 DEBUG nova.policy [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b117cf5d3a76422aacd4d3a62d7b2f0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'faf3ae8544684cac802ef962ea89ba52', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.913 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.914 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.935 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.951 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.986 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.988 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3845MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.988 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:22 np0005535469 nova_compute[254092]: 2025-11-25 16:51:22.988 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.022 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2fa32ddb-072c-480c-9df3-a207412beb72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.054 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.055 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.055 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Creating image(s)#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.075 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.095 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.114 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.119 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.170 254096 DEBUG nova.policy [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.218 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.220 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.220 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.221 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.243 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.247 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7368c721-3e2a-4635-b2d8-5703d20438d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.290 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2fa32ddb-072c-480c-9df3-a207412beb72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7368c721-3e2a-4635-b2d8-5703d20438d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.302 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] resizing rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.433 254096 DEBUG nova.objects.instance [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lazy-loading 'migration_context' on Instance uuid 2fa32ddb-072c-480c-9df3-a207412beb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.437 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.504 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.505 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Ensure instance console log exists: /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.505 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.506 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.506 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.523 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7368c721-3e2a-4635-b2d8-5703d20438d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.593 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.632 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Successfully created port: 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.695 254096 DEBUG nova.objects.instance [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.708 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.709 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Ensure instance console log exists: /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.710 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.710 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.710 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:51:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402197744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.903 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.910 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.927 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.951 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:51:23 np0005535469 nova_compute[254092]: 2025-11-25 16:51:23.952 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.468 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Successfully created port: 0cdb5ab1-8463-4494-a522-360862f2152e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.816 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Successfully updated port: 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.835 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.835 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquired lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.835 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.949 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.949 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.963 254096 DEBUG nova.compute.manager [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-changed-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.965 254096 DEBUG nova.compute.manager [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Refreshing instance network info cache due to event network-changed-774bb0a0-1853-424e-a6c2-1ee07d3bbf61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.965 254096 DEBUG oslo_concurrency.lockutils [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.970 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.970 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.970 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.994 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.994 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:51:24 np0005535469 nova_compute[254092]: 2025-11-25 16:51:24.995 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.025 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:51:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 107 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.8 MiB/s wr, 28 op/s
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.878 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updating instance_info_cache with network_info: [{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.893 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Releasing lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.894 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance network_info: |[{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.894 254096 DEBUG oslo_concurrency.lockutils [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.894 254096 DEBUG nova.network.neutron [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Refreshing network info cache for port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.898 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start _get_guest_xml network_info=[{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.902 254096 WARNING nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.910 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.910 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.914 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.915 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.915 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.916 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.916 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.916 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:51:25 np0005535469 nova_compute[254092]: 2025-11-25 16:51:25.922 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.013 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Successfully updated port: 0cdb5ab1-8463-4494-a522-360862f2152e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.032 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.032 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.032 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.225 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:51:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.280 254096 DEBUG nova.compute.manager [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.281 254096 DEBUG nova.compute.manager [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing instance network info cache due to event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.281 254096 DEBUG oslo_concurrency.lockutils [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:51:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986318622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.360 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.381 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.385 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:51:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313407795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.838 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.840 254096 DEBUG nova.virt.libvirt.vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-12066405',display_name='tempest-ServerGroupTestJSON-server-12066405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-12066405',id=104,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faf3ae8544684cac802ef962ea89ba52',ramdisk_id='',reservation_id='r-yx0g6j90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-671381879',owner_user_name='tempest-ServerGroupTestJSON-671381879-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='b117cf5d3a76422aacd4d3a62d7b2f0e',uuid=2fa32ddb-072c-480c-9df3-a207412beb72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.840 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converting VIF {"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.841 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.842 254096 DEBUG nova.objects.instance [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fa32ddb-072c-480c-9df3-a207412beb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.855 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <uuid>2fa32ddb-072c-480c-9df3-a207412beb72</uuid>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <name>instance-00000068</name>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerGroupTestJSON-server-12066405</nova:name>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:51:25</nova:creationTime>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:user uuid="b117cf5d3a76422aacd4d3a62d7b2f0e">tempest-ServerGroupTestJSON-671381879-project-member</nova:user>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:project uuid="faf3ae8544684cac802ef962ea89ba52">tempest-ServerGroupTestJSON-671381879</nova:project>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <nova:port uuid="774bb0a0-1853-424e-a6c2-1ee07d3bbf61">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <entry name="serial">2fa32ddb-072c-480c-9df3-a207412beb72</entry>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <entry name="uuid">2fa32ddb-072c-480c-9df3-a207412beb72</entry>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2fa32ddb-072c-480c-9df3-a207412beb72_disk">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2fa32ddb-072c-480c-9df3-a207412beb72_disk.config">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:8c:33:c8"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <target dev="tap774bb0a0-18"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/console.log" append="off"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:51:26 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:51:26 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:51:26 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:51:26 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Preparing to wait for external event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.857 254096 DEBUG nova.virt.libvirt.vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-12066405',display_name='tempest-ServerGroupTestJSON-server-12066405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-12066405',id=104,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faf3ae8544684cac802ef962ea89ba52',ramdisk_id='',reservation_id='r-yx0g6j90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-671381879',owner_user_name='tempest-ServerGroupTestJSON-671
381879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='b117cf5d3a76422aacd4d3a62d7b2f0e',uuid=2fa32ddb-072c-480c-9df3-a207412beb72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.857 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converting VIF {"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.858 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.858 254096 DEBUG os_vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.859 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.860 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.862 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap774bb0a0-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.862 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap774bb0a0-18, col_values=(('external_ids', {'iface-id': '774bb0a0-1853-424e-a6c2-1ee07d3bbf61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:33:c8', 'vm-uuid': '2fa32ddb-072c-480c-9df3-a207412beb72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:26 np0005535469 NetworkManager[48891]: <info>  [1764089486.8648] manager: (tap774bb0a0-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.869 254096 INFO os_vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18')#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.902 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.903 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.903 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] No VIF found with MAC fa:16:3e:8c:33:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.904 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Using config drive#033[00m
Nov 25 11:51:26 np0005535469 nova_compute[254092]: 2025-11-25 16:51:26.921 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.172 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Creating config drive at /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.177 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgkecmlu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.316 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgkecmlu" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.344 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.348 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.511 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.513 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deleting local config drive /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config because it was imported into RBD.#033[00m
Nov 25 11:51:28 np0005535469 kernel: tap774bb0a0-18: entered promiscuous mode
Nov 25 11:51:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:28Z|01042|binding|INFO|Claiming lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for this chassis.
Nov 25 11:51:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:28Z|01043|binding|INFO|774bb0a0-1853-424e-a6c2-1ee07d3bbf61: Claiming fa:16:3e:8c:33:c8 10.100.0.6
Nov 25 11:51:28 np0005535469 NetworkManager[48891]: <info>  [1764089488.5623] manager: (tap774bb0a0-18): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.576 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:33:c8 10.100.0.6'], port_security=['fa:16:3e:8c:33:c8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fa32ddb-072c-480c-9df3-a207412beb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faf3ae8544684cac802ef962ea89ba52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d73d11b-858b-4b1c-bcef-f58415708764', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2880425b-36a3-47bb-868a-17d2ff8251e1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=774bb0a0-1853-424e-a6c2-1ee07d3bbf61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.577 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 in datapath 8973f8f9-6cab-4292-a0e3-cbd494454b03 bound to our chassis#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.578 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8973f8f9-6cab-4292-a0e3-cbd494454b03#033[00m
Nov 25 11:51:28 np0005535469 systemd-udevd[360750]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[97b833f0-51c4-4446-bf3f-1004c3f14a09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.592 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8973f8f9-61 in ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.594 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8973f8f9-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f974fa5-182e-4c5d-93bd-85696364c86c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 systemd-machined[216343]: New machine qemu-132-instance-00000068.
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.595 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71ccc7b9-0a4f-4ba4-a38a-fce51c46f081]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 NetworkManager[48891]: <info>  [1764089488.6041] device (tap774bb0a0-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:51:28 np0005535469 NetworkManager[48891]: <info>  [1764089488.6050] device (tap774bb0a0-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.607 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[e31b6275-542f-4253-85f2-a3b2479bffa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 systemd[1]: Started Virtual Machine qemu-132-instance-00000068.
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[520a989d-db69-4920-b4c2-8131c133d2f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:28Z|01044|binding|INFO|Setting lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 ovn-installed in OVS
Nov 25 11:51:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:28Z|01045|binding|INFO|Setting lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 up in Southbound
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.661 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3d278178-2fc8-45c6-b5cf-57100268b820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 systemd-udevd[360753]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:51:28 np0005535469 NetworkManager[48891]: <info>  [1764089488.6672] manager: (tap8973f8f9-60): new Veth device (/org/freedesktop/NetworkManager/Devices/425)
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.666 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ad9791-7946-43b8-8d37-01986a2be7fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.696 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e64d10-1430-4f60-b0a2-1166c1f1ef2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.699 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2558f-f0f1-43ce-b4e6-1cb15d588659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 NetworkManager[48891]: <info>  [1764089488.7200] device (tap8973f8f9-60): carrier: link connected
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.724 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de4ba498-c44a-4de7-9403-abbda315537b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c38bd5c8-a2d5-4706-b4ab-057585ba8b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8973f8f9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:4c:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594631, 'reachable_time': 18626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360782, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.758 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[797d90b0-040d-45ca-b441-146e40b2a585]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:4c7d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594631, 'tstamp': 594631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360783, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[580a3660-ba22-40ef-ae1f-a3eed5e50ded]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8973f8f9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:4c:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594631, 'reachable_time': 18626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360784, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0feff510-6b57-43ae-8740-30e1aeef2ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eba7513f-dce6-4276-96a0-9a82a2449952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.876 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8973f8f9-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8973f8f9-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:28 np0005535469 NetworkManager[48891]: <info>  [1764089488.8798] manager: (tap8973f8f9-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 25 11:51:28 np0005535469 kernel: tap8973f8f9-60: entered promiscuous mode
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.882 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8973f8f9-60, col_values=(('external_ids', {'iface-id': 'd96728a6-6965-48bf-820a-d4fbc1efd7cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:28Z|01046|binding|INFO|Releasing lport d96728a6-6965-48bf-820a-d4fbc1efd7cb from this chassis (sb_readonly=0)
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.900 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8973f8f9-6cab-4292-a0e3-cbd494454b03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8973f8f9-6cab-4292-a0e3-cbd494454b03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bf9c8d-9926-40ed-964a-8947ddd89f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.901 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-8973f8f9-6cab-4292-a0e3-cbd494454b03
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/8973f8f9-6cab-4292-a0e3-cbd494454b03.pid.haproxy
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 8973f8f9-6cab-4292-a0e3-cbd494454b03
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:51:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.903 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'env', 'PROCESS_TAG=haproxy-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8973f8f9-6cab-4292-a0e3-cbd494454b03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.993 254096 DEBUG nova.compute.manager [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.993 254096 DEBUG oslo_concurrency.lockutils [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.994 254096 DEBUG oslo_concurrency.lockutils [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.994 254096 DEBUG oslo_concurrency.lockutils [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:28 np0005535469 nova_compute[254092]: 2025-11-25 16:51:28.994 254096 DEBUG nova.compute.manager [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Processing event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.048 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.064 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.065 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance network_info: |[{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.065 254096 DEBUG oslo_concurrency.lockutils [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.065 254096 DEBUG nova.network.neutron [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.068 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start _get_guest_xml network_info=[{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.073 254096 WARNING nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.083 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.084 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.093 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.094 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.094 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.094 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.095 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.095 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.096 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.096 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.097 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.097 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.102 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:29 np0005535469 podman[360827]: 2025-11-25 16:51:29.292594768 +0000 UTC m=+0.048888920 container create ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.300 254096 DEBUG nova.network.neutron [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updated VIF entry in instance network info cache for port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.301 254096 DEBUG nova.network.neutron [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updating instance_info_cache with network_info: [{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.314 254096 DEBUG oslo_concurrency.lockutils [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:51:29 np0005535469 systemd[1]: Started libpod-conmon-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b.scope.
Nov 25 11:51:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f60641c11c202cd98160d15300f1c97d553f0de6fcab634db88f9b72731431e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:29 np0005535469 podman[360827]: 2025-11-25 16:51:29.269051327 +0000 UTC m=+0.025345489 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:51:29 np0005535469 podman[360827]: 2025-11-25 16:51:29.372219391 +0000 UTC m=+0.128513553 container init ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:51:29 np0005535469 podman[360827]: 2025-11-25 16:51:29.377956017 +0000 UTC m=+0.134250179 container start ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 11:51:29 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : New worker (360893) forked
Nov 25 11:51:29 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : Loading success.
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.489 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089489.4886696, 2fa32ddb-072c-480c-9df3-a207412beb72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.489 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Started (Lifecycle Event)#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.491 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.498 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.501 254096 INFO nova.virt.libvirt.driver [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance spawned successfully.#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.502 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.522 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.527 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.537 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.537 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.538 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.538 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.539 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.539 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.545 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.546 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089489.4895108, 2fa32ddb-072c-480c-9df3-a207412beb72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.546 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.574 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.579 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089489.498728, 2fa32ddb-072c-480c-9df3-a207412beb72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.579 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:51:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:51:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221567832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.597 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.601 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.606 254096 INFO nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 7.01 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.607 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.616 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.617 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.637 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.641 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.683 254096 INFO nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 8.21 seconds to build instance.#033[00m
Nov 25 11:51:29 np0005535469 nova_compute[254092]: 2025-11-25 16:51:29.704 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 11:51:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:51:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528829560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.114 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.115 254096 DEBUG nova.virt.libvirt.vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.116 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.117 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.118 254096 DEBUG nova.objects.instance [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.130 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <uuid>7368c721-3e2a-4635-b2d8-5703d20438d3</uuid>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <name>instance-00000069</name>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1076158717</nova:name>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:51:29</nova:creationTime>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <nova:port uuid="0cdb5ab1-8463-4494-a522-360862f2152e">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <entry name="serial">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <entry name="uuid">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ba:99:c7"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <target dev="tap0cdb5ab1-84"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/console.log" append="off"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:51:30 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:51:30 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:51:30 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:51:30 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.135 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Preparing to wait for external event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.136 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.136 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.136 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.137 254096 DEBUG nova.virt.libvirt.vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.138 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.139 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.139 254096 DEBUG os_vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.141 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cdb5ab1-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0cdb5ab1-84, col_values=(('external_ids', {'iface-id': '0cdb5ab1-8463-4494-a522-360862f2152e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:99:c7', 'vm-uuid': '7368c721-3e2a-4635-b2d8-5703d20438d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:30 np0005535469 NetworkManager[48891]: <info>  [1764089490.1479] manager: (tap0cdb5ab1-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.156 254096 INFO os_vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.200 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.201 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.201 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:ba:99:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.202 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Using config drive
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.224 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.685 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Creating config drive at /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.690 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4raipwxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.831 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4raipwxe" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.858 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:51:30 np0005535469 nova_compute[254092]: 2025-11-25 16:51:30.862 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.012 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.014 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deleting local config drive /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config because it was imported into RBD.
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.050 254096 DEBUG nova.network.neutron [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updated VIF entry in instance network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.051 254096 DEBUG nova.network.neutron [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.063 254096 DEBUG oslo_concurrency.lockutils [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.0668] manager: (tap0cdb5ab1-84): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Nov 25 11:51:31 np0005535469 kernel: tap0cdb5ab1-84: entered promiscuous mode
Nov 25 11:51:31 np0005535469 systemd-udevd[360779]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01047|binding|INFO|Claiming lport 0cdb5ab1-8463-4494-a522-360862f2152e for this chassis.
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01048|binding|INFO|0cdb5ab1-8463-4494-a522-360862f2152e: Claiming fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.081 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.0918] device (tap0cdb5ab1-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.0928] device (tap0cdb5ab1-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.096 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.098 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a bound to our chassis
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 11:51:31 np0005535469 systemd-machined[216343]: New machine qemu-133-instance-00000069.
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.113 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cec25b83-3dd1-44b7-845b-78aa422ec6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.114 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b70d379-81 in ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.116 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b70d379-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3540943-64e0-4f61-980a-009fbd8ffaf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.117 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dc3001-9d90-43b6-aa54-fbc6ff2af064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.129 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[84b5fb9f-a7ae-47f1-b7fd-688aec412474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 systemd[1]: Started Virtual Machine qemu-133-instance-00000069.
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.156 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf83075-18c4-4d35-b250-aed32dc0f69a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01049|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e ovn-installed in OVS
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01050|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e up in Southbound
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.187 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a42014d-1708-41c2-ad04-cdb76b479887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.195 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5492ec7-57a8-4e60-9c01-88fc50936f5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.1978] manager: (tap1b70d379-80): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.203 254096 DEBUG nova.compute.manager [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG oslo_concurrency.lockutils [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG oslo_concurrency.lockutils [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG oslo_concurrency.lockutils [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG nova.compute.manager [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] No waiting events found dispatching network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.205 254096 WARNING nova.compute.manager [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received unexpected event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:51:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.231 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cee49d73-21e5-4a09-b4af-17174dd666a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.233 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bd645849-0b27-4c37-9569-0752a162ff34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.2607] device (tap1b70d379-80): carrier: link connected
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.266 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e41d86c5-3f3b-4553-9ada-37da916a268c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e99cdc06-d932-4b14-92bc-74be3d3ea128]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594885, 'reachable_time': 21348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361040, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81970e31-5b30-4049-aded-45993af89708]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:396a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594885, 'tstamp': 594885}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361041, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73b5b64e-575f-48c8-b1b4-a13dcb689ead]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594885, 'reachable_time': 21348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361042, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.345 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a941f801-3b1d-4eed-9aab-d419ebd296cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.403 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f086e-5690-4521-805c-ffb7ed967b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.405 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.405 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.405 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b70d379-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:31 np0005535469 kernel: tap1b70d379-80: entered promiscuous mode
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.4077] manager: (tap1b70d379-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.409 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b70d379-80, col_values=(('external_ids', {'iface-id': '43f83cca-eded-4f81-a561-02d17bd21a2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01051|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.425 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.424 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.425 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[44a20f46-1d65-43c3-a701-419c9efe9583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.426 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.426 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'env', 'PROCESS_TAG=haproxy-1b70d379-8b3d-4361-b11d-cafbb578194a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b70d379-8b3d-4361-b11d-cafbb578194a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:51:31 np0005535469 podman[361079]: 2025-11-25 16:51:31.764327517 +0000 UTC m=+0.048961121 container create a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 11:51:31 np0005535469 systemd[1]: Started libpod-conmon-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e.scope.
Nov 25 11:51:31 np0005535469 podman[361079]: 2025-11-25 16:51:31.738873625 +0000 UTC m=+0.023507259 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:51:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 25 11:51:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8c25a22fe3e9484bc39161e604e238df356a2c17ccd6f67b57960b5feb4d5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.874 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089491.8740003, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Started (Lifecycle Event)#033[00m
Nov 25 11:51:31 np0005535469 podman[361079]: 2025-11-25 16:51:31.877958435 +0000 UTC m=+0.162592059 container init a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:51:31 np0005535469 podman[361079]: 2025-11-25 16:51:31.884264497 +0000 UTC m=+0.168898101 container start a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:51:31 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : New worker (361134) forked
Nov 25 11:51:31 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : Loading success.
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.905 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.907 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.907 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.908 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.908 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.908 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.910 254096 INFO nova.compute.manager [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Terminating instance#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.911 254096 DEBUG nova.compute.manager [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.915 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089491.8741636, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.916 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.932 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.937 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:51:31 np0005535469 kernel: tap774bb0a0-18 (unregistering): left promiscuous mode
Nov 25 11:51:31 np0005535469 NetworkManager[48891]: <info>  [1764089491.9486] device (tap774bb0a0-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01052|binding|INFO|Releasing lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 from this chassis (sb_readonly=0)
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.955 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01053|binding|INFO|Setting lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 down in Southbound
Nov 25 11:51:31 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:31Z|01054|binding|INFO|Removing iface tap774bb0a0-18 ovn-installed in OVS
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.962 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:33:c8 10.100.0.6'], port_security=['fa:16:3e:8c:33:c8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fa32ddb-072c-480c-9df3-a207412beb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faf3ae8544684cac802ef962ea89ba52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d73d11b-858b-4b1c-bcef-f58415708764', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2880425b-36a3-47bb-868a-17d2ff8251e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=774bb0a0-1853-424e-a6c2-1ee07d3bbf61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.964 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 in datapath 8973f8f9-6cab-4292-a0e3-cbd494454b03 unbound from our chassis#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.965 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8973f8f9-6cab-4292-a0e3-cbd494454b03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f83e106-ca9c-4f80-ba4a-128b2b0f2189]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.966 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 namespace which is not needed anymore#033[00m
Nov 25 11:51:31 np0005535469 nova_compute[254092]: 2025-11-25 16:51:31.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:31 np0005535469 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 11:51:31 np0005535469 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Consumed 3.361s CPU time.
Nov 25 11:51:32 np0005535469 systemd-machined[216343]: Machine qemu-132-instance-00000068 terminated.
Nov 25 11:51:32 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : haproxy version is 2.8.14-c23fe91
Nov 25 11:51:32 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : path to executable is /usr/sbin/haproxy
Nov 25 11:51:32 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [WARNING]  (360889) : Exiting Master process...
Nov 25 11:51:32 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [ALERT]    (360889) : Current worker (360893) exited with code 143 (Terminated)
Nov 25 11:51:32 np0005535469 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [WARNING]  (360889) : All workers exited. Exiting... (0)
Nov 25 11:51:32 np0005535469 systemd[1]: libpod-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b.scope: Deactivated successfully.
Nov 25 11:51:32 np0005535469 podman[361164]: 2025-11-25 16:51:32.087469668 +0000 UTC m=+0.038994630 container died ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:51:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b-userdata-shm.mount: Deactivated successfully.
Nov 25 11:51:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6f60641c11c202cd98160d15300f1c97d553f0de6fcab634db88f9b72731431e-merged.mount: Deactivated successfully.
Nov 25 11:51:32 np0005535469 podman[361164]: 2025-11-25 16:51:32.120014953 +0000 UTC m=+0.071539915 container cleanup ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:51:32 np0005535469 systemd[1]: libpod-conmon-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b.scope: Deactivated successfully.
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.145 254096 INFO nova.virt.libvirt.driver [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance destroyed successfully.#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.146 254096 DEBUG nova.objects.instance [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lazy-loading 'resources' on Instance uuid 2fa32ddb-072c-480c-9df3-a207412beb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.162 254096 DEBUG nova.virt.libvirt.vif [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-12066405',display_name='tempest-ServerGroupTestJSON-server-12066405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-12066405',id=104,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faf3ae8544684cac802ef962ea89ba52',ramdisk_id='',reservation_id='r-yx0g6j90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-671381879',owner_user_name='tempest-ServerGroupTestJSON-671381879-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:29Z,user_data=None,user_id='b117cf5d3a76422aacd4d3a62d7b2f0e',uuid=2fa32ddb-072c-480c-9df3-a207412beb72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.163 254096 DEBUG nova.network.os_vif_util [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converting VIF {"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.164 254096 DEBUG nova.network.os_vif_util [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.164 254096 DEBUG os_vif [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.166 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.166 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap774bb0a0-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.171 254096 INFO os_vif [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18')#033[00m
Nov 25 11:51:32 np0005535469 podman[361195]: 2025-11-25 16:51:32.183724035 +0000 UTC m=+0.041026307 container remove ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5ed6cf-f39d-4d03-b4c9-7749df260c57]: (4, ('Tue Nov 25 04:51:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 (ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b)\necb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b\nTue Nov 25 04:51:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 (ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b)\necb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e61989a3-cfc0-4b3c-9ff4-74851aae98bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.192 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8973f8f9-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:32 np0005535469 kernel: tap8973f8f9-60: left promiscuous mode
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.217 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66cbf923-5f45-4a57-9de7-351f3c6beccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[199513eb-8e5a-4ebf-affd-aaa5f3a86f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d88ea609-61fb-43af-bb73-6e09ed6a0e8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.245 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd80050-d448-4ebb-8d31-f4ce206365ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594624, 'reachable_time': 30721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361237, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.247 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:51:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.247 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[09730bce-a073-40a2-ae88-496a82c27340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:32 np0005535469 systemd[1]: run-netns-ovnmeta\x2d8973f8f9\x2d6cab\x2d4292\x2da0e3\x2dcbd494454b03.mount: Deactivated successfully.
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.484 254096 INFO nova.virt.libvirt.driver [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deleting instance files /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72_del#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.486 254096 INFO nova.virt.libvirt.driver [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deletion of /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72_del complete#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.552 254096 INFO nova.compute.manager [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.554 254096 DEBUG oslo.service.loopingcall [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.555 254096 DEBUG nova.compute.manager [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:51:32 np0005535469 nova_compute[254092]: 2025-11-25 16:51:32.556 254096 DEBUG nova.network.neutron [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.109 254096 DEBUG nova.network.neutron [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.127 254096 INFO nova.compute.manager [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 0.57 seconds to deallocate network for instance.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.177 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.177 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.236 254096 DEBUG oslo_concurrency.processutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.383 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.384 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.384 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.385 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.385 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Processing event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.385 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 WARNING nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-unplugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.388 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.388 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] No waiting events found dispatching network-vif-unplugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.388 254096 WARNING nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received unexpected event network-vif-unplugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] No waiting events found dispatching network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.390 254096 WARNING nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received unexpected event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.390 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-deleted-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.391 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089493.3953347, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.396 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.398 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.402 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance spawned successfully.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.402 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.443 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.451 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.455 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.456 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.456 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.456 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.486 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.531 254096 INFO nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 10.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.531 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.611 254096 INFO nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 11.71 seconds to build instance.#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.626 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186333535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.734 254096 DEBUG oslo_concurrency.processutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.740 254096 DEBUG nova.compute.provider_tree [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.762 254096 DEBUG nova.scheduler.client.report [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.797 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.838 254096 INFO nova.scheduler.client.report [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Deleted allocations for instance 2fa32ddb-072c-480c-9df3-a207412beb72#033[00m
Nov 25 11:51:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 25 11:51:33 np0005535469 nova_compute[254092]: 2025-11-25 16:51:33.888 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 105 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 11:51:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:37 np0005535469 nova_compute[254092]: 2025-11-25 16:51:37.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 9193 writes, 41K keys, 9193 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
Cumulative WAL: 9193 writes, 9193 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1534 writes, 6924 keys, 1534 commit groups, 1.0 writes per commit group, ingest: 9.27 MB, 0.02 MB/s
Interval WAL: 1534 writes, 1534 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     25.8      1.82              0.16        25    0.073       0      0       0.0       0.0
  L6      1/0    9.28 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    102.2     84.9      2.17              0.53        24    0.091    130K    13K       0.0       0.0
 Sum      1/0    9.28 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     55.6     58.0      3.99              0.69        49    0.081    130K    13K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.6     92.9     93.0      0.54              0.16        10    0.054     33K   2569       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    102.2     84.9      2.17              0.53        24    0.091    130K    13K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     26.4      1.77              0.16        24    0.074       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.1 total, 600.0 interval
Flush(GB): cumulative 0.046, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.23 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 4.0 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 26.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000537 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1755,25.55 MB,8.40473%) FilterBlock(50,367.80 KB,0.11815%) IndexBlock(50,631.80 KB,0.202957%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 11:51:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:37Z|01055|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 11:51:37 np0005535469 NetworkManager[48891]: <info>  [1764089497.6682] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 25 11:51:37 np0005535469 NetworkManager[48891]: <info>  [1764089497.6687] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Nov 25 11:51:37 np0005535469 nova_compute[254092]: 2025-11-25 16:51:37.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:37Z|01056|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 11:51:37 np0005535469 nova_compute[254092]: 2025-11-25 16:51:37.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:37 np0005535469 nova_compute[254092]: 2025-11-25 16:51:37.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:37 np0005535469 nova_compute[254092]: 2025-11-25 16:51:37.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 746 KiB/s wr, 199 op/s
Nov 25 11:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 25 11:51:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 25 11:51:37 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.436 254096 DEBUG nova.compute.manager [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG nova.compute.manager [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing instance network info cache due to event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG oslo_concurrency.lockutils [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG oslo_concurrency.lockutils [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG nova.network.neutron [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:51:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:38Z|01057|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:38 np0005535469 nova_compute[254092]: 2025-11-25 16:51:38.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 25 11:51:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 25 11:51:38 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 25 11:51:39 np0005535469 nova_compute[254092]: 2025-11-25 16:51:39.756 254096 DEBUG nova.network.neutron [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updated VIF entry in instance network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:51:39 np0005535469 nova_compute[254092]: 2025-11-25 16:51:39.757 254096 DEBUG nova.network.neutron [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:39 np0005535469 nova_compute[254092]: 2025-11-25 16:51:39.774 254096 DEBUG oslo_concurrency.lockutils [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:51:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 20 KiB/s wr, 228 op/s
Nov 25 11:51:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 25 11:51:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 25 11:51:39 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:51:40
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.mgr', 'backups']
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 25 11:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 25 11:51:40 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 25 11:51:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 11:51:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 25 11:51:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 25 11:51:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 25 11:51:42 np0005535469 nova_compute[254092]: 2025-11-25 16:51:42.172 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 25 11:51:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 25 11:51:43 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 25 11:51:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:43.047 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:51:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:43.048 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:51:43 np0005535469 nova_compute[254092]: 2025-11-25 16:51:43.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:43 np0005535469 nova_compute[254092]: 2025-11-25 16:51:43.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 11:51:45 np0005535469 nova_compute[254092]: 2025-11-25 16:51:45.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:45 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:45Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 11:51:45 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:45Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 11:51:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 104 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.7 MiB/s wr, 195 op/s
Nov 25 11:51:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 25 11:51:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 25 11:51:47 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 25 11:51:47 np0005535469 nova_compute[254092]: 2025-11-25 16:51:47.145 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089492.1432257, 2fa32ddb-072c-480c-9df3-a207412beb72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:47 np0005535469 nova_compute[254092]: 2025-11-25 16:51:47.146 254096 INFO nova.compute.manager [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:51:47 np0005535469 nova_compute[254092]: 2025-11-25 16:51:47.169 254096 DEBUG nova.compute.manager [None req-4a657906-8202-4fb9-a7ed-65914e5ff376 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:47 np0005535469 nova_compute[254092]: 2025-11-25 16:51:47.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:47 np0005535469 nova_compute[254092]: 2025-11-25 16:51:47.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 109 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 4.1 MiB/s wr, 146 op/s
Nov 25 11:51:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 25 11:51:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 25 11:51:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:48.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 25 11:51:48 np0005535469 nova_compute[254092]: 2025-11-25 16:51:48.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 25 11:51:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 25 11:51:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 25 11:51:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 109 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 4.1 MiB/s wr, 146 op/s
Nov 25 11:51:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 25 11:51:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 25 11:51:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.168 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.169 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.189 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:51:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.265 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.265 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.273 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.273 254096 INFO nova.compute.claims [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007562842881096569 of space, bias 1.0, pg target 0.22688528643289707 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671937064530833 of space, bias 1.0, pg target 0.20015811193592498 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.377 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.732 254096 INFO nova.compute.manager [None req-b5f82a09-7244-45a2-a96e-061cbbd2f585 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Get console output#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.738 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:51:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:51:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230878483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.833 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.841 254096 DEBUG nova.compute.provider_tree [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.851 254096 DEBUG nova.scheduler.client.report [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:51:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 241 KiB/s wr, 243 op/s
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.882 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.883 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.936 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.936 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.952 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:51:51 np0005535469 nova_compute[254092]: 2025-11-25 16:51:51.968 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.049 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.050 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.051 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Creating image(s)#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.071 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.099 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.129 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.134 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 25 11:51:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 25 11:51:52 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.209 254096 DEBUG oslo_concurrency.lockutils [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.210 254096 DEBUG oslo_concurrency.lockutils [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.211 254096 DEBUG nova.compute.manager [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.217 254096 DEBUG nova.compute.manager [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.219 254096 DEBUG nova.objects.instance [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.228 254096 DEBUG nova.policy [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '406c96278eea4ca9ac09c960f9240fd6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd04ee87178c14bcc860cdca885ea5685', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.242 254096 DEBUG nova.virt.libvirt.driver [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.244 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.244 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.245 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.245 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.274 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.279 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.612 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:52 np0005535469 podman[361379]: 2025-11-25 16:51:52.646403881 +0000 UTC m=+0.066306182 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 11:51:52 np0005535469 podman[361380]: 2025-11-25 16:51:52.66589157 +0000 UTC m=+0.085801122 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.684 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] resizing rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:51:52 np0005535469 podman[361381]: 2025-11-25 16:51:52.69051696 +0000 UTC m=+0.100924324 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.773 254096 DEBUG nova.objects.instance [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lazy-loading 'migration_context' on Instance uuid afaa5e41-729a-48cb-bfc0-54a38b0dc96f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.783 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Successfully created port: 08e9db98-366d-49ea-aa38-b2d4e8a80e80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.787 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.787 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Ensure instance console log exists: /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.788 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.788 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:52 np0005535469 nova_compute[254092]: 2025-11-25 16:51:52.788 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 25 11:51:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 25 11:51:53 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.509 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Successfully updated port: 08e9db98-366d-49ea-aa38-b2d4e8a80e80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.533 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.534 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquired lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.534 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.710 254096 DEBUG nova.compute.manager [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-changed-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.710 254096 DEBUG nova.compute.manager [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Refreshing instance network info cache due to event network-changed-08e9db98-366d-49ea-aa38-b2d4e8a80e80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.711 254096 DEBUG oslo_concurrency.lockutils [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.766 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:51:53 np0005535469 nova_compute[254092]: 2025-11-25 16:51:53.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 243 KiB/s wr, 246 op/s
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:51:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4caa3808-6e2f-45e5-a543-39ef0497e8fb does not exist
Nov 25 11:51:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fe97f82c-dbac-4001-a1a6-60421bc4ca8f does not exist
Nov 25 11:51:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dfbc2fdd-dfe4-4351-8762-830c4f9863b5 does not exist
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:51:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:51:54 np0005535469 kernel: tap0cdb5ab1-84 (unregistering): left promiscuous mode
Nov 25 11:51:54 np0005535469 NetworkManager[48891]: <info>  [1764089514.5623] device (tap0cdb5ab1-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:54Z|01058|binding|INFO|Releasing lport 0cdb5ab1-8463-4494-a522-360862f2152e from this chassis (sb_readonly=0)
Nov 25 11:51:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:54Z|01059|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e down in Southbound
Nov 25 11:51:54 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:54Z|01060|binding|INFO|Removing iface tap0cdb5ab1-84 ovn-installed in OVS
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.578 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a unbound from our chassis#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.580 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b70d379-8b3d-4361-b11d-cafbb578194a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.581 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb942ee5-ac13-4582-a01f-164abb8085bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.581 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace which is not needed anymore#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.620923259 +0000 UTC m=+0.066266442 container create e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:51:54 np0005535469 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 25 11:51:54 np0005535469 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000069.scope: Consumed 13.694s CPU time.
Nov 25 11:51:54 np0005535469 systemd[1]: Started libpod-conmon-e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865.scope.
Nov 25 11:51:54 np0005535469 systemd-machined[216343]: Machine qemu-133-instance-00000069 terminated.
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.581480087 +0000 UTC m=+0.026823290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:51:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:54 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : haproxy version is 2.8.14-c23fe91
Nov 25 11:51:54 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : path to executable is /usr/sbin/haproxy
Nov 25 11:51:54 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [WARNING]  (361132) : Exiting Master process...
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.701723685 +0000 UTC m=+0.147066898 container init e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:51:54 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [ALERT]    (361132) : Current worker (361134) exited with code 143 (Terminated)
Nov 25 11:51:54 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [WARNING]  (361132) : All workers exited. Exiting... (0)
Nov 25 11:51:54 np0005535469 systemd[1]: libpod-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e.scope: Deactivated successfully.
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.708110848 +0000 UTC m=+0.153454031 container start e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.711903582 +0000 UTC m=+0.157246755 container attach e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 25 11:51:54 np0005535469 inspiring_bohr[361823]: 167 167
Nov 25 11:51:54 np0005535469 systemd[1]: libpod-e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865.scope: Deactivated successfully.
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.713869014 +0000 UTC m=+0.159212197 container died e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 11:51:54 np0005535469 podman[361821]: 2025-11-25 16:51:54.713882035 +0000 UTC m=+0.047447801 container died a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:51:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-02b6a7f7b9d2bdfecb85bba395c3ea10c90c41ec28e482c4e1ff05faa81e0e9d-merged.mount: Deactivated successfully.
Nov 25 11:51:54 np0005535469 podman[361780]: 2025-11-25 16:51:54.758463846 +0000 UTC m=+0.203807029 container remove e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 11:51:54 np0005535469 systemd[1]: libpod-conmon-e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865.scope: Deactivated successfully.
Nov 25 11:51:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e-userdata-shm.mount: Deactivated successfully.
Nov 25 11:51:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0d8c25a22fe3e9484bc39161e604e238df356a2c17ccd6f67b57960b5feb4d5f-merged.mount: Deactivated successfully.
Nov 25 11:51:54 np0005535469 podman[361821]: 2025-11-25 16:51:54.790324093 +0000 UTC m=+0.123889829 container cleanup a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:51:54 np0005535469 systemd[1]: libpod-conmon-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e.scope: Deactivated successfully.
Nov 25 11:51:54 np0005535469 podman[361867]: 2025-11-25 16:51:54.852630146 +0000 UTC m=+0.042035474 container remove a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.859 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e853653-9e6f-4056-84ca-e4037b208a4c]: (4, ('Tue Nov 25 04:51:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e)\na355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e\nTue Nov 25 04:51:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e)\na355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.862 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af58ed18-4864-4ed2-9fe2-9e0324a1c290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:54 np0005535469 kernel: tap1b70d379-80: left promiscuous mode
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG nova.compute.manager [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG oslo_concurrency.lockutils [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG oslo_concurrency.lockutils [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG oslo_concurrency.lockutils [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.932 254096 DEBUG nova.compute.manager [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:54 np0005535469 nova_compute[254092]: 2025-11-25 16:51:54.932 254096 WARNING nova.compute.manager [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state active and task_state powering-off.#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.935 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b49009f8-017f-4849-b7fc-999e6cf2a89c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[817dac9b-ab43-4e90-b484-0d645be644a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a09dd5b-5fa5-4bcf-a598-2fc19547372a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 podman[361893]: 2025-11-25 16:51:54.950812803 +0000 UTC m=+0.041624152 container create 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4efad0ec-b7ca-4ad8-a214-13d2c6d72c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594877, 'reachable_time': 29942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361912, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.967 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:51:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.967 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c6187ffa-9678-49da-8745-2f1ad970f171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:54 np0005535469 systemd[1]: Started libpod-conmon-734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7.scope.
Nov 25 11:51:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:55 np0005535469 podman[361893]: 2025-11-25 16:51:54.930908553 +0000 UTC m=+0.021719902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:51:55 np0005535469 podman[361893]: 2025-11-25 16:51:55.029894692 +0000 UTC m=+0.120706051 container init 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:51:55 np0005535469 podman[361893]: 2025-11-25 16:51:55.039307149 +0000 UTC m=+0.130118518 container start 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:51:55 np0005535469 podman[361893]: 2025-11-25 16:51:55.042900326 +0000 UTC m=+0.133711675 container attach 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.184 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updating instance_info_cache with network_info: [{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Releasing lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance network_info: |[{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG oslo_concurrency.lockutils [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG nova.network.neutron [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Refreshing network info cache for port 08e9db98-366d-49ea-aa38-b2d4e8a80e80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.204 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start _get_guest_xml network_info=[{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.209 254096 WARNING nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.219 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.220 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.222 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.223 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.226 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.226 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.228 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:51:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875539692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:51:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:51:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875539692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.352 254096 INFO nova.virt.libvirt.driver [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance shutdown successfully after 3 seconds.#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.358 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance destroyed successfully.#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.359 254096 DEBUG nova.objects.instance [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.372 254096 DEBUG nova.compute.manager [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.414 254096 DEBUG oslo_concurrency.lockutils [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:55 np0005535469 systemd[1]: run-netns-ovnmeta\x2d1b70d379\x2d8b3d\x2d4361\x2db11d\x2dcafbb578194a.mount: Deactivated successfully.
Nov 25 11:51:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:51:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296492758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.701 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.721 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:55 np0005535469 nova_compute[254092]: 2025-11-25 16:51:55.725 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 153 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.3 MiB/s wr, 254 op/s
Nov 25 11:51:56 np0005535469 admiring_edison[361916]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:51:56 np0005535469 admiring_edison[361916]: --> relative data size: 1.0
Nov 25 11:51:56 np0005535469 admiring_edison[361916]: --> All data devices are unavailable
Nov 25 11:51:56 np0005535469 systemd[1]: libpod-734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7.scope: Deactivated successfully.
Nov 25 11:51:56 np0005535469 podman[361893]: 2025-11-25 16:51:56.067605733 +0000 UTC m=+1.158417082 container died 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 11:51:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711-merged.mount: Deactivated successfully.
Nov 25 11:51:56 np0005535469 podman[361893]: 2025-11-25 16:51:56.12604455 +0000 UTC m=+1.216855909 container remove 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083064880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:51:56 np0005535469 systemd[1]: libpod-conmon-734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7.scope: Deactivated successfully.
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.154 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.158 254096 DEBUG nova.virt.libvirt.vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1469913116',display_name='tempest-ServerMetadataTestJSON-server-1469913116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1469913116',id=106,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d04ee87178c14bcc860cdca885ea5685',ramdisk_id='',reservation_id='r-0xb1sqh5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-246494039',owner_user_name='tempest-ServerMetadataT
estJSON-246494039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:52Z,user_data=None,user_id='406c96278eea4ca9ac09c960f9240fd6',uuid=afaa5e41-729a-48cb-bfc0-54a38b0dc96f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.159 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converting VIF {"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.160 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.162 254096 DEBUG nova.objects.instance [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lazy-loading 'pci_devices' on Instance uuid afaa5e41-729a-48cb-bfc0-54a38b0dc96f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.175 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <uuid>afaa5e41-729a-48cb-bfc0-54a38b0dc96f</uuid>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <name>instance-0000006a</name>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerMetadataTestJSON-server-1469913116</nova:name>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:51:55</nova:creationTime>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:user uuid="406c96278eea4ca9ac09c960f9240fd6">tempest-ServerMetadataTestJSON-246494039-project-member</nova:user>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:project uuid="d04ee87178c14bcc860cdca885ea5685">tempest-ServerMetadataTestJSON-246494039</nova:project>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <nova:port uuid="08e9db98-366d-49ea-aa38-b2d4e8a80e80">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <entry name="serial">afaa5e41-729a-48cb-bfc0-54a38b0dc96f</entry>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <entry name="uuid">afaa5e41-729a-48cb-bfc0-54a38b0dc96f</entry>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e2:98:e4"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <target dev="tap08e9db98-36"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/console.log" append="off"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:51:56 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:51:56 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:51:56 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:51:56 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.177 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Preparing to wait for external event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.177 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.177 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.178 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.178 254096 DEBUG nova.virt.libvirt.vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1469913116',display_name='tempest-ServerMetadataTestJSON-server-1469913116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1469913116',id=106,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d04ee87178c14bcc860cdca885ea5685',ramdisk_id='',reservation_id='r-0xb1sqh5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-246494039',owner_user_name='tempest-Serve
rMetadataTestJSON-246494039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:52Z,user_data=None,user_id='406c96278eea4ca9ac09c960f9240fd6',uuid=afaa5e41-729a-48cb-bfc0-54a38b0dc96f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.179 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converting VIF {"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.179 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.179 254096 DEBUG os_vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.180 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.181 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.183 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08e9db98-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08e9db98-36, col_values=(('external_ids', {'iface-id': '08e9db98-366d-49ea-aa38-b2d4e8a80e80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:98:e4', 'vm-uuid': 'afaa5e41-729a-48cb-bfc0-54a38b0dc96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:56 np0005535469 NetworkManager[48891]: <info>  [1764089516.1871] manager: (tap08e9db98-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.196 254096 INFO os_vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36')#033[00m
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.246 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.247 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.247 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] No VIF found with MAC fa:16:3e:e2:98:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.248 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Using config drive#033[00m
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.259738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516259771, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2427, "num_deletes": 510, "total_data_size": 3207647, "memory_usage": 3264272, "flush_reason": "Manual Compaction"}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516277249, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2014992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40183, "largest_seqno": 42609, "table_properties": {"data_size": 2006806, "index_size": 4172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 23243, "raw_average_key_size": 19, "raw_value_size": 1986766, "raw_average_value_size": 1690, "num_data_blocks": 187, "num_entries": 1175, "num_filter_entries": 1175, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089331, "oldest_key_time": 1764089331, "file_creation_time": 1764089516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 17553 microseconds, and 5208 cpu microseconds.
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.277290) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2014992 bytes OK
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.277312) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.294412) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.294492) EVENT_LOG_v1 {"time_micros": 1764089516294482, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.294513) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3196352, prev total WAL file size 3196352, number of live WAL files 2.
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.295508) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353037' seq:72057594037927935, type:22 .. '6D6772737461740031373539' seq:0, type:0; will stop at (end)
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1967KB)], [89(9497KB)]
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516295559, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11740761, "oldest_snapshot_seqno": -1}
Nov 25 11:51:56 np0005535469 nova_compute[254092]: 2025-11-25 16:51:56.299 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6736 keys, 8868334 bytes, temperature: kUnknown
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516353296, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 8868334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8824295, "index_size": 26071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 172814, "raw_average_key_size": 25, "raw_value_size": 8704575, "raw_average_value_size": 1292, "num_data_blocks": 1032, "num_entries": 6736, "num_filter_entries": 6736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.353572) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8868334 bytes
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.355080) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.0 rd, 153.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(10.2) write-amplify(4.4) OK, records in: 7679, records dropped: 943 output_compression: NoCompression
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.355095) EVENT_LOG_v1 {"time_micros": 1764089516355088, "job": 52, "event": "compaction_finished", "compaction_time_micros": 57832, "compaction_time_cpu_micros": 21501, "output_level": 6, "num_output_files": 1, "total_output_size": 8868334, "num_input_records": 7679, "num_output_records": 6736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516355539, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516357141, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.295422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:51:56 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.733898439 +0000 UTC m=+0.044267483 container create e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:51:56 np0005535469 systemd[1]: Started libpod-conmon-e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841.scope.
Nov 25 11:51:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.715346965 +0000 UTC m=+0.025716009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.815231709 +0000 UTC m=+0.125600743 container init e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.821929422 +0000 UTC m=+0.132298426 container start e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.824991005 +0000 UTC m=+0.135360059 container attach e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:51:56 np0005535469 unruffled_mahavira[362201]: 167 167
Nov 25 11:51:56 np0005535469 systemd[1]: libpod-e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841.scope: Deactivated successfully.
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.82887242 +0000 UTC m=+0.139241504 container died e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:51:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b758801e73ad4e7fe87bf89ad950a25762421ad666eb3b30180835238b991a5c-merged.mount: Deactivated successfully.
Nov 25 11:51:56 np0005535469 podman[362184]: 2025-11-25 16:51:56.87998216 +0000 UTC m=+0.190351214 container remove e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:51:56 np0005535469 systemd[1]: libpod-conmon-e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841.scope: Deactivated successfully.
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.062 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Creating config drive at /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config#033[00m
Nov 25 11:51:57 np0005535469 podman[362226]: 2025-11-25 16:51:57.068751029 +0000 UTC m=+0.047915533 container create 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.073 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1jccc7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:57 np0005535469 systemd[1]: Started libpod-conmon-82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3.scope.
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.127 254096 DEBUG nova.compute.manager [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.127 254096 DEBUG oslo_concurrency.lockutils [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 DEBUG oslo_concurrency.lockutils [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 DEBUG oslo_concurrency.lockutils [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 DEBUG nova.compute.manager [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 WARNING nova.compute.manager [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state stopped and task_state None.#033[00m
Nov 25 11:51:57 np0005535469 podman[362226]: 2025-11-25 16:51:57.047495481 +0000 UTC m=+0.026660045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:51:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:57 np0005535469 podman[362226]: 2025-11-25 16:51:57.174431671 +0000 UTC m=+0.153596205 container init 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:51:57 np0005535469 podman[362226]: 2025-11-25 16:51:57.187012593 +0000 UTC m=+0.166177107 container start 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:51:57 np0005535469 podman[362226]: 2025-11-25 16:51:57.19024925 +0000 UTC m=+0.169413794 container attach 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.223 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1jccc7b" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.261 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.267 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.438 254096 DEBUG nova.network.neutron [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updated VIF entry in instance network info cache for port 08e9db98-366d-49ea-aa38-b2d4e8a80e80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.440 254096 DEBUG nova.network.neutron [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updating instance_info_cache with network_info: [{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.447 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.448 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deleting local config drive /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config because it was imported into RBD.#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.457 254096 DEBUG oslo_concurrency.lockutils [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:51:57 np0005535469 NetworkManager[48891]: <info>  [1764089517.5091] manager: (tap08e9db98-36): new Tun device (/org/freedesktop/NetworkManager/Devices/434)
Nov 25 11:51:57 np0005535469 systemd-udevd[361798]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:51:57 np0005535469 kernel: tap08e9db98-36: entered promiscuous mode
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:57Z|01061|binding|INFO|Claiming lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 for this chassis.
Nov 25 11:51:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:57Z|01062|binding|INFO|08e9db98-366d-49ea-aa38-b2d4e8a80e80: Claiming fa:16:3e:e2:98:e4 10.100.0.12
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.528 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:98:e4 10.100.0.12'], port_security=['fa:16:3e:e2:98:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'afaa5e41-729a-48cb-bfc0-54a38b0dc96f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-461d8b90-d4fc-454e-911d-7fee7be073c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd04ee87178c14bcc860cdca885ea5685', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94eded8c-472b-4a0e-a390-113a914b266f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=957a0d26-cdd2-49bf-b411-89b73b8c3e75, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=08e9db98-366d-49ea-aa38-b2d4e8a80e80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:51:57 np0005535469 NetworkManager[48891]: <info>  [1764089517.5308] device (tap08e9db98-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:51:57 np0005535469 NetworkManager[48891]: <info>  [1764089517.5321] device (tap08e9db98-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.532 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 08e9db98-366d-49ea-aa38-b2d4e8a80e80 in datapath 461d8b90-d4fc-454e-911d-7fee7be073c4 bound to our chassis#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.534 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 461d8b90-d4fc-454e-911d-7fee7be073c4#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.552 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f191d6f4-ce66-4562-bdd7-701e4975244d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.553 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap461d8b90-d1 in ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.555 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap461d8b90-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0299ef4-52dd-40c1-bc35-6adec128a2b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3dbf404-dc92-47b9-904d-d72d5a8090ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 systemd-machined[216343]: New machine qemu-134-instance-0000006a.
Nov 25 11:51:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:57Z|01063|binding|INFO|Setting lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 ovn-installed in OVS
Nov 25 11:51:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:57Z|01064|binding|INFO|Setting lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 up in Southbound
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 systemd[1]: Started Virtual Machine qemu-134-instance-0000006a.
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.580 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8847d402-12de-4541-9f34-99b45a757fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.608 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[882fa0a7-8b3e-41dc-9641-45327cf919f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.641 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[108c06fb-daa5-4540-b7d9-0daa1fcd5156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.648 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5600c4e2-e07c-4d1a-8efc-3f18608ee0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 NetworkManager[48891]: <info>  [1764089517.6488] manager: (tap461d8b90-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/435)
Nov 25 11:51:57 np0005535469 systemd-udevd[362319]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.681 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed77900-5062-4282-b954-e768b0bf3a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.686 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bd6ee6-c4d5-4a23-bb04-3c8fb3d939fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 NetworkManager[48891]: <info>  [1764089517.7165] device (tap461d8b90-d0): carrier: link connected
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.722 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8be1ae47-719e-4be3-821e-50e44a5d0439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66012451-3946-4456-8727-006be3e2cf31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap461d8b90-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e2:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 313], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597530, 'reachable_time': 22263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362341, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[12d2ff25-8197-4f62-8088-5ec7dc4f73a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e2a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597530, 'tstamp': 597530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362342, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.779 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99162a79-6734-400f-8cef-97c64c1cb91e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap461d8b90-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e2:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 313], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597530, 'reachable_time': 22263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362343, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7a1998-7812-49d8-8aef-1e6d7101d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c95cc443-d126-40f4-8b85-6f0a9c8ccdf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.876 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap461d8b90-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap461d8b90-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 25 11:51:57 np0005535469 NetworkManager[48891]: <info>  [1764089517.8798] manager: (tap461d8b90-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 kernel: tap461d8b90-d0: entered promiscuous mode
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.889 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap461d8b90-d0, col_values=(('external_ids', {'iface-id': '6b5b1d51-4b75-47f2-a227-9f92a6bb6041'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:51:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:51:57Z|01065|binding|INFO|Releasing lport 6b5b1d51-4b75-47f2-a227-9f92a6bb6041 from this chassis (sb_readonly=0)
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.911 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/461d8b90-d4fc-454e-911d-7fee7be073c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/461d8b90-d4fc-454e-911d-7fee7be073c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.912 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f245930-71ec-4ddc-9edf-26ef6dea15ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.912 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-461d8b90-d4fc-454e-911d-7fee7be073c4
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/461d8b90-d4fc-454e-911d-7fee7be073c4.pid.haproxy
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 461d8b90-d4fc-454e-911d-7fee7be073c4
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:51:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.913 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'env', 'PROCESS_TAG=haproxy-461d8b90-d4fc-454e-911d-7fee7be073c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/461d8b90-d4fc-454e-911d-7fee7be073c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.925 254096 DEBUG nova.compute.manager [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG oslo_concurrency.lockutils [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG oslo_concurrency.lockutils [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG oslo_concurrency.lockutils [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:57 np0005535469 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG nova.compute.manager [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Processing event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]: {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:    "0": [
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:        {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "devices": [
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "/dev/loop3"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            ],
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_name": "ceph_lv0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_size": "21470642176",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "name": "ceph_lv0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "tags": {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cluster_name": "ceph",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.crush_device_class": "",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.encrypted": "0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osd_id": "0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.type": "block",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.vdo": "0"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            },
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "type": "block",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "vg_name": "ceph_vg0"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:        }
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:    ],
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:    "1": [
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:        {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "devices": [
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "/dev/loop4"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            ],
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_name": "ceph_lv1",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_size": "21470642176",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "name": "ceph_lv1",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "tags": {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cluster_name": "ceph",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.crush_device_class": "",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.encrypted": "0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osd_id": "1",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.type": "block",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.vdo": "0"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            },
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "type": "block",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "vg_name": "ceph_vg1"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:        }
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:    ],
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:    "2": [
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:        {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "devices": [
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "/dev/loop5"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            ],
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_name": "ceph_lv2",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_size": "21470642176",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "name": "ceph_lv2",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "tags": {
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.cluster_name": "ceph",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.crush_device_class": "",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.encrypted": "0",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osd_id": "2",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.type": "block",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:                "ceph.vdo": "0"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            },
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "type": "block",
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:            "vg_name": "ceph_vg2"
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:        }
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]:    ]
Nov 25 11:51:57 np0005535469 intelligent_kapitsa[362244]: }
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.001 254096 INFO nova.compute.manager [None req-fb96f65d-8461-4e36-9e2c-acc401e25086 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Get console output#033[00m
Nov 25 11:51:58 np0005535469 systemd[1]: libpod-82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3.scope: Deactivated successfully.
Nov 25 11:51:58 np0005535469 podman[362226]: 2025-11-25 16:51:58.016587636 +0000 UTC m=+0.995752150 container died 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 11:51:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d-merged.mount: Deactivated successfully.
Nov 25 11:51:58 np0005535469 podman[362226]: 2025-11-25 16:51:58.077397989 +0000 UTC m=+1.056562503 container remove 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:51:58 np0005535469 systemd[1]: libpod-conmon-82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3.scope: Deactivated successfully.
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.332 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089518.3221192, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.334 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Started (Lifecycle Event)#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.336 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.345 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:51:58 np0005535469 podman[362485]: 2025-11-25 16:51:58.349626737 +0000 UTC m=+0.050951156 container create b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.349 254096 INFO nova.virt.libvirt.driver [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance spawned successfully.#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.350 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.361 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.362 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.369 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.373 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.373 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.375 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.375 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG oslo_concurrency.lockutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG oslo_concurrency.lockutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG nova.network.neutron [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'info_cache' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.406 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.407 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089518.3224535, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:51:58 np0005535469 systemd[1]: Started libpod-conmon-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74.scope.
Nov 25 11:51:58 np0005535469 podman[362485]: 2025-11-25 16:51:58.323836926 +0000 UTC m=+0.025161365 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:51:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c6aa58a7c081ad9b5f642b8ac1385a57de4dfb79d58b6ff8d6feed85edc246c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.442 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089518.3439662, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.442 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:51:58 np0005535469 podman[362485]: 2025-11-25 16:51:58.455170285 +0000 UTC m=+0.156494704 container init b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.460 254096 INFO nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 6.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.460 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:58 np0005535469 podman[362485]: 2025-11-25 16:51:58.461954579 +0000 UTC m=+0.163278998 container start b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.461 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.471 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:51:58 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : New worker (362556) forked
Nov 25 11:51:58 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : Loading success.
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.504 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.523 254096 INFO nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 7.29 seconds to build instance.#033[00m
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.540 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.782129271 +0000 UTC m=+0.043910605 container create 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:51:58 np0005535469 nova_compute[254092]: 2025-11-25 16:51:58.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:51:58 np0005535469 systemd[1]: Started libpod-conmon-39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b.scope.
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.762918249 +0000 UTC m=+0.024699573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:51:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.904315471 +0000 UTC m=+0.166096825 container init 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.915350261 +0000 UTC m=+0.177131585 container start 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.919409901 +0000 UTC m=+0.181191295 container attach 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:51:58 np0005535469 sad_hellman[362622]: 167 167
Nov 25 11:51:58 np0005535469 systemd[1]: libpod-39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b.scope: Deactivated successfully.
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.924574391 +0000 UTC m=+0.186355735 container died 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:51:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cfa958b78d903d4f52217048ec5002aad5f94001f4df740fa78acf0490c58f88-merged.mount: Deactivated successfully.
Nov 25 11:51:58 np0005535469 podman[362605]: 2025-11-25 16:51:58.972499224 +0000 UTC m=+0.234280548 container remove 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:51:58 np0005535469 systemd[1]: libpod-conmon-39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b.scope: Deactivated successfully.
Nov 25 11:51:59 np0005535469 podman[362649]: 2025-11-25 16:51:59.140204461 +0000 UTC m=+0.040910552 container create a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 11:51:59 np0005535469 systemd[1]: Started libpod-conmon-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope.
Nov 25 11:51:59 np0005535469 podman[362649]: 2025-11-25 16:51:59.122473639 +0000 UTC m=+0.023179760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:51:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:51:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:59 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:51:59 np0005535469 podman[362649]: 2025-11-25 16:51:59.250358735 +0000 UTC m=+0.151064866 container init a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 11:51:59 np0005535469 podman[362649]: 2025-11-25 16:51:59.258207908 +0000 UTC m=+0.158914039 container start a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:51:59 np0005535469 podman[362649]: 2025-11-25 16:51:59.262811493 +0000 UTC m=+0.163517664 container attach a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:51:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.209 254096 DEBUG nova.compute.manager [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.211 254096 DEBUG oslo_concurrency.lockutils [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.211 254096 DEBUG oslo_concurrency.lockutils [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.211 254096 DEBUG oslo_concurrency.lockutils [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.212 254096 DEBUG nova.compute.manager [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] No waiting events found dispatching network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.212 254096 WARNING nova.compute.manager [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received unexpected event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:52:00 np0005535469 crazy_carver[362666]: {
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "osd_id": 1,
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "type": "bluestore"
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:    },
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "osd_id": 2,
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "type": "bluestore"
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:    },
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "osd_id": 0,
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:        "type": "bluestore"
Nov 25 11:52:00 np0005535469 crazy_carver[362666]:    }
Nov 25 11:52:00 np0005535469 crazy_carver[362666]: }
Nov 25 11:52:00 np0005535469 systemd[1]: libpod-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope: Deactivated successfully.
Nov 25 11:52:00 np0005535469 podman[362649]: 2025-11-25 16:52:00.258758898 +0000 UTC m=+1.159464999 container died a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:52:00 np0005535469 systemd[1]: libpod-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope: Consumed 1.004s CPU time.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.269204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520269274, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 285, "num_deletes": 251, "total_data_size": 69887, "memory_usage": 75368, "flush_reason": "Manual Compaction"}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520271397, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 69520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42610, "largest_seqno": 42894, "table_properties": {"data_size": 67592, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4952, "raw_average_key_size": 18, "raw_value_size": 63852, "raw_average_value_size": 236, "num_data_blocks": 7, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089517, "oldest_key_time": 1764089517, "file_creation_time": 1764089520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 2220 microseconds, and 1170 cpu microseconds.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.271435) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 69520 bytes OK
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.271449) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272427) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272441) EVENT_LOG_v1 {"time_micros": 1764089520272436, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 67761, prev total WAL file size 67761, number of live WAL files 2.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272754) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(67KB)], [92(8660KB)]
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520272881, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8937854, "oldest_snapshot_seqno": -1}
Nov 25 11:52:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1-merged.mount: Deactivated successfully.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6497 keys, 7286138 bytes, temperature: kUnknown
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520303822, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7286138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7245173, "index_size": 23639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 168547, "raw_average_key_size": 25, "raw_value_size": 7131038, "raw_average_value_size": 1097, "num_data_blocks": 921, "num_entries": 6497, "num_filter_entries": 6497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.304157) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7286138 bytes
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.305873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 287.4 rd, 234.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(233.4) write-amplify(104.8) OK, records in: 7006, records dropped: 509 output_compression: NoCompression
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.305890) EVENT_LOG_v1 {"time_micros": 1764089520305882, "job": 54, "event": "compaction_finished", "compaction_time_micros": 31096, "compaction_time_cpu_micros": 16152, "output_level": 6, "num_output_files": 1, "total_output_size": 7286138, "num_input_records": 7006, "num_output_records": 6497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520306331, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520307782, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:00 np0005535469 podman[362649]: 2025-11-25 16:52:00.322815239 +0000 UTC m=+1.223521340 container remove a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:52:00 np0005535469 systemd[1]: libpod-conmon-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope: Deactivated successfully.
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:52:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:52:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1271be50-61d3-4c31-9c30-1eee1d9c14d0 does not exist
Nov 25 11:52:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c9d3a61f-dac4-4bb1-9e26-c617601b2e76 does not exist
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.705 254096 DEBUG nova.network.neutron [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.733 254096 DEBUG oslo_concurrency.lockutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance destroyed successfully.#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.757 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.810 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.829 254096 DEBUG nova.virt.libvirt.vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:55Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.830 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.831 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.831 254096 DEBUG os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.833 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.833 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cdb5ab1-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.837 254096 INFO os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.843 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start _get_guest_xml network_info=[{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.847 254096 WARNING nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.854 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.855 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.858 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.858 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:00 np0005535469 nova_compute[254092]: 2025-11-25 16:52:00.873 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020784841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.308 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.342 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405001762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.782 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.783 254096 DEBUG nova.virt.libvirt.vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:55Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.784 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.785 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.786 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.804 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <uuid>7368c721-3e2a-4635-b2d8-5703d20438d3</uuid>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <name>instance-00000069</name>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1076158717</nova:name>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:52:00</nova:creationTime>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <nova:port uuid="0cdb5ab1-8463-4494-a522-360862f2152e">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <entry name="serial">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <entry name="uuid">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ba:99:c7"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <target dev="tap0cdb5ab1-84"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/console.log" append="off"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:52:01 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:52:01 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:52:01 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:52:01 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.809 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.810 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.810 254096 DEBUG nova.virt.libvirt.vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:55Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.811 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.811 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.812 254096 DEBUG os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.813 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.813 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.815 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cdb5ab1-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.816 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0cdb5ab1-84, col_values=(('external_ids', {'iface-id': '0cdb5ab1-8463-4494-a522-360862f2152e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:99:c7', 'vm-uuid': '7368c721-3e2a-4635-b2d8-5703d20438d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:01 np0005535469 NetworkManager[48891]: <info>  [1764089521.8180] manager: (tap0cdb5ab1-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.825 254096 INFO os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')#033[00m
Nov 25 11:52:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Nov 25 11:52:01 np0005535469 kernel: tap0cdb5ab1-84: entered promiscuous mode
Nov 25 11:52:01 np0005535469 NetworkManager[48891]: <info>  [1764089521.8947] manager: (tap0cdb5ab1-84): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.895 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:01Z|01066|binding|INFO|Claiming lport 0cdb5ab1-8463-4494-a522-360862f2152e for this chassis.
Nov 25 11:52:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:01Z|01067|binding|INFO|0cdb5ab1-8463-4494-a522-360862f2152e: Claiming fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.903 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.906 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a bound to our chassis#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.908 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b70d379-8b3d-4361-b11d-cafbb578194a#033[00m
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:01Z|01068|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e ovn-installed in OVS
Nov 25 11:52:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:01Z|01069|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e up in Southbound
Nov 25 11:52:01 np0005535469 nova_compute[254092]: 2025-11-25 16:52:01.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b28517c3-2461-4677-bdc6-145f4225c6d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.922 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b70d379-81 in ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.924 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b70d379-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.924 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a067847-de20-4c93-bf4c-cea79ba5c650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.928 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7c1658-cc3f-407b-b704-0cd8697b1894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:01 np0005535469 systemd-machined[216343]: New machine qemu-135-instance-00000069.
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.940 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c3f76d-5bdf-4511-9523-305d742d6e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:01 np0005535469 systemd[1]: Started Virtual Machine qemu-135-instance-00000069.
Nov 25 11:52:01 np0005535469 systemd-udevd[362841]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:52:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.972 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bed025f4-e9a3-440a-ad14-ab48aa0df8d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:01 np0005535469 NetworkManager[48891]: <info>  [1764089521.9854] device (tap0cdb5ab1-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:52:01 np0005535469 NetworkManager[48891]: <info>  [1764089521.9863] device (tap0cdb5ab1-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.007 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dcce64f5-d296-47b7-9c8f-d22758b84c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 systemd-udevd[362846]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:52:02 np0005535469 NetworkManager[48891]: <info>  [1764089522.0126] manager: (tap1b70d379-80): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45a68518-bec2-4129-a05a-727662ac64c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.041 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9da859-01ec-447c-9bf7-a703f83993c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.044 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8b8cb7-a323-4502-87ba-338bc9ad59dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 NetworkManager[48891]: <info>  [1764089522.0678] device (tap1b70d379-80): carrier: link connected
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.073 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9b835a61-c729-4453-ac02-13fd90e65ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.087 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4221330-284b-45e0-87cd-744a65b0ba2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597965, 'reachable_time': 34503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362871, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d250a6fb-adb9-4376-98c3-3ff68c44dbe9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:396a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597965, 'tstamp': 597965}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362872, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.111 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34952e07-965a-4629-a4e6-af3bb2c03d1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597965, 'reachable_time': 34503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362873, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8874d915-e8fb-4492-94db-31b93e3dccbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[439ee9d2-a1fb-4ef0-b9a8-f55c733b1b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.193 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.193 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.194 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b70d379-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:02 np0005535469 kernel: tap1b70d379-80: entered promiscuous mode
Nov 25 11:52:02 np0005535469 NetworkManager[48891]: <info>  [1764089522.1978] manager: (tap1b70d379-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.204 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b70d379-80, col_values=(('external_ids', {'iface-id': '43f83cca-eded-4f81-a561-02d17bd21a2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:02Z|01070|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.210 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.213 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[511c2b34-aaf5-4a12-94e7-4633906fce6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.215 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:52:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.216 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'env', 'PROCESS_TAG=haproxy-1b70d379-8b3d-4361-b11d-cafbb578194a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b70d379-8b3d-4361-b11d-cafbb578194a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.341 254096 DEBUG nova.compute.manager [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.341 254096 DEBUG oslo_concurrency.lockutils [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.341 254096 DEBUG oslo_concurrency.lockutils [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.342 254096 DEBUG oslo_concurrency.lockutils [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.342 254096 DEBUG nova.compute.manager [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.342 254096 WARNING nova.compute.manager [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 25 11:52:02 np0005535469 podman[362905]: 2025-11-25 16:52:02.60028673 +0000 UTC m=+0.048015867 container create d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:52:02 np0005535469 systemd[1]: Started libpod-conmon-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a.scope.
Nov 25 11:52:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:52:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4c45e0bc9eac1e04b6c01183480c0c616473c9660555153fe1e9a6aebaab91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:52:02 np0005535469 podman[362905]: 2025-11-25 16:52:02.577400347 +0000 UTC m=+0.025129504 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:52:02 np0005535469 podman[362905]: 2025-11-25 16:52:02.674155767 +0000 UTC m=+0.121884934 container init d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:52:02 np0005535469 podman[362905]: 2025-11-25 16:52:02.679813041 +0000 UTC m=+0.127542178 container start d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:52:02 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : New worker (362967) forked
Nov 25 11:52:02 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : Loading success.
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.766 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 7368c721-3e2a-4635-b2d8-5703d20438d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.766 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089522.7656937, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.768 254096 DEBUG nova.compute.manager [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.771 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance rebooted successfully.#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.772 254096 DEBUG nova.compute.manager [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.795 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.798 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.828 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089522.7668285, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Started (Lifecycle Event)#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.856 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:02 np0005535469 nova_compute[254092]: 2025-11-25 16:52:02.861 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.662 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.662 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.662 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.663 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.663 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.665 254096 INFO nova.compute.manager [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Terminating instance#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.666 254096 DEBUG nova.compute.manager [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:52:03 np0005535469 kernel: tap08e9db98-36 (unregistering): left promiscuous mode
Nov 25 11:52:03 np0005535469 NetworkManager[48891]: <info>  [1764089523.7094] device (tap08e9db98-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:03Z|01071|binding|INFO|Releasing lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 from this chassis (sb_readonly=0)
Nov 25 11:52:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:03Z|01072|binding|INFO|Setting lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 down in Southbound
Nov 25 11:52:03 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:03Z|01073|binding|INFO|Removing iface tap08e9db98-36 ovn-installed in OVS
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.728 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:98:e4 10.100.0.12'], port_security=['fa:16:3e:e2:98:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'afaa5e41-729a-48cb-bfc0-54a38b0dc96f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-461d8b90-d4fc-454e-911d-7fee7be073c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd04ee87178c14bcc860cdca885ea5685', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94eded8c-472b-4a0e-a390-113a914b266f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=957a0d26-cdd2-49bf-b411-89b73b8c3e75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=08e9db98-366d-49ea-aa38-b2d4e8a80e80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.729 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 08e9db98-366d-49ea-aa38-b2d4e8a80e80 in datapath 461d8b90-d4fc-454e-911d-7fee7be073c4 unbound from our chassis#033[00m
Nov 25 11:52:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.730 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 461d8b90-d4fc-454e-911d-7fee7be073c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:52:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.731 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0fad0476-3a2a-42d0-9674-67477b9c51b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.731 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 namespace which is not needed anymore#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 25 11:52:03 np0005535469 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Consumed 5.990s CPU time.
Nov 25 11:52:03 np0005535469 systemd-machined[216343]: Machine qemu-134-instance-0000006a terminated.
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : haproxy version is 2.8.14-c23fe91
Nov 25 11:52:03 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : path to executable is /usr/sbin/haproxy
Nov 25 11:52:03 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [WARNING]  (362554) : Exiting Master process...
Nov 25 11:52:03 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [WARNING]  (362554) : Exiting Master process...
Nov 25 11:52:03 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [ALERT]    (362554) : Current worker (362556) exited with code 143 (Terminated)
Nov 25 11:52:03 np0005535469 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [WARNING]  (362554) : All workers exited. Exiting... (0)
Nov 25 11:52:03 np0005535469 systemd[1]: libpod-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74.scope: Deactivated successfully.
Nov 25 11:52:03 np0005535469 podman[362998]: 2025-11-25 16:52:03.868692158 +0000 UTC m=+0.049012672 container died b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:52:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 177 op/s
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.900 254096 INFO nova.virt.libvirt.driver [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance destroyed successfully.#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.901 254096 DEBUG nova.objects.instance [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lazy-loading 'resources' on Instance uuid afaa5e41-729a-48cb-bfc0-54a38b0dc96f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74-userdata-shm.mount: Deactivated successfully.
Nov 25 11:52:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5c6aa58a7c081ad9b5f642b8ac1385a57de4dfb79d58b6ff8d6feed85edc246c-merged.mount: Deactivated successfully.
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.914 254096 DEBUG nova.virt.libvirt.vif [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1469913116',display_name='tempest-ServerMetadataTestJSON-server-1469913116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1469913116',id=106,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d04ee87178c14bcc860cdca885ea5685',ramdisk_id='',reservation_id='r-0xb1sqh5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model=
'virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-246494039',owner_user_name='tempest-ServerMetadataTestJSON-246494039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:52:03Z,user_data=None,user_id='406c96278eea4ca9ac09c960f9240fd6',uuid=afaa5e41-729a-48cb-bfc0-54a38b0dc96f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.916 254096 DEBUG nova.network.os_vif_util [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converting VIF {"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.918 254096 DEBUG nova.network.os_vif_util [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.918 254096 DEBUG os_vif [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.920 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08e9db98-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:03 np0005535469 nova_compute[254092]: 2025-11-25 16:52:03.925 254096 INFO os_vif [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36')#033[00m
Nov 25 11:52:03 np0005535469 podman[362998]: 2025-11-25 16:52:03.926862409 +0000 UTC m=+0.107182903 container cleanup b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 11:52:03 np0005535469 systemd[1]: libpod-conmon-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74.scope: Deactivated successfully.
Nov 25 11:52:04 np0005535469 podman[363046]: 2025-11-25 16:52:04.001630301 +0000 UTC m=+0.050539584 container remove b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.008 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5367d037-812b-4e46-8ce4-b0baf2e29170]: (4, ('Tue Nov 25 04:52:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 (b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74)\nb3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74\nTue Nov 25 04:52:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 (b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74)\nb3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad71313a-9403-4ea4-9f37-fe0daeace544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG nova.compute.manager [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-unplugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG oslo_concurrency.lockutils [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG oslo_concurrency.lockutils [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.010 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap461d8b90-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG oslo_concurrency.lockutils [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.011 254096 DEBUG nova.compute.manager [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] No waiting events found dispatching network-vif-unplugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.011 254096 DEBUG nova.compute.manager [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-unplugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:04 np0005535469 kernel: tap461d8b90-d0: left promiscuous mode
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1393e8-dd71-46f1-90bd-867faa8b40f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.041 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e386cd-f45e-4ec9-bff2-4e5d6b0a2b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.043 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17ffb486-ca04-4149-8a90-a2a1b5d52c4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.058 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[513e05f3-5191-4254-9732-0553e6baa101]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597522, 'reachable_time': 22817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363072, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 systemd[1]: run-netns-ovnmeta\x2d461d8b90\x2dd4fc\x2d454e\x2d911d\x2d7fee7be073c4.mount: Deactivated successfully.
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.062 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:52:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.062 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb1f741-b820-45a3-b111-682276878e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.452 254096 DEBUG nova.compute.manager [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG oslo_concurrency.lockutils [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG oslo_concurrency.lockutils [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG oslo_concurrency.lockutils [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG nova.compute.manager [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.454 254096 WARNING nova.compute.manager [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state active and task_state None.#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.621 254096 INFO nova.virt.libvirt.driver [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deleting instance files /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_del#033[00m
Nov 25 11:52:04 np0005535469 nova_compute[254092]: 2025-11-25 16:52:04.622 254096 INFO nova.virt.libvirt.driver [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deletion of /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_del complete#033[00m
Nov 25 11:52:05 np0005535469 nova_compute[254092]: 2025-11-25 16:52:05.053 254096 INFO nova.compute.manager [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 1.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:52:05 np0005535469 nova_compute[254092]: 2025-11-25 16:52:05.053 254096 DEBUG oslo.service.loopingcall [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:52:05 np0005535469 nova_compute[254092]: 2025-11-25 16:52:05.054 254096 DEBUG nova.compute.manager [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:52:05 np0005535469 nova_compute[254092]: 2025-11-25 16:52:05.054 254096 DEBUG nova.network.neutron [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:52:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 17 KiB/s wr, 167 op/s
Nov 25 11:52:06 np0005535469 nova_compute[254092]: 2025-11-25 16:52:06.222 254096 DEBUG nova.compute.manager [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:06 np0005535469 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG oslo_concurrency.lockutils [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:06 np0005535469 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG oslo_concurrency.lockutils [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:06 np0005535469 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG oslo_concurrency.lockutils [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:06 np0005535469 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG nova.compute.manager [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] No waiting events found dispatching network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:06 np0005535469 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 WARNING nova.compute.manager [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received unexpected event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:52:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.046 254096 DEBUG nova.network.neutron [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.084 254096 INFO nova.compute.manager [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 2.03 seconds to deallocate network for instance.#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.137 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.138 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.240 254096 DEBUG oslo_concurrency.processutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988681984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.685 254096 DEBUG oslo_concurrency.processutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.694 254096 DEBUG nova.compute.provider_tree [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.710 254096 DEBUG nova.scheduler.client.report [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.729 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.759 254096 INFO nova.scheduler.client.report [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Deleted allocations for instance afaa5e41-729a-48cb-bfc0-54a38b0dc96f#033[00m
Nov 25 11:52:07 np0005535469 nova_compute[254092]: 2025-11-25 16:52:07.812 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 205 op/s
Nov 25 11:52:08 np0005535469 nova_compute[254092]: 2025-11-25 16:52:08.356 254096 DEBUG nova.compute.manager [req-3d3bd817-c6e9-4406-9855-239ab0234900 req-5a749f42-029e-477c-992c-3c0ada830e81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-deleted-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:08 np0005535469 nova_compute[254092]: 2025-11-25 16:52:08.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:08 np0005535469 nova_compute[254092]: 2025-11-25 16:52:08.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 205 op/s
Nov 25 11:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:52:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:11 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:11Z|01074|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 11:52:11 np0005535469 nova_compute[254092]: 2025-11-25 16:52:11.362 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 KiB/s wr, 109 op/s
Nov 25 11:52:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:13.629 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:13 np0005535469 nova_compute[254092]: 2025-11-25 16:52:13.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 25 11:52:13 np0005535469 nova_compute[254092]: 2025-11-25 16:52:13.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:14 np0005535469 nova_compute[254092]: 2025-11-25 16:52:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 134 op/s
Nov 25 11:52:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:16Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 11:52:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:17 np0005535469 nova_compute[254092]: 2025-11-25 16:52:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:17 np0005535469 nova_compute[254092]: 2025-11-25 16:52:17.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 121 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 961 KiB/s rd, 12 KiB/s wr, 80 op/s
Nov 25 11:52:18 np0005535469 nova_compute[254092]: 2025-11-25 16:52:18.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:18 np0005535469 nova_compute[254092]: 2025-11-25 16:52:18.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:18 np0005535469 nova_compute[254092]: 2025-11-25 16:52:18.899 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089523.898389, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:18 np0005535469 nova_compute[254092]: 2025-11-25 16:52:18.900 254096 INFO nova.compute.manager [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:52:18 np0005535469 nova_compute[254092]: 2025-11-25 16:52:18.917 254096 DEBUG nova.compute.manager [None req-38b5ea01-7055-48c9-8ce1-846bec617862 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:18 np0005535469 nova_compute[254092]: 2025-11-25 16:52:18.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:19 np0005535469 nova_compute[254092]: 2025-11-25 16:52:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 121 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 12 KiB/s wr, 43 op/s
Nov 25 11:52:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.542 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.543 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.544 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.545 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 25 11:52:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927510687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:21 np0005535469 nova_compute[254092]: 2025-11-25 16:52:21.983 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.073 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.234 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.235 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3600MB free_disk=59.94268798828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.235 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.235 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.323 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7368c721-3e2a-4635-b2d8-5703d20438d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.323 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.324 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.356 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429199596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.815 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.820 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.831 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.851 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:52:22 np0005535469 nova_compute[254092]: 2025-11-25 16:52:22.851 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:23 np0005535469 nova_compute[254092]: 2025-11-25 16:52:23.188 254096 INFO nova.compute.manager [None req-50e7fe2f-b229-4c0b-b0d0-6cdbcdb3dd55 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Get console output#033[00m
Nov 25 11:52:23 np0005535469 nova_compute[254092]: 2025-11-25 16:52:23.193 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:52:23 np0005535469 podman[363144]: 2025-11-25 16:52:23.643391339 +0000 UTC m=+0.063193148 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:52:23 np0005535469 podman[363145]: 2025-11-25 16:52:23.662709494 +0000 UTC m=+0.081858765 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 25 11:52:23 np0005535469 podman[363146]: 2025-11-25 16:52:23.743326795 +0000 UTC m=+0.154941161 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:52:23 np0005535469 nova_compute[254092]: 2025-11-25 16:52:23.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 25 11:52:23 np0005535469 nova_compute[254092]: 2025-11-25 16:52:23.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:24 np0005535469 nova_compute[254092]: 2025-11-25 16:52:24.847 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:24 np0005535469 nova_compute[254092]: 2025-11-25 16:52:24.847 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:24 np0005535469 nova_compute[254092]: 2025-11-25 16:52:24.848 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:52:24 np0005535469 nova_compute[254092]: 2025-11-25 16:52:24.848 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.096 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.097 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.097 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.098 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.098 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.100 254096 INFO nova.compute.manager [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Terminating instance#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.102 254096 DEBUG nova.compute.manager [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.136 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.136 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.137 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.137 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:25 np0005535469 kernel: tap0cdb5ab1-84 (unregistering): left promiscuous mode
Nov 25 11:52:25 np0005535469 NetworkManager[48891]: <info>  [1764089545.1607] device (tap0cdb5ab1-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:52:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:25Z|01075|binding|INFO|Releasing lport 0cdb5ab1-8463-4494-a522-360862f2152e from this chassis (sb_readonly=0)
Nov 25 11:52:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:25Z|01076|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e down in Southbound
Nov 25 11:52:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:25Z|01077|binding|INFO|Removing iface tap0cdb5ab1-84 ovn-installed in OVS
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.181 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.182 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a unbound from our chassis#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.183 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b70d379-8b3d-4361-b11d-cafbb578194a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.184 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[449420f6-a9ec-4884-9f77-e22a0224d861]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.186 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace which is not needed anymore#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.186 254096 DEBUG nova.compute.manager [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.187 254096 DEBUG nova.compute.manager [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing instance network info cache due to event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.188 254096 DEBUG oslo_concurrency.lockutils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:25 np0005535469 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 25 11:52:25 np0005535469 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Consumed 13.555s CPU time.
Nov 25 11:52:25 np0005535469 systemd-machined[216343]: Machine qemu-135-instance-00000069 terminated.
Nov 25 11:52:25 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : haproxy version is 2.8.14-c23fe91
Nov 25 11:52:25 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : path to executable is /usr/sbin/haproxy
Nov 25 11:52:25 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [WARNING]  (362960) : Exiting Master process...
Nov 25 11:52:25 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [WARNING]  (362960) : Exiting Master process...
Nov 25 11:52:25 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [ALERT]    (362960) : Current worker (362967) exited with code 143 (Terminated)
Nov 25 11:52:25 np0005535469 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [WARNING]  (362960) : All workers exited. Exiting... (0)
Nov 25 11:52:25 np0005535469 systemd[1]: libpod-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a.scope: Deactivated successfully.
Nov 25 11:52:25 np0005535469 podman[363229]: 2025-11-25 16:52:25.338617007 +0000 UTC m=+0.051219053 container died d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.350 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance destroyed successfully.#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.351 254096 DEBUG nova.objects.instance [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a-userdata-shm.mount: Deactivated successfully.
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.370 254096 DEBUG nova.virt.libvirt.vif [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:52:02Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.371 254096 DEBUG nova.network.os_vif_util [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2e4c45e0bc9eac1e04b6c01183480c0c616473c9660555153fe1e9a6aebaab91-merged.mount: Deactivated successfully.
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.372 254096 DEBUG nova.network.os_vif_util [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.372 254096 DEBUG os_vif [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.375 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cdb5ab1-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.382 254096 INFO os_vif [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')#033[00m
Nov 25 11:52:25 np0005535469 podman[363229]: 2025-11-25 16:52:25.385610594 +0000 UTC m=+0.098212640 container cleanup d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:52:25 np0005535469 systemd[1]: libpod-conmon-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a.scope: Deactivated successfully.
Nov 25 11:52:25 np0005535469 podman[363282]: 2025-11-25 16:52:25.462596647 +0000 UTC m=+0.048294004 container remove d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.468 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[479435d1-c8d8-47a3-91ea-bc2a3c87e726]: (4, ('Tue Nov 25 04:52:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a)\nd18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a\nTue Nov 25 04:52:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a)\nd18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.471 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa90e212-16fb-4b56-88ce-ace2181ff827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.471 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:25 np0005535469 kernel: tap1b70d379-80: left promiscuous mode
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.487 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.492 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee16b44-79ec-4abc-a45b-59e7afecc93d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41b3613c-c1c6-43e6-a8fd-71d2eabc6f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de3d32b6-2898-41fb-9851-64bcc3f706fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[784b0305-cb6f-458b-94f0-617a41236a70]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597959, 'reachable_time': 41808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363302, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 systemd[1]: run-netns-ovnmeta\x2d1b70d379\x2d8b3d\x2d4361\x2db11d\x2dcafbb578194a.mount: Deactivated successfully.
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.529 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:52:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.529 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41a774-74e2-4879-a8eb-eb38f0e486ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.770 254096 DEBUG nova.compute.manager [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.770 254096 DEBUG oslo_concurrency.lockutils [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG oslo_concurrency.lockutils [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG oslo_concurrency.lockutils [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG nova.compute.manager [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG nova.compute.manager [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.782 254096 INFO nova.virt.libvirt.driver [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deleting instance files /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3_del#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.783 254096 INFO nova.virt.libvirt.driver [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deletion of /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3_del complete#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.833 254096 INFO nova.compute.manager [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.834 254096 DEBUG oslo.service.loopingcall [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.834 254096 DEBUG nova.compute.manager [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:52:25 np0005535469 nova_compute[254092]: 2025-11-25 16:52:25.834 254096 DEBUG nova.network.neutron [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:52:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Nov 25 11:52:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:26 np0005535469 nova_compute[254092]: 2025-11-25 16:52:26.571 254096 DEBUG nova.network.neutron [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:26 np0005535469 nova_compute[254092]: 2025-11-25 16:52:26.590 254096 INFO nova.compute.manager [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 0.76 seconds to deallocate network for instance.#033[00m
Nov 25 11:52:26 np0005535469 nova_compute[254092]: 2025-11-25 16:52:26.638 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:26 np0005535469 nova_compute[254092]: 2025-11-25 16:52:26.639 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:26 np0005535469 nova_compute[254092]: 2025-11-25 16:52:26.681 254096 DEBUG oslo_concurrency.processutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010538172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.120 254096 DEBUG oslo_concurrency.processutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.129 254096 DEBUG nova.compute.provider_tree [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.145 254096 DEBUG nova.scheduler.client.report [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.209 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.259 254096 INFO nova.scheduler.client.report [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 7368c721-3e2a-4635-b2d8-5703d20438d3#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.266 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.266 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.292 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.369 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.392 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.392 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.400 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.401 254096 INFO nova.compute.claims [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.512 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.551 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.575 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.576 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.576 254096 DEBUG oslo_concurrency.lockutils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.576 254096 DEBUG nova.network.neutron [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.592 254096 DEBUG nova.compute.utils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.796 254096 INFO nova.network.neutron [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Port 0cdb5ab1-8463-4494-a522-360862f2152e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.797 254096 DEBUG nova.network.neutron [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.817 254096 DEBUG oslo_concurrency.lockutils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.855 254096 DEBUG nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.855 254096 DEBUG oslo_concurrency.lockutils [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.857 254096 DEBUG oslo_concurrency.lockutils [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.857 254096 DEBUG oslo_concurrency.lockutils [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.857 254096 DEBUG nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.858 254096 WARNING nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.858 254096 DEBUG nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-deleted-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 103 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 KiB/s wr, 19 op/s
Nov 25 11:52:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557331089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.954 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.959 254096 DEBUG nova.compute.provider_tree [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:52:27 np0005535469 nova_compute[254092]: 2025-11-25 16:52:27.975 254096 DEBUG nova.scheduler.client.report [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.001 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.002 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.045 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.045 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.061 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.080 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.173 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.174 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.174 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Creating image(s)#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.194 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.216 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.235 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.239 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.307 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.308 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.309 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.309 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.327 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.332 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.403 254096 DEBUG nova.policy [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9deaf2356cda4c0cb2a52383b7f2e609', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33a2e508e63149889f0d5d945726522c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.578 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.637 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] resizing rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.722 254096 DEBUG nova.objects.instance [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'migration_context' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.737 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.737 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Ensure instance console log exists: /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.738 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.738 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.738 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:28 np0005535469 nova_compute[254092]: 2025-11-25 16:52:28.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.883 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.883 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 103 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 11 KiB/s wr, 13 op/s
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.910 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.928 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Successfully created port: 923a00bb-da3b-434a-b154-c338c92e0635 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.974 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.974 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.981 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:52:29 np0005535469 nova_compute[254092]: 2025-11-25 16:52:29.981 254096 INFO nova.compute.claims [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.105 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757331519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.591 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.597 254096 DEBUG nova.compute.provider_tree [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.617 254096 DEBUG nova.scheduler.client.report [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.642 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.643 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.685 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.686 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.703 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.721 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.813 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.814 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.815 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating image(s)#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.833 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.852 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.869 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.872 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.938 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.940 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.941 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.941 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.968 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:30 np0005535469 nova_compute[254092]: 2025-11-25 16:52:30.974 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.237 254096 DEBUG nova.policy [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9deaf2356cda4c0cb2a52383b7f2e609', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33a2e508e63149889f0d5d945726522c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.254761) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551254793, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 544, "num_deletes": 259, "total_data_size": 507220, "memory_usage": 519080, "flush_reason": "Manual Compaction"}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551258310, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 502561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42895, "largest_seqno": 43438, "table_properties": {"data_size": 499595, "index_size": 938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7186, "raw_average_key_size": 18, "raw_value_size": 493397, "raw_average_value_size": 1291, "num_data_blocks": 41, "num_entries": 382, "num_filter_entries": 382, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089520, "oldest_key_time": 1764089520, "file_creation_time": 1764089551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 3575 microseconds, and 1605 cpu microseconds.
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.258336) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 502561 bytes OK
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.258349) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259679) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259691) EVENT_LOG_v1 {"time_micros": 1764089551259688, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259702) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 504101, prev total WAL file size 504101, number of live WAL files 2.
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.260044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353037' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(490KB)], [95(7115KB)]
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551260077, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 7788699, "oldest_snapshot_seqno": -1}
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.279 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6345 keys, 7660194 bytes, temperature: kUnknown
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551319264, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7660194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7619260, "index_size": 23992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 166323, "raw_average_key_size": 26, "raw_value_size": 7506752, "raw_average_value_size": 1183, "num_data_blocks": 934, "num_entries": 6345, "num_filter_entries": 6345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.319501) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7660194 bytes
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.320809) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.5 rd, 129.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 6.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(30.7) write-amplify(15.2) OK, records in: 6879, records dropped: 534 output_compression: NoCompression
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.320828) EVENT_LOG_v1 {"time_micros": 1764089551320819, "job": 56, "event": "compaction_finished", "compaction_time_micros": 59247, "compaction_time_cpu_micros": 35308, "output_level": 6, "num_output_files": 1, "total_output_size": 7660194, "num_input_records": 6879, "num_output_records": 6345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551321018, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551322330, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.335 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] resizing rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.415 254096 DEBUG nova.objects.instance [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'migration_context' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.428 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.429 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Ensure instance console log exists: /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.430 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.430 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.430 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.721 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Successfully updated port: 923a00bb-da3b-434a-b154-c338c92e0635 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.740 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.740 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquired lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.740 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:52:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 88 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.947 254096 DEBUG nova.compute.manager [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-changed-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.948 254096 DEBUG nova.compute.manager [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Refreshing instance network info cache due to event network-changed-923a00bb-da3b-434a-b154-c338c92e0635. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:52:31 np0005535469 nova_compute[254092]: 2025-11-25 16:52:31.948 254096 DEBUG oslo_concurrency.lockutils [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:32 np0005535469 nova_compute[254092]: 2025-11-25 16:52:32.371 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:52:32 np0005535469 nova_compute[254092]: 2025-11-25 16:52:32.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:32 np0005535469 nova_compute[254092]: 2025-11-25 16:52:32.580 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Successfully created port: a74c23e6-4075-4b59-b36f-9d06bad062d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:52:32 np0005535469 nova_compute[254092]: 2025-11-25 16:52:32.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 88 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 11:52:33 np0005535469 nova_compute[254092]: 2025-11-25 16:52:33.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.165 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updating instance_info_cache with network_info: [{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.194 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Releasing lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.195 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance network_info: |[{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.196 254096 DEBUG oslo_concurrency.lockutils [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.196 254096 DEBUG nova.network.neutron [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Refreshing network info cache for port 923a00bb-da3b-434a-b154-c338c92e0635 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.202 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start _get_guest_xml network_info=[{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.212 254096 WARNING nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.227 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.228 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.233 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.234 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.235 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.235 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.236 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.237 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.237 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.238 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.239 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.239 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.240 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.240 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.241 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.242 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.248 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.312 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Successfully updated port: a74c23e6-4075-4b59-b36f-9d06bad062d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.344 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.345 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquired lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.345 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.484 254096 DEBUG nova.compute.manager [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-changed-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.485 254096 DEBUG nova.compute.manager [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Refreshing instance network info cache due to event network-changed-a74c23e6-4075-4b59-b36f-9d06bad062d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.486 254096 DEBUG oslo_concurrency.lockutils [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.637 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:52:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585207271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.810 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.840 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:34 np0005535469 nova_compute[254092]: 2025-11-25 16:52:34.845 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2045095226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.338 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.343 254096 DEBUG nova.virt.libvirt.vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-142320588',display_name='tempest-ServerRescueNegativeTestJSON-server-142320588',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-142320588',id=107,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-wqxsdvnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:28Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=6d70119e-e45b-4a12-893e-8d5a805ca8ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.344 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.344 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.346 254096 DEBUG nova.objects.instance [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.360 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <uuid>6d70119e-e45b-4a12-893e-8d5a805ca8ab</uuid>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <name>instance-0000006b</name>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-142320588</nova:name>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:52:34</nova:creationTime>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:user uuid="9deaf2356cda4c0cb2a52383b7f2e609">tempest-ServerRescueNegativeTestJSON-1769565225-project-member</nova:user>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:project uuid="33a2e508e63149889f0d5d945726522c">tempest-ServerRescueNegativeTestJSON-1769565225</nova:project>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <nova:port uuid="923a00bb-da3b-434a-b154-c338c92e0635">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <entry name="serial">6d70119e-e45b-4a12-893e-8d5a805ca8ab</entry>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <entry name="uuid">6d70119e-e45b-4a12-893e-8d5a805ca8ab</entry>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:4e:17:31"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <target dev="tap923a00bb-da"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/console.log" append="off"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:52:35 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:52:35 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:52:35 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:52:35 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.362 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Preparing to wait for external event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.363 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.364 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.365 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.366 254096 DEBUG nova.virt.libvirt.vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-142320588',display_name='tempest-ServerRescueNegativeTestJSON-server-142320588',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-142320588',id=107,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-wqxsdvnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:28Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=6d70119e-e45b-4a12-893e-8d5a805ca8ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.367 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.368 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.369 254096 DEBUG os_vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.371 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.371 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.374 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap923a00bb-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.375 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap923a00bb-da, col_values=(('external_ids', {'iface-id': '923a00bb-da3b-434a-b154-c338c92e0635', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:17:31', 'vm-uuid': '6d70119e-e45b-4a12-893e-8d5a805ca8ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:35 np0005535469 NetworkManager[48891]: <info>  [1764089555.3783] manager: (tap923a00bb-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.384 254096 INFO os_vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da')#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.443 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.444 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.444 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No VIF found with MAC fa:16:3e:4e:17:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.445 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Using config drive#033[00m
Nov 25 11:52:35 np0005535469 nova_compute[254092]: 2025-11-25 16:52:35.471 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 110 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.250 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.270 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Releasing lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.270 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance network_info: |[{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.271 254096 DEBUG oslo_concurrency.lockutils [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.271 254096 DEBUG nova.network.neutron [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Refreshing network info cache for port a74c23e6-4075-4b59-b36f-9d06bad062d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.274 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start _get_guest_xml network_info=[{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.279 254096 WARNING nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.285 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.285 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.289 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.289 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.290 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.290 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.290 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.295 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.339 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Creating config drive at /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.344 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyix4975 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.453 254096 DEBUG nova.network.neutron [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updated VIF entry in instance network info cache for port 923a00bb-da3b-434a-b154-c338c92e0635. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.454 254096 DEBUG nova.network.neutron [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updating instance_info_cache with network_info: [{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.468 254096 DEBUG oslo_concurrency.lockutils [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.496 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyix4975" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.530 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.541 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.731 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.732 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deleting local config drive /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config because it was imported into RBD.#033[00m
Nov 25 11:52:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991932512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:36 np0005535469 kernel: tap923a00bb-da: entered promiscuous mode
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.795 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:36 np0005535469 NetworkManager[48891]: <info>  [1764089556.7976] manager: (tap923a00bb-da): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Nov 25 11:52:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:36Z|01078|binding|INFO|Claiming lport 923a00bb-da3b-434a-b154-c338c92e0635 for this chassis.
Nov 25 11:52:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:36Z|01079|binding|INFO|923a00bb-da3b-434a-b154-c338c92e0635: Claiming fa:16:3e:4e:17:31 10.100.0.13
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.806 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:31 10.100.0.13'], port_security=['fa:16:3e:4e:17:31 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d70119e-e45b-4a12-893e-8d5a805ca8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=923a00bb-da3b-434a-b154-c338c92e0635) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.807 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 923a00bb-da3b-434a-b154-c338c92e0635 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e bound to our chassis#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.809 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e#033[00m
Nov 25 11:52:36 np0005535469 systemd-udevd[363873]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.822 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c04cb152-5f45-42b5-9549-82d5caa6cf25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.823 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf0cb5b9-c1 in ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.825 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf0cb5b9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba294cb9-9491-4223-97bf-91ebaa852d3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.826 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b135b342-7747-4ca8-902e-45a489e5b3af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 NetworkManager[48891]: <info>  [1764089556.8301] device (tap923a00bb-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:52:36 np0005535469 NetworkManager[48891]: <info>  [1764089556.8310] device (tap923a00bb-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:52:36 np0005535469 systemd-machined[216343]: New machine qemu-136-instance-0000006b.
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.837 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbf0251-84a1-4adf-a6d9-4a6f9a4c96c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.840 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.845 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:36 np0005535469 systemd[1]: Started Virtual Machine qemu-136-instance-0000006b.
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.865 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01377dd8-4deb-412c-b031-97ee5fa4036b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:36Z|01080|binding|INFO|Setting lport 923a00bb-da3b-434a-b154-c338c92e0635 ovn-installed in OVS
Nov 25 11:52:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:36Z|01081|binding|INFO|Setting lport 923a00bb-da3b-434a-b154-c338c92e0635 up in Southbound
Nov 25 11:52:36 np0005535469 nova_compute[254092]: 2025-11-25 16:52:36.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.909 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[770184cf-219a-43ef-ba22-2e4eaf65e8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 NetworkManager[48891]: <info>  [1764089556.9175] manager: (tapcf0cb5b9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Nov 25 11:52:36 np0005535469 systemd-udevd[363881]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.918 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af042b1a-8df2-4e18-80fc-25f56736bfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.957 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a28446dc-154d-45e7-b40a-585cd8f0d5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.960 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[082b0017-6e87-461d-b0f4-9b9cab05a3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:36 np0005535469 NetworkManager[48891]: <info>  [1764089556.9913] device (tapcf0cb5b9-c0): carrier: link connected
Nov 25 11:52:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.998 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4085f5-42e5-431e-9a25-a6f8f2b8ba9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4303c0-13a4-40f6-b299-18385511cc7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363932, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.028 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1ecbb3-b642-41b4-9ab3-74bd4ae505ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:ccc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601458, 'tstamp': 601458}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363933, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2796c31d-937b-4d9b-955e-379cf0e69ff5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363934, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.096 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71bc3146-de22-4c0e-8109-9bbd32145d7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.159 254096 DEBUG nova.compute.manager [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.159 254096 DEBUG oslo_concurrency.lockutils [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.159 254096 DEBUG oslo_concurrency.lockutils [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.160 254096 DEBUG oslo_concurrency.lockutils [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.160 254096 DEBUG nova.compute.manager [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Processing event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.168 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee753148-1e7b-4209-9d11-18f9e0261265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.169 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.170 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.170 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:37 np0005535469 kernel: tapcf0cb5b9-c0: entered promiscuous mode
Nov 25 11:52:37 np0005535469 NetworkManager[48891]: <info>  [1764089557.1724] manager: (tapcf0cb5b9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.174 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:37 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:37Z|01082|binding|INFO|Releasing lport 04d240b5-2178-4712-9ede-6d58532785de from this chassis (sb_readonly=0)
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.189 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.190 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c641285-bd5c-4dbd-ba72-0abc7ca09734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.190 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.pid.haproxy
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:52:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.191 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'env', 'PROCESS_TAG=haproxy-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:52:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609706744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.310 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.311 254096 DEBUG nova.virt.libvirt.vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name
='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:30Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.312 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.313 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.314 254096 DEBUG nova.objects.instance [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.333 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <uuid>1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</uuid>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <name>instance-0000006c</name>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-2043816774</nova:name>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:52:36</nova:creationTime>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:user uuid="9deaf2356cda4c0cb2a52383b7f2e609">tempest-ServerRescueNegativeTestJSON-1769565225-project-member</nova:user>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:project uuid="33a2e508e63149889f0d5d945726522c">tempest-ServerRescueNegativeTestJSON-1769565225</nova:project>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <nova:port uuid="a74c23e6-4075-4b59-b36f-9d06bad062d2">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <entry name="serial">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <entry name="uuid">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:09:6e:80"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <target dev="tapa74c23e6-40"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/console.log" append="off"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:52:37 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:52:37 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:52:37 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:52:37 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.339 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Preparing to wait for external event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.339 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.339 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.340 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.341 254096 DEBUG nova.virt.libvirt.vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner
_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:30Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.341 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.342 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.342 254096 DEBUG os_vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.343 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.343 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.344 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089557.331682, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.344 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Started (Lifecycle Event)#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.346 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.348 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa74c23e6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.348 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa74c23e6-40, col_values=(('external_ids', {'iface-id': 'a74c23e6-4075-4b59-b36f-9d06bad062d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:6e:80', 'vm-uuid': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.376 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:37 np0005535469 NetworkManager[48891]: <info>  [1764089557.3835] manager: (tapa74c23e6-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.385 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.387 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.390 254096 INFO os_vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40')#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.392 254096 INFO nova.virt.libvirt.driver [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance spawned successfully.#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.393 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.409 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.409 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089557.333787, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.410 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.416 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.417 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.418 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.418 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.418 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.419 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.427 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.430 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089557.3827226, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.430 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.462 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.466 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.475 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.475 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.476 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No VIF found with MAC fa:16:3e:09:6e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.476 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Using config drive#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.506 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.516 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.519 254096 INFO nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 9.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.520 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:37 np0005535469 podman[364012]: 2025-11-25 16:52:37.543739313 +0000 UTC m=+0.048242792 container create 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.580 254096 INFO nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 10.22 seconds to build instance.#033[00m
Nov 25 11:52:37 np0005535469 systemd[1]: Started libpod-conmon-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3.scope.
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.603 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:52:37 np0005535469 podman[364012]: 2025-11-25 16:52:37.519955017 +0000 UTC m=+0.024458516 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:52:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a699cce47daa3e0d0478b9596de6cbf41d67cddb37cef82117aeaf5f09e69db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:52:37 np0005535469 podman[364012]: 2025-11-25 16:52:37.628018673 +0000 UTC m=+0.132522172 container init 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:52:37 np0005535469 podman[364012]: 2025-11-25 16:52:37.633077981 +0000 UTC m=+0.137581460 container start 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:52:37 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : New worker (364051) forked
Nov 25 11:52:37 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : Loading success.
Nov 25 11:52:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 134 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.986 254096 DEBUG nova.network.neutron [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updated VIF entry in instance network info cache for port a74c23e6-4075-4b59-b36f-9d06bad062d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:52:37 np0005535469 nova_compute[254092]: 2025-11-25 16:52:37.987 254096 DEBUG nova.network.neutron [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.000 254096 DEBUG oslo_concurrency.lockutils [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.044 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating config drive at /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.048 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqf5luvq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.192 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqf5luvq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.213 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.216 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.382 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.385 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deleting local config drive /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config because it was imported into RBD.#033[00m
Nov 25 11:52:38 np0005535469 kernel: tapa74c23e6-40: entered promiscuous mode
Nov 25 11:52:38 np0005535469 NetworkManager[48891]: <info>  [1764089558.4536] manager: (tapa74c23e6-40): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Nov 25 11:52:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:38Z|01083|binding|INFO|Claiming lport a74c23e6-4075-4b59-b36f-9d06bad062d2 for this chassis.
Nov 25 11:52:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:38Z|01084|binding|INFO|a74c23e6-4075-4b59-b36f-9d06bad062d2: Claiming fa:16:3e:09:6e:80 10.100.0.14
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.461 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.464 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e bound to our chassis#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.467 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e#033[00m
Nov 25 11:52:38 np0005535469 NetworkManager[48891]: <info>  [1764089558.4808] device (tapa74c23e6-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:52:38 np0005535469 NetworkManager[48891]: <info>  [1764089558.4821] device (tapa74c23e6-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:52:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:38Z|01085|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 ovn-installed in OVS
Nov 25 11:52:38 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:38Z|01086|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 up in Southbound
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.486 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d4f7ad-5435-485a-bb95-3e2625f7fd89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:38 np0005535469 systemd-machined[216343]: New machine qemu-137-instance-0000006c.
Nov 25 11:52:38 np0005535469 systemd[1]: Started Virtual Machine qemu-137-instance-0000006c.
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.534 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[40062747-4c4f-495a-8142-0e32c6cd97e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.538 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[98faef3b-6cd2-48c6-ae0c-4e4aaf41b9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.573 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1df38c-d43a-4712-a28f-905ec5908923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.596 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8973859e-3f10-4513-83e1-a88032badf85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364123, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.619 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf02b92-719e-49b2-846d-ea2aef5a8d95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364126, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364126, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.622 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.627 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.728 254096 DEBUG nova.compute.manager [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.728 254096 DEBUG oslo_concurrency.lockutils [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.729 254096 DEBUG oslo_concurrency.lockutils [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.729 254096 DEBUG oslo_concurrency.lockutils [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.729 254096 DEBUG nova.compute.manager [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Processing event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:52:38 np0005535469 nova_compute[254092]: 2025-11-25 16:52:38.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.291 254096 DEBUG nova.compute.manager [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.292 254096 DEBUG oslo_concurrency.lockutils [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.293 254096 DEBUG oslo_concurrency.lockutils [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.293 254096 DEBUG oslo_concurrency.lockutils [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.293 254096 DEBUG nova.compute.manager [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] No waiting events found dispatching network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.294 254096 WARNING nova.compute.manager [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received unexpected event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.499 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.499 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.500 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.500 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.526 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.540 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.541 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Image id 8b512c8e-2281-41de-a668-eb983e174ba0 yields fingerprint 9e29bca11122733e2b34fccd9459097794a3a169 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.542 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] image 8b512c8e-2281-41de-a668-eb983e174ba0 at (/var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169): checking#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.542 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] image 8b512c8e-2281-41de-a668-eb983e174ba0 at (/var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.544 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.545 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] 6d70119e-e45b-4a12-893e-8d5a805ca8ab is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.545 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.545 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.546 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Active base files: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.546 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.547 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.547 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.547 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.548 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.548 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.788 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089559.7883976, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.789 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Started (Lifecycle Event)#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.791 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.794 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.798 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance spawned successfully.
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.798 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.814 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.820 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.824 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.825 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.826 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.826 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.827 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.827 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.855 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.856 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089559.7893817, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.856 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Paused (Lifecycle Event)
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.886 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.890 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089559.793624, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.890 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Resumed (Lifecycle Event)
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.896 254096 INFO nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 9.08 seconds to spawn the instance on the hypervisor.
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.896 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:52:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 134 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.905 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.908 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.930 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.951 254096 INFO nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 10.00 seconds to build instance.
Nov 25 11:52:39 np0005535469 nova_compute[254092]: 2025-11-25 16:52:39.963 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:52:40
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', '.mgr', 'vms']
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.347 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089545.345628, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.347 254096 INFO nova.compute.manager [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Stopped (Lifecycle Event)
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.367 254096 DEBUG nova.compute.manager [None req-9d0c4a12-f49a-4b80-9084-31d0e73a6f6c - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.899 254096 DEBUG nova.compute.manager [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.900 254096 DEBUG oslo_concurrency.lockutils [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.900 254096 DEBUG oslo_concurrency.lockutils [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.900 254096 DEBUG oslo_concurrency.lockutils [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.901 254096 DEBUG nova.compute.manager [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:52:40 np0005535469 nova_compute[254092]: 2025-11-25 16:52:40.901 254096 WARNING nova.compute.manager [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state active and task_state None.
Nov 25 11:52:40 np0005535469 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 11:52:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:41 np0005535469 nova_compute[254092]: 2025-11-25 16:52:41.453 254096 INFO nova.compute.manager [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Rescuing
Nov 25 11:52:41 np0005535469 nova_compute[254092]: 2025-11-25 16:52:41.454 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:52:41 np0005535469 nova_compute[254092]: 2025-11-25 16:52:41.454 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquired lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:52:41 np0005535469 nova_compute[254092]: 2025-11-25 16:52:41.454 254096 DEBUG nova.network.neutron [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:52:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Nov 25 11:52:42 np0005535469 nova_compute[254092]: 2025-11-25 16:52:42.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:52:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 25 11:52:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 25 11:52:42 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 25 11:52:43 np0005535469 nova_compute[254092]: 2025-11-25 16:52:43.131 254096 DEBUG nova.network.neutron [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:52:43 np0005535469 nova_compute[254092]: 2025-11-25 16:52:43.177 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Releasing lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:52:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:43.411 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:52:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:43.412 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:52:43 np0005535469 nova_compute[254092]: 2025-11-25 16:52:43.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:52:43 np0005535469 nova_compute[254092]: 2025-11-25 16:52:43.464 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 11:52:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 11:52:43 np0005535469 nova_compute[254092]: 2025-11-25 16:52:43.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:52:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 200 op/s
Nov 25 11:52:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:47 np0005535469 nova_compute[254092]: 2025-11-25 16:52:47.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:52:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 191 op/s
Nov 25 11:52:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 25 11:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 25 11:52:48 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 25 11:52:48 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 11:52:48 np0005535469 nova_compute[254092]: 2025-11-25 16:52:48.971 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:52:48 np0005535469 nova_compute[254092]: 2025-11-25 16:52:48.972 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:52:48 np0005535469 nova_compute[254092]: 2025-11-25 16:52:48.990 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:52:48 np0005535469 nova_compute[254092]: 2025-11-25 16:52:48.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:52:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 25 11:52:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 25 11:52:49 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.058 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.059 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.067 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.067 254096 INFO nova.compute.claims [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.210 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:52:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:52:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049670726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.689 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.695 254096 DEBUG nova.compute.provider_tree [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.710 254096 DEBUG nova.scheduler.client.report [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.733 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.734 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.783 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.784 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.807 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.824 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:52:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 84 op/s
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.909 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.910 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.911 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Creating image(s)
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.938 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.967 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.992 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:52:49 np0005535469 nova_compute[254092]: 2025-11-25 16:52:49.996 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:52:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 25 11:52:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 25 11:52:50 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.083 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.085 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.086 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.086 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.124 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.129 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b12625ea-31bf-4599-a248-4c6ced8e59c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.175 254096 DEBUG nova.policy [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:52:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:50.414 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.427 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b12625ea-31bf-4599-a248-4c6ced8e59c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.486 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.598 254096 DEBUG nova.objects.instance [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.610 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.611 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Ensure instance console log exists: /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.612 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.612 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:50 np0005535469 nova_compute[254092]: 2025-11-25 16:52:50.612 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:50Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:17:31 10.100.0.13
Nov 25 11:52:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:50Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:17:31 10.100.0.13
Nov 25 11:52:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664942086670574 of space, bias 1.0, pg target 0.19994826260011722 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:52:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 314 active+clean; 182 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 24 MiB/s wr, 295 op/s
Nov 25 11:52:52 np0005535469 nova_compute[254092]: 2025-11-25 16:52:52.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:52 np0005535469 nova_compute[254092]: 2025-11-25 16:52:52.448 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Successfully created port: 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 11:52:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:53Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:6e:80 10.100.0.14
Nov 25 11:52:53 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:53Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:6e:80 10.100.0.14
Nov 25 11:52:53 np0005535469 nova_compute[254092]: 2025-11-25 16:52:53.532 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 25 11:52:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 314 active+clean; 182 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 24 MiB/s wr, 263 op/s
Nov 25 11:52:53 np0005535469 nova_compute[254092]: 2025-11-25 16:52:53.954 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Successfully updated port: 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:52:53 np0005535469 nova_compute[254092]: 2025-11-25 16:52:53.970 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:53 np0005535469 nova_compute[254092]: 2025-11-25 16:52:53.970 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:53 np0005535469 nova_compute[254092]: 2025-11-25 16:52:53.970 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:52:53 np0005535469 nova_compute[254092]: 2025-11-25 16:52:53.996 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:54 np0005535469 nova_compute[254092]: 2025-11-25 16:52:54.160 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:52:54 np0005535469 nova_compute[254092]: 2025-11-25 16:52:54.234 254096 DEBUG nova.compute.manager [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:54 np0005535469 nova_compute[254092]: 2025-11-25 16:52:54.234 254096 DEBUG nova.compute.manager [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing instance network info cache due to event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:52:54 np0005535469 nova_compute[254092]: 2025-11-25 16:52:54.234 254096 DEBUG oslo_concurrency.lockutils [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:52:54 np0005535469 podman[364358]: 2025-11-25 16:52:54.653421722 +0000 UTC m=+0.061151942 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:52:54 np0005535469 podman[364357]: 2025-11-25 16:52:54.653581997 +0000 UTC m=+0.065206473 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 25 11:52:54 np0005535469 podman[364359]: 2025-11-25 16:52:54.719825416 +0000 UTC m=+0.127817564 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 11:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:52:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619463625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:52:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619463625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.338 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.368 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.368 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance network_info: |[{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.369 254096 DEBUG oslo_concurrency.lockutils [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.369 254096 DEBUG nova.network.neutron [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.372 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start _get_guest_xml network_info=[{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.375 254096 WARNING nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.380 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.381 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.384 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.384 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.385 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.385 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.385 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.387 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.387 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.387 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.388 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.391 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:55 np0005535469 kernel: tapa74c23e6-40 (unregistering): left promiscuous mode
Nov 25 11:52:55 np0005535469 NetworkManager[48891]: <info>  [1764089575.8082] device (tapa74c23e6-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:55Z|01087|binding|INFO|Releasing lport a74c23e6-4075-4b59-b36f-9d06bad062d2 from this chassis (sb_readonly=0)
Nov 25 11:52:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:55Z|01088|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 down in Southbound
Nov 25 11:52:55 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:55Z|01089|binding|INFO|Removing iface tapa74c23e6-40 ovn-installed in OVS
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.825 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e unbound from our chassis#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.829 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e#033[00m
Nov 25 11:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079908265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.850 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b72f23c-4f2e-463a-8604-95e8a5d7734b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:55 np0005535469 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 25 11:52:55 np0005535469 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Consumed 13.856s CPU time.
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.864 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:55 np0005535469 systemd-machined[216343]: Machine qemu-137-instance-0000006c terminated.
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.883 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b7dde4-ecf5-435a-b43b-0eb7bfc25efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.886 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.887 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae54ca1-22e9-40cc-8640-ebc5d095c17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.890 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 237 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 MiB/s wr, 304 op/s
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.914 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f802f10c-1495-42e1-91f4-613dd12c38b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d82fceb7-7404-4349-9532-b2b0320cb759]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364470, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdf8745-a1b3-4b65-97dd-94980aa5c0b4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364471, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364471, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.946 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:55 np0005535469 nova_compute[254092]: 2025-11-25 16:52:55.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.952 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.952 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.952 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.953 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 25 11:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 25 11:52:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 25 11:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910056781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.335 254096 DEBUG nova.compute.manager [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.338 254096 DEBUG oslo_concurrency.lockutils [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.338 254096 DEBUG oslo_concurrency.lockutils [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.338 254096 DEBUG oslo_concurrency.lockutils [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.339 254096 DEBUG nova.compute.manager [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.339 254096 WARNING nova.compute.manager [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.339 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.341 254096 DEBUG nova.virt.libvirt.vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:49Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.341 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.342 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.343 254096 DEBUG nova.objects.instance [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <uuid>b12625ea-31bf-4599-a248-4c6ced8e59c2</uuid>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <name>instance-0000006d</name>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1982563586</nova:name>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:52:55</nova:creationTime>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <nova:port uuid="1ef97ad6-0798-4b3d-a9cc-562f9526ae38">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <entry name="serial">b12625ea-31bf-4599-a248-4c6ced8e59c2</entry>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <entry name="uuid">b12625ea-31bf-4599-a248-4c6ced8e59c2</entry>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b12625ea-31bf-4599-a248-4c6ced8e59c2_disk">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:5e:77:80"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <target dev="tap1ef97ad6-07"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/console.log" append="off"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:52:56 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:52:56 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:52:56 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:52:56 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Preparing to wait for external event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.359 254096 DEBUG nova.virt.libvirt.vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:49Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.359 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.360 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.360 254096 DEBUG os_vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.364 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ef97ad6-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.365 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ef97ad6-07, col_values=(('external_ids', {'iface-id': '1ef97ad6-0798-4b3d-a9cc-562f9526ae38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:77:80', 'vm-uuid': 'b12625ea-31bf-4599-a248-4c6ced8e59c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:56 np0005535469 NetworkManager[48891]: <info>  [1764089576.3667] manager: (tap1ef97ad6-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.375 254096 INFO os_vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07')#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.435 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.435 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.435 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:5e:77:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.436 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Using config drive#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.466 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.546 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance shutdown successfully after 13 seconds.#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.554 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance destroyed successfully.#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.555 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'numa_topology' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.570 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Attempting rescue#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.572 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.577 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.577 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating image(s)#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.606 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.609 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.644 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.678 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.685 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.800 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.801 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.802 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.803 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.835 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:56 np0005535469 nova_compute[254092]: 2025-11-25 16:52:56.844 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.044 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Creating config drive at /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.050 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjz7a6jei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.155 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.156 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'migration_context' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.168 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.169 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start _get_guest_xml network_info=[{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "vif_mac": "fa:16:3e:09:6e:80"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.169 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'resources' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.187 254096 WARNING nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.190 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.191 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.193 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.194 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.194 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.194 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.199 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjz7a6jei" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.222 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.226 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.291 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.419 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.419 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deleting local config drive /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config because it was imported into RBD.#033[00m
Nov 25 11:52:57 np0005535469 kernel: tap1ef97ad6-07: entered promiscuous mode
Nov 25 11:52:57 np0005535469 systemd-udevd[364444]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:52:57 np0005535469 NetworkManager[48891]: <info>  [1764089577.4651] manager: (tap1ef97ad6-07): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Nov 25 11:52:57 np0005535469 NetworkManager[48891]: <info>  [1764089577.4847] device (tap1ef97ad6-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:52:57 np0005535469 NetworkManager[48891]: <info>  [1764089577.4853] device (tap1ef97ad6-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:52:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:57Z|01090|binding|INFO|Claiming lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for this chassis.
Nov 25 11:52:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:57Z|01091|binding|INFO|1ef97ad6-0798-4b3d-a9cc-562f9526ae38: Claiming fa:16:3e:5e:77:80 10.100.0.8
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.525 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.536 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.537 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 bound to our chassis#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.538 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.549 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc337e0-3aac-4134-8361-ea41d338f7dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.550 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa7a2aa2-91 in ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.552 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa7a2aa2-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.552 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3acaf7d0-eb5f-473b-aa0d-aaee3fe2249b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c25b9e3-04fb-4f73-8d32-cdd5c095b15c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.564 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a154f111-dc0c-41c9-8667-462ee3cb7efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 systemd-machined[216343]: New machine qemu-138-instance-0000006d.
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.579 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f910c861-116c-4d31-995a-eb127b2ee5f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 systemd[1]: Started Virtual Machine qemu-138-instance-0000006d.
Nov 25 11:52:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:57Z|01092|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 ovn-installed in OVS
Nov 25 11:52:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:57Z|01093|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 up in Southbound
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.608 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0182ad-d796-4a65-8293-152855d174d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6092e4ba-174b-4641-951a-6b8ac26b7969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 NetworkManager[48891]: <info>  [1764089577.6137] manager: (tapaa7a2aa2-90): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.650 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2179fd0c-5613-4cbd-bc39-675a8cbefa69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.658 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfe3b72-ed8d-4fa0-bc6c-cc0f582d741f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 NetworkManager[48891]: <info>  [1764089577.6818] device (tapaa7a2aa2-90): carrier: link connected
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.687 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[45d3c66d-1d3f-4766-81df-926a1b5a4520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78dfef53-3911-4112-86a5-10aba1b93660]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603527, 'reachable_time': 20731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364729, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.722 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ca012-0762-412c-8e52-721c378686a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:f6a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603527, 'tstamp': 603527}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364730, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[485b82b4-e17e-4e13-8bc6-712c31c94af0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603527, 'reachable_time': 20731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364731, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511631983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53ed4055-7fa5-4426-a52e-7c46e30d41a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.773 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.774 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.821 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18482dfb-7ec3-45bc-a03c-61bde883e6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.822 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.822 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa7a2aa2-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.825 254096 DEBUG nova.network.neutron [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated VIF entry in instance network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:52:57 np0005535469 kernel: tapaa7a2aa2-90: entered promiscuous mode
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.826 254096 DEBUG nova.network.neutron [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:52:57 np0005535469 NetworkManager[48891]: <info>  [1764089577.8275] manager: (tapaa7a2aa2-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.832 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa7a2aa2-90, col_values=(('external_ids', {'iface-id': '4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.833 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:57Z|01094|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.842 254096 DEBUG oslo_concurrency.lockutils [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 nova_compute[254092]: 2025-11-25 16:52:57.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.856 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8a7c7f-0b41-45b7-9013-142d8066e64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.858 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:52:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.859 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'env', 'PROCESS_TAG=haproxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:52:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 246 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 MiB/s wr, 362 op/s
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.179 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089578.1792219, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.180 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Started (Lifecycle Event)#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.196 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.200 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089578.179442, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.200 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:52:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471538734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:58 np0005535469 podman[364831]: 2025-11-25 16:52:58.214901756 +0000 UTC m=+0.054592574 container create 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.223 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.225 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.226 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:58 np0005535469 systemd[1]: Started libpod-conmon-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a.scope.
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.261 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:52:58 np0005535469 podman[364831]: 2025-11-25 16:52:58.185319542 +0000 UTC m=+0.025010390 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.279 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:52:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:52:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3934ea0b4d9ff83643ea9a1fbc52010cb7a732e5e5a2e2bfce4873181e16e161/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:52:58 np0005535469 podman[364831]: 2025-11-25 16:52:58.314519214 +0000 UTC m=+0.154210062 container init 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:52:58 np0005535469 podman[364831]: 2025-11-25 16:52:58.320152807 +0000 UTC m=+0.159843625 container start 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 11:52:58 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : New worker (364860) forked
Nov 25 11:52:58 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : Loading success.
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.441 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.442 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.445 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.445 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 WARNING nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state active and task_state rescuing.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.447 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.447 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.447 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Processing event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.449 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.449 254096 WARNING nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.450 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.454 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.454 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089578.4539948, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.454 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.458 254096 INFO nova.virt.libvirt.driver [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance spawned successfully.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.458 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.483 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.488 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.491 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.491 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.492 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.492 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.493 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.493 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.523 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.556 254096 INFO nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 8.65 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.557 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:52:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:52:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350011716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.632 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.633 254096 DEBUG nova.virt.libvirt.vif [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:39Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "vif_mac": "fa:16:3e:09:6e:80"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.634 254096 DEBUG nova.network.os_vif_util [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "vif_mac": "fa:16:3e:09:6e:80"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.635 254096 DEBUG nova.network.os_vif_util [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.636 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.672 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <uuid>1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</uuid>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <name>instance-0000006c</name>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-2043816774</nova:name>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:52:57</nova:creationTime>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:user uuid="9deaf2356cda4c0cb2a52383b7f2e609">tempest-ServerRescueNegativeTestJSON-1769565225-project-member</nova:user>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:project uuid="33a2e508e63149889f0d5d945726522c">tempest-ServerRescueNegativeTestJSON-1769565225</nova:project>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <nova:port uuid="a74c23e6-4075-4b59-b36f-9d06bad062d2">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <entry name="serial">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <entry name="uuid">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <target dev="vdb" bus="virtio"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:09:6e:80"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <target dev="tapa74c23e6-40"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/console.log" append="off"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:52:58 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:52:58 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:52:58 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:52:58 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.678 254096 INFO nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 9.64 seconds to build instance.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.682 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance destroyed successfully.#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.690 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No VIF found with MAC fa:16:3e:09:6e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.741 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Using config drive#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.763 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.785 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.814 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'keypairs' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:52:58 np0005535469 nova_compute[254092]: 2025-11-25 16:52:58.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.434 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating config drive at /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.440 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9pj45tr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.583 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9pj45tr" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.607 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.611 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.762 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.763 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deleting local config drive /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue because it was imported into RBD.#033[00m
Nov 25 11:52:59 np0005535469 kernel: tapa74c23e6-40: entered promiscuous mode
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:59 np0005535469 NetworkManager[48891]: <info>  [1764089579.8151] manager: (tapa74c23e6-40): new Tun device (/org/freedesktop/NetworkManager/Devices/451)
Nov 25 11:52:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:59Z|01095|binding|INFO|Claiming lport a74c23e6-4075-4b59-b36f-9d06bad062d2 for this chassis.
Nov 25 11:52:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:59Z|01096|binding|INFO|a74c23e6-4075-4b59-b36f-9d06bad062d2: Claiming fa:16:3e:09:6e:80 10.100.0.14
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.822 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.824 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e bound to our chassis#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.825 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:59Z|01097|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 ovn-installed in OVS
Nov 25 11:52:59 np0005535469 ovn_controller[153477]: 2025-11-25T16:52:59Z|01098|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 up in Southbound
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:59 np0005535469 systemd-machined[216343]: New machine qemu-139-instance-0000006c.
Nov 25 11:52:59 np0005535469 systemd-udevd[364959]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.853 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04eb6876-4188-4ab1-9917-113bcc06fe58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:59 np0005535469 systemd[1]: Started Virtual Machine qemu-139-instance-0000006c.
Nov 25 11:52:59 np0005535469 NetworkManager[48891]: <info>  [1764089579.8726] device (tapa74c23e6-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:52:59 np0005535469 NetworkManager[48891]: <info>  [1764089579.8733] device (tapa74c23e6-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.888 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9648fc-2ae4-4c08-a541-8c2802dfbfff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.891 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0802b3-0088-4a4e-9355-ff161d2c6df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 246 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 19 MiB/s wr, 294 op/s
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.919 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9405543f-6bde-4f93-b0a0-e29467f72c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.938 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e934518c-e365-49fa-a2f3-a0115558eea8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364971, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fa2561-990c-40b9-950c-38274feb4907]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364973, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364973, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.953 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:59 np0005535469 nova_compute[254092]: 2025-11-25 16:52:59.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:52:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.377 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.377 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089580.3768363, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.378 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.382 254096 DEBUG nova.compute.manager [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.417 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.424 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.453 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089580.3777165, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.454 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Started (Lifecycle Event)#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.485 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.755 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 WARNING nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:00 np0005535469 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 WARNING nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state rescued and task_state None.#033[00m
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:53:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev be8c9f4c-9f2d-44e9-ba1b-db649aca7e48 does not exist
Nov 25 11:53:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 994401bf-b5e3-4f31-b741-74f7760a5f19 does not exist
Nov 25 11:53:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 642384ab-10fe-42e8-95d5-7b6c25e77b95 does not exist
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:53:01 np0005535469 nova_compute[254092]: 2025-11-25 16:53:01.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:53:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:53:01 np0005535469 podman[365305]: 2025-11-25 16:53:01.902561168 +0000 UTC m=+0.043487442 container create 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 11:53:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 256 op/s
Nov 25 11:53:01 np0005535469 systemd[1]: Started libpod-conmon-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope.
Nov 25 11:53:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:01 np0005535469 podman[365305]: 2025-11-25 16:53:01.88347365 +0000 UTC m=+0.024399954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:53:01 np0005535469 podman[365305]: 2025-11-25 16:53:01.991946607 +0000 UTC m=+0.132872911 container init 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:53:01 np0005535469 podman[365305]: 2025-11-25 16:53:01.998365303 +0000 UTC m=+0.139291577 container start 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:53:02 np0005535469 podman[365305]: 2025-11-25 16:53:02.001615481 +0000 UTC m=+0.142541765 container attach 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:53:02 np0005535469 fervent_kowalevski[365320]: 167 167
Nov 25 11:53:02 np0005535469 systemd[1]: libpod-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope: Deactivated successfully.
Nov 25 11:53:02 np0005535469 conmon[365320]: conmon 1da874b0c3bab412ea8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope/container/memory.events
Nov 25 11:53:02 np0005535469 podman[365305]: 2025-11-25 16:53:02.004739866 +0000 UTC m=+0.145666140 container died 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 11:53:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1e6a1cf0eb6b20f89fff21931645a50362c5110e573b08f33850ff54d5314dbc-merged.mount: Deactivated successfully.
Nov 25 11:53:02 np0005535469 podman[365305]: 2025-11-25 16:53:02.042478291 +0000 UTC m=+0.183404565 container remove 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:53:02 np0005535469 systemd[1]: libpod-conmon-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope: Deactivated successfully.
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.092 254096 INFO nova.compute.manager [None req-bcfe8ec3-c1ae-45c4-b42e-5ebd3ff3d938 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Pausing#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.093 254096 DEBUG nova.objects.instance [None req-bcfe8ec3-c1ae-45c4-b42e-5ebd3ff3d938 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'flavor' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.156 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089582.156158, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.156 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.161 254096 DEBUG nova.compute.manager [None req-bcfe8ec3-c1ae-45c4-b42e-5ebd3ff3d938 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.194 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.204 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:02 np0005535469 podman[365344]: 2025-11-25 16:53:02.23158351 +0000 UTC m=+0.038441016 container create 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:53:02 np0005535469 systemd[1]: Started libpod-conmon-08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f.scope.
Nov 25 11:53:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:53:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:53:02 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:53:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:02 np0005535469 podman[365344]: 2025-11-25 16:53:02.309538609 +0000 UTC m=+0.116396135 container init 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:53:02 np0005535469 podman[365344]: 2025-11-25 16:53:02.21650855 +0000 UTC m=+0.023366056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:53:02 np0005535469 podman[365344]: 2025-11-25 16:53:02.31992725 +0000 UTC m=+0.126784766 container start 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:53:02 np0005535469 podman[365344]: 2025-11-25 16:53:02.324356401 +0000 UTC m=+0.131213927 container attach 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:53:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:02Z|01099|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 11:53:02 np0005535469 NetworkManager[48891]: <info>  [1764089582.6024] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Nov 25 11:53:02 np0005535469 NetworkManager[48891]: <info>  [1764089582.6032] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Nov 25 11:53:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:02Z|01100|binding|INFO|Releasing lport 04d240b5-2178-4712-9ede-6d58532785de from this chassis (sb_readonly=0)
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:02Z|01101|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 11:53:02 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:02Z|01102|binding|INFO|Releasing lport 04d240b5-2178-4712-9ede-6d58532785de from this chassis (sb_readonly=0)
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.908 254096 DEBUG nova.compute.manager [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG nova.compute.manager [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing instance network info cache due to event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG oslo_concurrency.lockutils [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG oslo_concurrency.lockutils [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:53:02 np0005535469 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG nova.network.neutron [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:53:03 np0005535469 modest_lewin[365361]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:53:03 np0005535469 modest_lewin[365361]: --> relative data size: 1.0
Nov 25 11:53:03 np0005535469 modest_lewin[365361]: --> All data devices are unavailable
Nov 25 11:53:03 np0005535469 systemd[1]: libpod-08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f.scope: Deactivated successfully.
Nov 25 11:53:03 np0005535469 podman[365344]: 2025-11-25 16:53:03.301916567 +0000 UTC m=+1.108774063 container died 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:53:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e-merged.mount: Deactivated successfully.
Nov 25 11:53:03 np0005535469 podman[365344]: 2025-11-25 16:53:03.376241776 +0000 UTC m=+1.183099302 container remove 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:53:03 np0005535469 systemd[1]: libpod-conmon-08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f.scope: Deactivated successfully.
Nov 25 11:53:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 256 op/s
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.046015658 +0000 UTC m=+0.043620777 container create cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:53:04 np0005535469 systemd[1]: Started libpod-conmon-cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967.scope.
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.025401367 +0000 UTC m=+0.023006516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:53:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.144500944 +0000 UTC m=+0.142106093 container init cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.15537759 +0000 UTC m=+0.152982719 container start cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.158263168 +0000 UTC m=+0.155868307 container attach cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 11:53:04 np0005535469 lucid_swanson[365555]: 167 167
Nov 25 11:53:04 np0005535469 systemd[1]: libpod-cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967.scope: Deactivated successfully.
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.164316803 +0000 UTC m=+0.161921922 container died cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 11:53:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b827e6d7366a66443a4239c054e21080e50c9fcb514b12eb308b3f57b3a0e621-merged.mount: Deactivated successfully.
Nov 25 11:53:04 np0005535469 podman[365539]: 2025-11-25 16:53:04.196000103 +0000 UTC m=+0.193605222 container remove cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:53:04 np0005535469 systemd[1]: libpod-conmon-cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967.scope: Deactivated successfully.
Nov 25 11:53:04 np0005535469 podman[365579]: 2025-11-25 16:53:04.401088366 +0000 UTC m=+0.052614420 container create 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:53:04 np0005535469 systemd[1]: Started libpod-conmon-20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476.scope.
Nov 25 11:53:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:04 np0005535469 podman[365579]: 2025-11-25 16:53:04.369420916 +0000 UTC m=+0.020946980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:53:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:04 np0005535469 podman[365579]: 2025-11-25 16:53:04.504802235 +0000 UTC m=+0.156328279 container init 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:53:04 np0005535469 podman[365579]: 2025-11-25 16:53:04.510808358 +0000 UTC m=+0.162334402 container start 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:53:04 np0005535469 podman[365579]: 2025-11-25 16:53:04.514755066 +0000 UTC m=+0.166281110 container attach 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.571 254096 INFO nova.compute.manager [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Unpausing#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.572 254096 DEBUG nova.objects.instance [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'flavor' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.601 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089584.6014717, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.602 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:53:04 np0005535469 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.605 254096 DEBUG nova.virt.libvirt.guest [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.605 254096 DEBUG nova.compute.manager [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.622 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.625 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.647 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.711 254096 DEBUG nova.network.neutron [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated VIF entry in instance network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.712 254096 DEBUG nova.network.neutron [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:04 np0005535469 nova_compute[254092]: 2025-11-25 16:53:04.733 254096 DEBUG oslo_concurrency.lockutils [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]: {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:    "0": [
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:        {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "devices": [
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "/dev/loop3"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            ],
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_name": "ceph_lv0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_size": "21470642176",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "name": "ceph_lv0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "tags": {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cluster_name": "ceph",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.crush_device_class": "",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.encrypted": "0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osd_id": "0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.type": "block",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.vdo": "0"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            },
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "type": "block",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "vg_name": "ceph_vg0"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:        }
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:    ],
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:    "1": [
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:        {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "devices": [
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "/dev/loop4"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            ],
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_name": "ceph_lv1",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_size": "21470642176",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "name": "ceph_lv1",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "tags": {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cluster_name": "ceph",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.crush_device_class": "",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.encrypted": "0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osd_id": "1",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.type": "block",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.vdo": "0"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            },
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "type": "block",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "vg_name": "ceph_vg1"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:        }
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:    ],
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:    "2": [
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:        {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "devices": [
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "/dev/loop5"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            ],
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_name": "ceph_lv2",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_size": "21470642176",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "name": "ceph_lv2",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "tags": {
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.cluster_name": "ceph",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.crush_device_class": "",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.encrypted": "0",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osd_id": "2",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.type": "block",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:                "ceph.vdo": "0"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            },
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "type": "block",
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:            "vg_name": "ceph_vg2"
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:        }
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]:    ]
Nov 25 11:53:05 np0005535469 suspicious_albattani[365595]: }
Nov 25 11:53:05 np0005535469 systemd[1]: libpod-20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476.scope: Deactivated successfully.
Nov 25 11:53:05 np0005535469 podman[365579]: 2025-11-25 16:53:05.305406361 +0000 UTC m=+0.956932405 container died 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:53:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb-merged.mount: Deactivated successfully.
Nov 25 11:53:05 np0005535469 podman[365579]: 2025-11-25 16:53:05.363022578 +0000 UTC m=+1.014548622 container remove 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:53:05 np0005535469 systemd[1]: libpod-conmon-20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476.scope: Deactivated successfully.
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.568 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.568 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.569 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.569 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.569 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.571 254096 INFO nova.compute.manager [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Terminating instance#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.572 254096 DEBUG nova.compute.manager [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:53:05 np0005535469 kernel: tapa74c23e6-40 (unregistering): left promiscuous mode
Nov 25 11:53:05 np0005535469 NetworkManager[48891]: <info>  [1764089585.6228] device (tapa74c23e6-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:05Z|01103|binding|INFO|Releasing lport a74c23e6-4075-4b59-b36f-9d06bad062d2 from this chassis (sb_readonly=0)
Nov 25 11:53:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:05Z|01104|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 down in Southbound
Nov 25 11:53:05 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:05Z|01105|binding|INFO|Removing iface tapa74c23e6-40 ovn-installed in OVS
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.692 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.693 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e unbound from our chassis#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 25 11:53:05 np0005535469 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006c.scope: Consumed 5.752s CPU time.
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[61496df6-a8f4-4a18-bd73-dced9b50c6c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:05 np0005535469 systemd-machined[216343]: Machine qemu-139-instance-0000006c terminated.
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.750 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a43505-2f75-41b4-80d4-8e25967677b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.753 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[72d79081-513a-457f-9a2a-13367c58bbf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.792 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7a693d-7d69-4c81-b61c-ae1bb2f37176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.811 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance destroyed successfully.#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.812 254096 DEBUG nova.objects.instance [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'resources' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.817 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65adf9e9-029c-4782-9334-4aebb376abd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365734, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.825 254096 DEBUG nova.virt.libvirt.vif [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:53:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:00Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.825 254096 DEBUG nova.network.os_vif_util [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.826 254096 DEBUG nova.network.os_vif_util [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.827 254096 DEBUG os_vif [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.829 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa74c23e6-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.840 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ec8815-eb21-4aa4-8cdc-794dc3eb09ce]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365742, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365742, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.841 254096 INFO os_vif [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40')#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.842 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.844 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.844 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.845 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.845 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.868 254096 DEBUG nova.compute.manager [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.868 254096 DEBUG oslo_concurrency.lockutils [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.869 254096 DEBUG oslo_concurrency.lockutils [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.869 254096 DEBUG oslo_concurrency.lockutils [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.870 254096 DEBUG nova.compute.manager [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:05 np0005535469 nova_compute[254092]: 2025-11-25 16:53:05.870 254096 DEBUG nova.compute.manager [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:53:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.5 MiB/s wr, 247 op/s
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.094595838 +0000 UTC m=+0.055229102 container create acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 11:53:06 np0005535469 systemd[1]: Started libpod-conmon-acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e.scope.
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.07295667 +0000 UTC m=+0.033589944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:53:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.188835679 +0000 UTC m=+0.149468953 container init acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.197213886 +0000 UTC m=+0.157847160 container start acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:53:06 np0005535469 elastic_morse[365816]: 167 167
Nov 25 11:53:06 np0005535469 systemd[1]: libpod-acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e.scope: Deactivated successfully.
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.204207427 +0000 UTC m=+0.164840711 container attach acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.205911153 +0000 UTC m=+0.166544407 container died acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:53:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-eb01c780be59edbb774d83770dd6f2351374c66bb6ada7f6a3e1f882f514ba3e-merged.mount: Deactivated successfully.
Nov 25 11:53:06 np0005535469 podman[365798]: 2025-11-25 16:53:06.249899198 +0000 UTC m=+0.210532452 container remove acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:53:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:06 np0005535469 systemd[1]: libpod-conmon-acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e.scope: Deactivated successfully.
Nov 25 11:53:06 np0005535469 podman[365840]: 2025-11-25 16:53:06.496447538 +0000 UTC m=+0.074492375 container create c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 11:53:06 np0005535469 systemd[1]: Started libpod-conmon-c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f.scope.
Nov 25 11:53:06 np0005535469 podman[365840]: 2025-11-25 16:53:06.473360231 +0000 UTC m=+0.051405098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:53:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:06 np0005535469 podman[365840]: 2025-11-25 16:53:06.589693193 +0000 UTC m=+0.167738060 container init c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 11:53:06 np0005535469 podman[365840]: 2025-11-25 16:53:06.595323426 +0000 UTC m=+0.173368263 container start c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:53:06 np0005535469 podman[365840]: 2025-11-25 16:53:06.59773354 +0000 UTC m=+0.175778417 container attach c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:53:06 np0005535469 nova_compute[254092]: 2025-11-25 16:53:06.611 254096 INFO nova.virt.libvirt.driver [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deleting instance files /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_del#033[00m
Nov 25 11:53:06 np0005535469 nova_compute[254092]: 2025-11-25 16:53:06.614 254096 INFO nova.virt.libvirt.driver [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deletion of /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_del complete#033[00m
Nov 25 11:53:06 np0005535469 nova_compute[254092]: 2025-11-25 16:53:06.662 254096 INFO nova.compute.manager [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:53:06 np0005535469 nova_compute[254092]: 2025-11-25 16:53:06.662 254096 DEBUG oslo.service.loopingcall [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:53:06 np0005535469 nova_compute[254092]: 2025-11-25 16:53:06.663 254096 DEBUG nova.compute.manager [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:53:06 np0005535469 nova_compute[254092]: 2025-11-25 16:53:06.663 254096 DEBUG nova.network.neutron [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:53:07 np0005535469 cool_gates[365857]: {
Nov 25 11:53:07 np0005535469 cool_gates[365857]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "osd_id": 1,
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "type": "bluestore"
Nov 25 11:53:07 np0005535469 cool_gates[365857]:    },
Nov 25 11:53:07 np0005535469 cool_gates[365857]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "osd_id": 2,
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "type": "bluestore"
Nov 25 11:53:07 np0005535469 cool_gates[365857]:    },
Nov 25 11:53:07 np0005535469 cool_gates[365857]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "osd_id": 0,
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:53:07 np0005535469 cool_gates[365857]:        "type": "bluestore"
Nov 25 11:53:07 np0005535469 cool_gates[365857]:    }
Nov 25 11:53:07 np0005535469 cool_gates[365857]: }
Nov 25 11:53:07 np0005535469 systemd[1]: libpod-c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f.scope: Deactivated successfully.
Nov 25 11:53:07 np0005535469 podman[365840]: 2025-11-25 16:53:07.53420906 +0000 UTC m=+1.112253897 container died c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:53:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480-merged.mount: Deactivated successfully.
Nov 25 11:53:07 np0005535469 nova_compute[254092]: 2025-11-25 16:53:07.580 254096 DEBUG nova.network.neutron [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:07 np0005535469 podman[365840]: 2025-11-25 16:53:07.587903999 +0000 UTC m=+1.165948836 container remove c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 11:53:07 np0005535469 nova_compute[254092]: 2025-11-25 16:53:07.596 254096 INFO nova.compute.manager [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 25 11:53:07 np0005535469 systemd[1]: libpod-conmon-c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f.scope: Deactivated successfully.
Nov 25 11:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:53:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:53:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:53:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11d8f14a-7314-47cd-9d67-5a9873ae493c does not exist
Nov 25 11:53:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e1c9161f-6687-4904-8d0c-802c95c201f6 does not exist
Nov 25 11:53:07 np0005535469 nova_compute[254092]: 2025-11-25 16:53:07.657 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:07 np0005535469 nova_compute[254092]: 2025-11-25 16:53:07.657 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:07 np0005535469 nova_compute[254092]: 2025-11-25 16:53:07.751 254096 DEBUG nova.compute.manager [req-6204caa5-cff5-433b-9381-be8c8d6412a4 req-30f65e21-e4b9-4b83-a76c-674ae8e9cc37 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-deleted-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:07 np0005535469 nova_compute[254092]: 2025-11-25 16:53:07.773 254096 DEBUG oslo_concurrency.processutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:53:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 178 op/s
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.022 254096 DEBUG nova.compute.manager [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.022 254096 DEBUG oslo_concurrency.lockutils [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.023 254096 DEBUG oslo_concurrency.lockutils [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.023 254096 DEBUG oslo_concurrency.lockutils [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.023 254096 DEBUG nova.compute.manager [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.024 254096 WARNING nova.compute.manager [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:53:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:53:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861149236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.223 254096 DEBUG oslo_concurrency.processutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.230 254096 DEBUG nova.compute.provider_tree [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.246 254096 DEBUG nova.scheduler.client.report [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.272 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.329 254096 INFO nova.scheduler.client.report [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Deleted allocations for instance 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5#033[00m
Nov 25 11:53:08 np0005535469 nova_compute[254092]: 2025-11-25 16:53:08.461 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:53:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.103 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.103 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.104 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.104 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.104 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.105 254096 INFO nova.compute.manager [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Terminating instance#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.106 254096 DEBUG nova.compute.manager [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:53:09 np0005535469 kernel: tap923a00bb-da (unregistering): left promiscuous mode
Nov 25 11:53:09 np0005535469 NetworkManager[48891]: <info>  [1764089589.1589] device (tap923a00bb-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:09Z|01106|binding|INFO|Releasing lport 923a00bb-da3b-434a-b154-c338c92e0635 from this chassis (sb_readonly=0)
Nov 25 11:53:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:09Z|01107|binding|INFO|Setting lport 923a00bb-da3b-434a-b154-c338c92e0635 down in Southbound
Nov 25 11:53:09 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:09Z|01108|binding|INFO|Removing iface tap923a00bb-da ovn-installed in OVS
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.178 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:31 10.100.0.13'], port_security=['fa:16:3e:4e:17:31 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d70119e-e45b-4a12-893e-8d5a805ca8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=923a00bb-da3b-434a-b154-c338c92e0635) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.179 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 923a00bb-da3b-434a-b154-c338c92e0635 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e unbound from our chassis#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.180 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.181 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6b7b01-b6e8-4035-a964-f02be84dfb50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.182 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e namespace which is not needed anymore#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 25 11:53:09 np0005535469 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006b.scope: Consumed 13.951s CPU time.
Nov 25 11:53:09 np0005535469 systemd-machined[216343]: Machine qemu-136-instance-0000006b terminated.
Nov 25 11:53:09 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : haproxy version is 2.8.14-c23fe91
Nov 25 11:53:09 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : path to executable is /usr/sbin/haproxy
Nov 25 11:53:09 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [WARNING]  (364049) : Exiting Master process...
Nov 25 11:53:09 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [ALERT]    (364049) : Current worker (364051) exited with code 143 (Terminated)
Nov 25 11:53:09 np0005535469 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [WARNING]  (364049) : All workers exited. Exiting... (0)
Nov 25 11:53:09 np0005535469 systemd[1]: libpod-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3.scope: Deactivated successfully.
Nov 25 11:53:09 np0005535469 podman[365998]: 2025-11-25 16:53:09.320074771 +0000 UTC m=+0.048120529 container died 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:53:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3-userdata-shm.mount: Deactivated successfully.
Nov 25 11:53:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8a699cce47daa3e0d0478b9596de6cbf41d67cddb37cef82117aeaf5f09e69db-merged.mount: Deactivated successfully.
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.349 254096 INFO nova.virt.libvirt.driver [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance destroyed successfully.#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.351 254096 DEBUG nova.objects.instance [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'resources' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:09 np0005535469 podman[365998]: 2025-11-25 16:53:09.357518029 +0000 UTC m=+0.085563787 container cleanup 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:53:09 np0005535469 systemd[1]: libpod-conmon-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3.scope: Deactivated successfully.
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.368 254096 DEBUG nova.virt.libvirt.vif [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-142320588',display_name='tempest-ServerRescueNegativeTestJSON-server-142320588',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-142320588',id=107,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-wqxsdvnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:04Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=6d70119e-e45b-4a12-893e-8d5a805ca8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.369 254096 DEBUG nova.network.os_vif_util [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.370 254096 DEBUG nova.network.os_vif_util [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.371 254096 DEBUG os_vif [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.374 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap923a00bb-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.380 254096 INFO os_vif [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da')#033[00m
Nov 25 11:53:09 np0005535469 podman[366040]: 2025-11-25 16:53:09.425592549 +0000 UTC m=+0.041232112 container remove 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.431 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11be4882-10f8-48e8-bd32-58c428874228]: (4, ('Tue Nov 25 04:53:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e (9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3)\n9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3\nTue Nov 25 04:53:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e (9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3)\n9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.433 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ff0162-c0d7-474f-a6d3-2c7c6ee831b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.434 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 kernel: tapcf0cb5b9-c0: left promiscuous mode
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.453 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[736c6749-bcec-46e6-85d3-f20870b7e25f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.465 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e412452-18a1-42b1-9b89-a1eb5a6c3c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.466 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af31e8a2-2476-4995-8101-926225f9a83f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e570401c-2ade-4105-baea-8d38e3475caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601448, 'reachable_time': 30208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366073, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.483 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:53:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.483 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaada79-abf2-411c-ad81-761741b15ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:09 np0005535469 systemd[1]: run-netns-ovnmeta\x2dcf0cb5b9\x2dc956\x2d4cd5\x2d8a3d\x2d7f55d142ae3e.mount: Deactivated successfully.
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.822 254096 INFO nova.virt.libvirt.driver [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deleting instance files /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab_del#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.823 254096 INFO nova.virt.libvirt.driver [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deletion of /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab_del complete#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.875 254096 INFO nova.compute.manager [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.875 254096 DEBUG oslo.service.loopingcall [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.875 254096 DEBUG nova.compute.manager [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:53:09 np0005535469 nova_compute[254092]: 2025-11-25 16:53:09.876 254096 DEBUG nova.network.neutron [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:53:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.029 254096 DEBUG nova.compute.manager [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-unplugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.029 254096 DEBUG oslo_concurrency.lockutils [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.029 254096 DEBUG oslo_concurrency.lockutils [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.030 254096 DEBUG oslo_concurrency.lockutils [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.030 254096 DEBUG nova.compute.manager [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] No waiting events found dispatching network-vif-unplugged-923a00bb-da3b-434a-b154-c338c92e0635 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.030 254096 DEBUG nova.compute.manager [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-unplugged-923a00bb-da3b-434a-b154-c338c92e0635 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:53:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:10Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:77:80 10.100.0.8
Nov 25 11:53:10 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:10Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:77:80 10.100.0.8
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.596 254096 DEBUG nova.network.neutron [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.612 254096 INFO nova.compute.manager [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 0.74 seconds to deallocate network for instance.#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.687 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.688 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:10 np0005535469 nova_compute[254092]: 2025-11-25 16:53:10.755 254096 DEBUG oslo_concurrency.processutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:53:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:53:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767569315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:53:11 np0005535469 nova_compute[254092]: 2025-11-25 16:53:11.181 254096 DEBUG oslo_concurrency.processutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:53:11 np0005535469 nova_compute[254092]: 2025-11-25 16:53:11.187 254096 DEBUG nova.compute.provider_tree [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:53:11 np0005535469 nova_compute[254092]: 2025-11-25 16:53:11.209 254096 DEBUG nova.scheduler.client.report [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:53:11 np0005535469 nova_compute[254092]: 2025-11-25 16:53:11.223 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:11 np0005535469 nova_compute[254092]: 2025-11-25 16:53:11.248 254096 INFO nova.scheduler.client.report [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Deleted allocations for instance 6d70119e-e45b-4a12-893e-8d5a805ca8ab#033[00m
Nov 25 11:53:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:11 np0005535469 nova_compute[254092]: 2025-11-25 16:53:11.327 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 109 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 294 op/s
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG oslo_concurrency.lockutils [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG oslo_concurrency.lockutils [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG oslo_concurrency.lockutils [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] No waiting events found dispatching network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.237 254096 WARNING nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received unexpected event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:53:12 np0005535469 nova_compute[254092]: 2025-11-25 16:53:12.237 254096 DEBUG nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-deleted-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:13.631 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 109 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 190 op/s
Nov 25 11:53:14 np0005535469 nova_compute[254092]: 2025-11-25 16:53:14.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:14 np0005535469 nova_compute[254092]: 2025-11-25 16:53:14.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 118 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 25 11:53:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:16Z|01109|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 11:53:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.279 254096 INFO nova.compute.manager [None req-348206ab-1bdc-4dab-95ff-bed7af5997fb 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Get console output#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.285 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.547 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.664 254096 DEBUG nova.objects.instance [None req-6e1ca346-2e9c-4fd1-ad37-303759815abd 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.685 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089596.6846845, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.705 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.710 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:16 np0005535469 nova_compute[254092]: 2025-11-25 16:53:16.726 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 11:53:17 np0005535469 kernel: tap1ef97ad6-07 (unregistering): left promiscuous mode
Nov 25 11:53:17 np0005535469 NetworkManager[48891]: <info>  [1764089597.3438] device (tap1ef97ad6-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01110|binding|INFO|Releasing lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 from this chassis (sb_readonly=0)
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01111|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 down in Southbound
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01112|binding|INFO|Removing iface tap1ef97ad6-07 ovn-installed in OVS
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.359 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.360 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.361 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.362 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[866b888b-9a83-4a60-850a-3db34e36b032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.362 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace which is not needed anymore#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 25 11:53:17 np0005535469 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006d.scope: Consumed 13.172s CPU time.
Nov 25 11:53:17 np0005535469 systemd-machined[216343]: Machine qemu-138-instance-0000006d terminated.
Nov 25 11:53:17 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : haproxy version is 2.8.14-c23fe91
Nov 25 11:53:17 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : path to executable is /usr/sbin/haproxy
Nov 25 11:53:17 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [WARNING]  (364854) : Exiting Master process...
Nov 25 11:53:17 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [ALERT]    (364854) : Current worker (364860) exited with code 143 (Terminated)
Nov 25 11:53:17 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [WARNING]  (364854) : All workers exited. Exiting... (0)
Nov 25 11:53:17 np0005535469 systemd[1]: libpod-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a.scope: Deactivated successfully.
Nov 25 11:53:17 np0005535469 podman[366125]: 2025-11-25 16:53:17.493160736 +0000 UTC m=+0.041541730 container died 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:17 np0005535469 kernel: tap1ef97ad6-07: entered promiscuous mode
Nov 25 11:53:17 np0005535469 NetworkManager[48891]: <info>  [1764089597.5101] manager: (tap1ef97ad6-07): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01113|binding|INFO|Claiming lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for this chassis.
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01114|binding|INFO|1ef97ad6-0798-4b3d-a9cc-562f9526ae38: Claiming fa:16:3e:5e:77:80 10.100.0.8
Nov 25 11:53:17 np0005535469 kernel: tap1ef97ad6-07 (unregistering): left promiscuous mode
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.602 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a-userdata-shm.mount: Deactivated successfully.
Nov 25 11:53:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3934ea0b4d9ff83643ea9a1fbc52010cb7a732e5e5a2e2bfce4873181e16e161-merged.mount: Deactivated successfully.
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.608 254096 DEBUG nova.compute.manager [None req-6e1ca346-2e9c-4fd1-ad37-303759815abd 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:17 np0005535469 podman[366125]: 2025-11-25 16:53:17.614962786 +0000 UTC m=+0.163343780 container cleanup 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01115|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 ovn-installed in OVS
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01116|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 up in Southbound
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.620 254096 DEBUG nova.compute.manager [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.620 254096 DEBUG oslo_concurrency.lockutils [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.621 254096 DEBUG oslo_concurrency.lockutils [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.622 254096 DEBUG oslo_concurrency.lockutils [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.624 254096 DEBUG nova.compute.manager [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.624 254096 WARNING nova.compute.manager [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state active and task_state suspending.#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01117|binding|INFO|Releasing lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 from this chassis (sb_readonly=0)
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01118|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 down in Southbound
Nov 25 11:53:17 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:17Z|01119|binding|INFO|Removing iface tap1ef97ad6-07 ovn-installed in OVS
Nov 25 11:53:17 np0005535469 systemd[1]: libpod-conmon-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a.scope: Deactivated successfully.
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.630 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 podman[366166]: 2025-11-25 16:53:17.681002901 +0000 UTC m=+0.041818498 container remove 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.686 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba7c842-391d-4375-a9c5-b6a0cdd13891]: (4, ('Tue Nov 25 04:53:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a)\n1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a\nTue Nov 25 04:53:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a)\n1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.688 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b676e580-851b-49db-a006-45baea669133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.688 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 kernel: tapaa7a2aa2-90: left promiscuous mode
Nov 25 11:53:17 np0005535469 nova_compute[254092]: 2025-11-25 16:53:17.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bad61903-f4f8-40da-aa98-cb43198fb21d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe4a6e6-c838-45b3-9ba3-77081032eef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.738 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3248509-bc74-4845-bcee-6da9243435b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.753 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f297d223-3e1b-40f5-b1d7-25eaeb3fd8b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603519, 'reachable_time': 20349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366185, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 systemd[1]: run-netns-ovnmeta\x2daa7a2aa2\x2d9e73\x2d498f\x2dbac7\x2d4dcf3eb7c3d1.mount: Deactivated successfully.
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.757 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.757 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f72970c8-4364-4aaf-a01e-bb04581820ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.758 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.759 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.760 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9aadc93a-cc71-432c-b152-218505401fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.761 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.762 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:53:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[658d0c88-943a-4c10-ae58-9454557a96da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 146 op/s
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.008 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.801 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:19 np0005535469 nova_compute[254092]: 2025-11-25 16:53:19.801 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 11:53:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.623 254096 INFO nova.compute.manager [None req-4b26ad17-a388-4efb-928b-006f23c09fb9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Get console output#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.798 254096 INFO nova.compute.manager [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Resuming#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.800 254096 DEBUG nova.objects.instance [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.807 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089585.807225, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.808 254096 INFO nova.compute.manager [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.823 254096 DEBUG nova.compute.manager [None req-20deafa6-e275-4822-9640-cbded10aef0e - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.831 254096 DEBUG oslo_concurrency.lockutils [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.832 254096 DEBUG oslo_concurrency.lockutils [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:53:20 np0005535469 nova_compute[254092]: 2025-11-25 16:53:20.832 254096 DEBUG nova.network.neutron [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:53:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:53:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 11:53:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:53:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395284670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:53:21 np0005535469 nova_compute[254092]: 2025-11-25 16:53:21.964 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.039 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.040 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.168 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.169 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3742MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.169 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.170 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b12625ea-31bf-4599-a248-4c6ced8e59c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.267 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:53:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:53:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634087155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.743 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.748 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.765 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.786 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.786 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.890 254096 DEBUG nova.network.neutron [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.903 254096 DEBUG oslo_concurrency.lockutils [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.907 254096 DEBUG nova.virt.libvirt.vif [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:17Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.908 254096 DEBUG nova.network.os_vif_util [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.908 254096 DEBUG nova.network.os_vif_util [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.908 254096 DEBUG os_vif [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.909 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.910 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ef97ad6-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ef97ad6-07, col_values=(('external_ids', {'iface-id': '1ef97ad6-0798-4b3d-a9cc-562f9526ae38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:77:80', 'vm-uuid': 'b12625ea-31bf-4599-a248-4c6ced8e59c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.913 254096 INFO os_vif [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07')#033[00m
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.926 254096 DEBUG nova.objects.instance [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:22 np0005535469 kernel: tap1ef97ad6-07: entered promiscuous mode
Nov 25 11:53:22 np0005535469 NetworkManager[48891]: <info>  [1764089602.9919] manager: (tap1ef97ad6-07): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Nov 25 11:53:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:22Z|01120|binding|INFO|Claiming lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for this chassis.
Nov 25 11:53:22 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:22Z|01121|binding|INFO|1ef97ad6-0798-4b3d-a9cc-562f9526ae38: Claiming fa:16:3e:5e:77:80 10.100.0.8
Nov 25 11:53:22 np0005535469 nova_compute[254092]: 2025-11-25 16:53:22.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.000 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.001 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 bound to our chassis#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.002 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:23Z|01122|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 ovn-installed in OVS
Nov 25 11:53:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:23Z|01123|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 up in Southbound
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[23b9e6fd-6eaa-404d-9fcc-9e46eb1c5be6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.015 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa7a2aa2-91 in ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.017 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa7a2aa2-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.017 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe947cf-99ae-4fea-a39a-301fe5a2d214]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[49893d48-81c0-4522-a7ae-7a33ea9b0e63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 systemd-udevd[366244]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:53:23 np0005535469 systemd-machined[216343]: New machine qemu-140-instance-0000006d.
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.031 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9163c010-5839-4688-a679-4aa119994165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 NetworkManager[48891]: <info>  [1764089603.0393] device (tap1ef97ad6-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:53:23 np0005535469 NetworkManager[48891]: <info>  [1764089603.0404] device (tap1ef97ad6-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:53:23 np0005535469 systemd[1]: Started Virtual Machine qemu-140-instance-0000006d.
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71c6b6e9-e90a-4735-b4eb-c2b458b7b265]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.078 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1002473a-90af-4f6a-972c-57faec473794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[410c3aba-27f8-4dbb-9594-1ce451e0e5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 NetworkManager[48891]: <info>  [1764089603.0838] manager: (tapaa7a2aa2-90): new Veth device (/org/freedesktop/NetworkManager/Devices/456)
Nov 25 11:53:23 np0005535469 systemd-udevd[366248]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.110 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[164cc12b-50c8-409c-b57f-096cec6c4e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.113 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c976627-8e45-4818-9543-e63777a6001f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 NetworkManager[48891]: <info>  [1764089603.1314] device (tapaa7a2aa2-90): carrier: link connected
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.135 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[540feadd-ab32-4f07-affd-658c2e548317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.150 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e50ee7d9-5289-4f32-8341-61c80d445666]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606072, 'reachable_time': 28152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366277, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a070af18-f933-4faf-8a49-5c721232c5d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:f6a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606072, 'tstamp': 606072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366278, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.185 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3e29d7-91cd-4395-82b3-c67e65c14313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606072, 'reachable_time': 28152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366279, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.216 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a47fd36-3fa0-46b9-8c76-549b46690a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.268 254096 DEBUG nova.compute.manager [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.269 254096 DEBUG oslo_concurrency.lockutils [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.269 254096 DEBUG oslo_concurrency.lockutils [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.269 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8b64e1-d7ac-4f01-9ebc-7eea4d81c804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.270 254096 DEBUG oslo_concurrency.lockutils [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.270 254096 DEBUG nova.compute.manager [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.270 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.270 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.271 254096 WARNING nova.compute.manager [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.271 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa7a2aa2-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:23 np0005535469 kernel: tapaa7a2aa2-90: entered promiscuous mode
Nov 25 11:53:23 np0005535469 NetworkManager[48891]: <info>  [1764089603.2735] manager: (tapaa7a2aa2-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.275 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa7a2aa2-90, col_values=(('external_ids', {'iface-id': '4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.278 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:53:23 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:23Z|01124|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.290 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1cca3c-e1d8-4906-b095-cbc0f603ff3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.292 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:53:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.294 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'env', 'PROCESS_TAG=haproxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.631 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for b12625ea-31bf-4599-a248-4c6ced8e59c2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.633 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089603.6308513, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.633 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Started (Lifecycle Event)#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.649 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:23 np0005535469 podman[366353]: 2025-11-25 16:53:23.667192327 +0000 UTC m=+0.056081425 container create e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.672 254096 DEBUG nova.compute.manager [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.674 254096 DEBUG nova.objects.instance [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.678 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.691 254096 INFO nova.virt.libvirt.driver [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance running successfully.#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.692 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.692 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089603.6360326, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.693 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:53:23 np0005535469 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.695 254096 DEBUG nova.virt.libvirt.guest [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.695 254096 DEBUG nova.compute.manager [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.713 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.716 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:53:23 np0005535469 systemd[1]: Started libpod-conmon-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8.scope.
Nov 25 11:53:23 np0005535469 podman[366353]: 2025-11-25 16:53:23.64045317 +0000 UTC m=+0.029342278 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:53:23 np0005535469 nova_compute[254092]: 2025-11-25 16:53:23.741 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 11:53:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:53:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6467f1eaed3cd66ef8c9f25e2f7b2beb1393d9ccf64e1b42e45ebee6456e03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:53:23 np0005535469 podman[366353]: 2025-11-25 16:53:23.76631726 +0000 UTC m=+0.155206388 container init e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:53:23 np0005535469 podman[366353]: 2025-11-25 16:53:23.771331807 +0000 UTC m=+0.160220905 container start e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 11:53:23 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : New worker (366374) forked
Nov 25 11:53:23 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : Loading success.
Nov 25 11:53:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 105 KiB/s wr, 18 op/s
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.345 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089589.3433707, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.345 254096 INFO nova.compute.manager [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Stopped (Lifecycle Event)
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.366 254096 DEBUG nova.compute.manager [None req-ff73578c-f1e2-4a4d-a7e9-97ee0bf1b333 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:53:24 np0005535469 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.304 254096 INFO nova.compute.manager [None req-82471993-ac31-4782-b28e-389e5fb37240 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Get console output
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.309 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.467 254096 DEBUG nova.compute.manager [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.467 254096 DEBUG oslo_concurrency.lockutils [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 DEBUG oslo_concurrency.lockutils [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 DEBUG oslo_concurrency.lockutils [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 DEBUG nova.compute.manager [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 WARNING nova.compute.manager [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state active and task_state None.
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:53:25 np0005535469 podman[366384]: 2025-11-25 16:53:25.655460618 +0000 UTC m=+0.066216210 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 11:53:25 np0005535469 podman[366383]: 2025-11-25 16:53:25.686327467 +0000 UTC m=+0.098501378 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:53:25 np0005535469 podman[366385]: 2025-11-25 16:53:25.688032273 +0000 UTC m=+0.095635889 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.852 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.852 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.853 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:53:25 np0005535469 nova_compute[254092]: 2025-11-25 16:53:25.853 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:53:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 105 KiB/s wr, 22 op/s
Nov 25 11:53:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.796 254096 DEBUG nova.compute.manager [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.797 254096 DEBUG nova.compute.manager [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing instance network info cache due to event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.797 254096 DEBUG oslo_concurrency.lockutils [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.935 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.935 254096 INFO nova.compute.manager [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Terminating instance
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.936 254096 DEBUG nova.compute.manager [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:53:26 np0005535469 kernel: tap1ef97ad6-07 (unregistering): left promiscuous mode
Nov 25 11:53:26 np0005535469 NetworkManager[48891]: <info>  [1764089606.9764] device (tap1ef97ad6-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:53:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:26Z|01125|binding|INFO|Releasing lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 from this chassis (sb_readonly=0)
Nov 25 11:53:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:26Z|01126|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 down in Southbound
Nov 25 11:53:26 np0005535469 ovn_controller[153477]: 2025-11-25T16:53:26Z|01127|binding|INFO|Removing iface tap1ef97ad6-07 ovn-installed in OVS
Nov 25 11:53:26 np0005535469 nova_compute[254092]: 2025-11-25 16:53:26.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:53:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.992 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:53:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.994 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis
Nov 25 11:53:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.995 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:53:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[258622cb-dfe7-46a9-888e-5ed7f567d631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:53:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.996 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace which is not needed anymore
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:53:27 np0005535469 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 25 11:53:27 np0005535469 systemd-machined[216343]: Machine qemu-140-instance-0000006d terminated.
Nov 25 11:53:27 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : haproxy version is 2.8.14-c23fe91
Nov 25 11:53:27 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : path to executable is /usr/sbin/haproxy
Nov 25 11:53:27 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [WARNING]  (366372) : Exiting Master process...
Nov 25 11:53:27 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [ALERT]    (366372) : Current worker (366374) exited with code 143 (Terminated)
Nov 25 11:53:27 np0005535469 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [WARNING]  (366372) : All workers exited. Exiting... (0)
Nov 25 11:53:27 np0005535469 systemd[1]: libpod-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8.scope: Deactivated successfully.
Nov 25 11:53:27 np0005535469 podman[366472]: 2025-11-25 16:53:27.128859088 +0000 UTC m=+0.046597787 container died e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:53:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8-userdata-shm.mount: Deactivated successfully.
Nov 25 11:53:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5d6467f1eaed3cd66ef8c9f25e2f7b2beb1393d9ccf64e1b42e45ebee6456e03-merged.mount: Deactivated successfully.
Nov 25 11:53:27 np0005535469 podman[366472]: 2025-11-25 16:53:27.170283134 +0000 UTC m=+0.088021833 container cleanup e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:53:27 np0005535469 systemd[1]: libpod-conmon-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8.scope: Deactivated successfully.
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.179 254096 INFO nova.virt.libvirt.driver [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance destroyed successfully.
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.179 254096 DEBUG nova.objects.instance [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.189 254096 DEBUG nova.virt.libvirt.vif [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:23Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.189 254096 DEBUG nova.network.os_vif_util [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.190 254096 DEBUG nova.network.os_vif_util [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.190 254096 DEBUG os_vif [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.192 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ef97ad6-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.197 254096 INFO os_vif [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07')#033[00m
Nov 25 11:53:27 np0005535469 podman[366511]: 2025-11-25 16:53:27.240386809 +0000 UTC m=+0.043269366 container remove e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.248 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0b50d11a-0f1d-4e1f-9618-e2814902f721]: (4, ('Tue Nov 25 04:53:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8)\ne866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8\nTue Nov 25 04:53:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8)\ne866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.250 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47514d72-e114-4ac9-a785-db9d0b88618b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.252 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:27 np0005535469 kernel: tapaa7a2aa2-90: left promiscuous mode
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.275 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29e633e3-8022-41d4-a5d7-b2949dd8560d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ec84c127-4d1c-47ca-b2a0-ed8f8e3e3a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[242de98a-02f3-4ecf-ab67-cbefb12137dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c55c7b15-7e7b-4e26-b609-2d7fa760ade1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606066, 'reachable_time': 38145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366545, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.325 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:53:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.325 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[56f585d7-5de6-47d1-a50a-4be83606976d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:27 np0005535469 systemd[1]: run-netns-ovnmeta\x2daa7a2aa2\x2d9e73\x2d498f\x2dbac7\x2d4dcf3eb7c3d1.mount: Deactivated successfully.
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.518 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.545 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.545 254096 DEBUG oslo_concurrency.lockutils [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.545 254096 DEBUG nova.network.neutron [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.601 254096 INFO nova.virt.libvirt.driver [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deleting instance files /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2_del#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.601 254096 INFO nova.virt.libvirt.driver [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deletion of /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2_del complete#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.660 254096 INFO nova.compute.manager [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.661 254096 DEBUG oslo.service.loopingcall [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.661 254096 DEBUG nova.compute.manager [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.661 254096 DEBUG nova.network.neutron [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.726 254096 DEBUG nova.compute.manager [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG oslo_concurrency.lockutils [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG oslo_concurrency.lockutils [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG oslo_concurrency.lockutils [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG nova.compute.manager [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:27 np0005535469 nova_compute[254092]: 2025-11-25 16:53:27.728 254096 DEBUG nova.compute.manager [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:53:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 28 KiB/s wr, 13 op/s
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.365 254096 DEBUG nova.network.neutron [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.386 254096 INFO nova.compute.manager [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.445 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.446 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.503 254096 DEBUG oslo_concurrency.processutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.924 254096 DEBUG nova.compute.manager [req-1b20025f-bd19-4c25-a578-e70e8a227d2d req-94144e38-b2af-4983-97a6-bb020d38adb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-deleted-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:53:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146727220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.969 254096 DEBUG oslo_concurrency.processutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.975 254096 DEBUG nova.compute.provider_tree [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:53:28 np0005535469 nova_compute[254092]: 2025-11-25 16:53:28.987 254096 DEBUG nova.scheduler.client.report [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.005 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.033 254096 INFO nova.scheduler.client.report [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance b12625ea-31bf-4599-a248-4c6ced8e59c2#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.083 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.601 254096 DEBUG nova.network.neutron [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated VIF entry in instance network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.602 254096 DEBUG nova.network.neutron [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.630 254096 DEBUG oslo_concurrency.lockutils [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:53:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.940 254096 DEBUG nova.compute.manager [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.941 254096 DEBUG oslo_concurrency.lockutils [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 DEBUG oslo_concurrency.lockutils [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 DEBUG oslo_concurrency.lockutils [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 DEBUG nova.compute.manager [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:53:29 np0005535469 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 WARNING nova.compute.manager [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 11:53:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 13 KiB/s wr, 33 op/s
Nov 25 11:53:32 np0005535469 nova_compute[254092]: 2025-11-25 16:53:32.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:33 np0005535469 nova_compute[254092]: 2025-11-25 16:53:33.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:33 np0005535469 nova_compute[254092]: 2025-11-25 16:53:33.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 11:53:34 np0005535469 nova_compute[254092]: 2025-11-25 16:53:34.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 11:53:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:36 np0005535469 nova_compute[254092]: 2025-11-25 16:53:36.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:36 np0005535469 nova_compute[254092]: 2025-11-25 16:53:36.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 11:53:36 np0005535469 nova_compute[254092]: 2025-11-25 16:53:36.530 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 11:53:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.774 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:5a:27 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a791f910-85c1-412c-90b8-601e3d703b65, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5cfd93ef-7743-40d7-8957-75a8c3d82550) old=Port_Binding(mac=['fa:16:3e:f5:5a:27 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.775 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5cfd93ef-7743-40d7-8957-75a8c3d82550 in datapath 8306a8bb-cdd9-40cb-9050-ddc2efdcd179 updated#033[00m
Nov 25 11:53:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.776 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8306a8bb-cdd9-40cb-9050-ddc2efdcd179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:53:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.777 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9591658-1b0c-44eb-9c46-468f78d4218d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:53:37 np0005535469 nova_compute[254092]: 2025-11-25 16:53:37.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:37 np0005535469 nova_compute[254092]: 2025-11-25 16:53:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:53:37 np0005535469 nova_compute[254092]: 2025-11-25 16:53:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 11:53:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 11:53:39 np0005535469 nova_compute[254092]: 2025-11-25 16:53:39.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:53:40
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', 'backups', '.mgr']
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 32K writes, 132K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.89 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9383 writes, 36K keys, 9383 commit groups, 1.0 writes per commit group, ingest: 35.89 MB, 0.06 MB/s#012Interval WAL: 9384 writes, 3775 syncs, 2.49 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:53:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 11:53:42 np0005535469 nova_compute[254092]: 2025-11-25 16:53:42.177 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089607.1757858, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:53:42 np0005535469 nova_compute[254092]: 2025-11-25 16:53:42.177 254096 INFO nova.compute.manager [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:53:42 np0005535469 nova_compute[254092]: 2025-11-25 16:53:42.250 254096 DEBUG nova.compute.manager [None req-27e55956-1eeb-478b-a89c-c6ae02145609 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:53:42 np0005535469 nova_compute[254092]: 2025-11-25 16:53:42.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:43.603 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:53:43 np0005535469 nova_compute[254092]: 2025-11-25 16:53:43.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:43.605 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:53:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:44 np0005535469 nova_compute[254092]: 2025-11-25 16:53:44.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:47 np0005535469 nova_compute[254092]: 2025-11-25 16:53:47.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:49 np0005535469 nova_compute[254092]: 2025-11-25 16:53:49.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3601.2 total, 600.0 interval#012Cumulative writes: 34K writes, 133K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.87 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9033 writes, 34K keys, 9033 commit groups, 1.0 writes per commit group, ingest: 35.03 MB, 0.06 MB/s#012Interval WAL: 9033 writes, 3578 syncs, 2.52 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:53:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:53:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:52 np0005535469 nova_compute[254092]: 2025-11-25 16:53:52.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:53:52.606 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:53:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:54 np0005535469 nova_compute[254092]: 2025-11-25 16:53:54.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:53:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141173161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:53:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141173161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:53:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:53:56 np0005535469 podman[366571]: 2025-11-25 16:53:56.640925948 +0000 UTC m=+0.059716097 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:53:56 np0005535469 podman[366570]: 2025-11-25 16:53:56.663468192 +0000 UTC m=+0.082920418 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:53:56 np0005535469 podman[366572]: 2025-11-25 16:53:56.683361554 +0000 UTC m=+0.094037712 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:53:57 np0005535469 nova_compute[254092]: 2025-11-25 16:53:57.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:53:59 np0005535469 nova_compute[254092]: 2025-11-25 16:53:59.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:53:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:02 np0005535469 nova_compute[254092]: 2025-11-25 16:54:02.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.564 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7c702cf-93bf-4690-931b-d19f31e3f140, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=677dcda5-8641-460d-b058-dce18b1330bf) old=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:54:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.566 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 677dcda5-8641-460d-b058-dce18b1330bf in datapath fda12e20-ad43-4206-95a1-86c7a084bd24 updated
Nov 25 11:54:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.567 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fda12e20-ad43-4206-95a1-86c7a084bd24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:54:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[08446be3-bc98-47dd-b5b9-ae7b1c30c8c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:54:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:04 np0005535469 nova_compute[254092]: 2025-11-25 16:54:04.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:54:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:07 np0005535469 nova_compute[254092]: 2025-11-25 16:54:07.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 11:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3601.5 total, 600.0 interval
Cumulative writes: 29K writes, 116K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
Cumulative WAL: 29K writes, 10K syncs, 2.87 writes per sync, written: 0.11 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8344 writes, 32K keys, 8344 commit groups, 1.0 writes per commit group, ingest: 34.73 MB, 0.06 MB/s
Interval WAL: 8344 writes, 3341 syncs, 2.50 writes per sync, written: 0.03 GB, 0.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 11:54:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:09 np0005535469 nova_compute[254092]: 2025-11-25 16:54:09.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ff8a111a-ca02-4923-a418-97831a29ab35 does not exist
Nov 25 11:54:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6491f478-3496-4ecd-9613-ff8a0f0ef3cb does not exist
Nov 25 11:54:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e9db6add-d45b-4d88-b6c9-602adb6a7de3 does not exist
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.867735561 +0000 UTC m=+0.057994011 container create 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 11:54:09 np0005535469 systemd[1]: Started libpod-conmon-47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26.scope.
Nov 25 11:54:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.935230399 +0000 UTC m=+0.125488849 container init 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:54:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.943071733 +0000 UTC m=+0.133330173 container start 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.84898968 +0000 UTC m=+0.039248130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.946520577 +0000 UTC m=+0.136779027 container attach 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:54:09 np0005535469 pensive_khorana[367038]: 167 167
Nov 25 11:54:09 np0005535469 systemd[1]: libpod-47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26.scope: Deactivated successfully.
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.947824122 +0000 UTC m=+0.138082602 container died 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 11:54:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-35a4d1ea74aad40cb31332298718d92d7ad5d60142ab1565d0b7102bf3a2874d-merged.mount: Deactivated successfully.
Nov 25 11:54:09 np0005535469 podman[367022]: 2025-11-25 16:54:09.985671013 +0000 UTC m=+0.175929493 container remove 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:54:10 np0005535469 systemd[1]: libpod-conmon-47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26.scope: Deactivated successfully.
Nov 25 11:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:54:10 np0005535469 podman[367062]: 2025-11-25 16:54:10.156714341 +0000 UTC m=+0.034031608 container create 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:54:10 np0005535469 systemd[1]: Started libpod-conmon-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope.
Nov 25 11:54:10 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:54:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:10 np0005535469 podman[367062]: 2025-11-25 16:54:10.235206519 +0000 UTC m=+0.112523826 container init 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:54:10 np0005535469 podman[367062]: 2025-11-25 16:54:10.142110014 +0000 UTC m=+0.019427281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:54:10 np0005535469 podman[367062]: 2025-11-25 16:54:10.244868312 +0000 UTC m=+0.122185599 container start 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:54:10 np0005535469 podman[367062]: 2025-11-25 16:54:10.248149151 +0000 UTC m=+0.125466498 container attach 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:54:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:11 np0005535469 nifty_jones[367079]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:54:11 np0005535469 nifty_jones[367079]: --> relative data size: 1.0
Nov 25 11:54:11 np0005535469 nifty_jones[367079]: --> All data devices are unavailable
Nov 25 11:54:11 np0005535469 systemd[1]: libpod-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope: Deactivated successfully.
Nov 25 11:54:11 np0005535469 systemd[1]: libpod-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope: Consumed 1.001s CPU time.
Nov 25 11:54:11 np0005535469 conmon[367079]: conmon 7e89a45c5bbcc66d41cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope/container/memory.events
Nov 25 11:54:11 np0005535469 podman[367062]: 2025-11-25 16:54:11.305083059 +0000 UTC m=+1.182400326 container died 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:54:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71-merged.mount: Deactivated successfully.
Nov 25 11:54:11 np0005535469 podman[367062]: 2025-11-25 16:54:11.354761992 +0000 UTC m=+1.232079269 container remove 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:54:11 np0005535469 systemd[1]: libpod-conmon-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope: Deactivated successfully.
Nov 25 11:54:11 np0005535469 podman[367258]: 2025-11-25 16:54:11.928529368 +0000 UTC m=+0.034822019 container create e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:54:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:11 np0005535469 systemd[1]: Started libpod-conmon-e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd.scope.
Nov 25 11:54:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:54:11 np0005535469 podman[367258]: 2025-11-25 16:54:11.984290257 +0000 UTC m=+0.090582928 container init e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:54:11 np0005535469 podman[367258]: 2025-11-25 16:54:11.991240137 +0000 UTC m=+0.097532788 container start e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:54:11 np0005535469 podman[367258]: 2025-11-25 16:54:11.9946836 +0000 UTC m=+0.100976251 container attach e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 11:54:11 np0005535469 competent_cerf[367274]: 167 167
Nov 25 11:54:11 np0005535469 systemd[1]: libpod-e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd.scope: Deactivated successfully.
Nov 25 11:54:11 np0005535469 podman[367258]: 2025-11-25 16:54:11.996691115 +0000 UTC m=+0.102983766 container died e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 11:54:12 np0005535469 podman[367258]: 2025-11-25 16:54:11.914724412 +0000 UTC m=+0.021017083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:54:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4cf68c465cfbc91d416633d26ae9b29838d96983d74b1d6d4ce8aa39ea856a04-merged.mount: Deactivated successfully.
Nov 25 11:54:12 np0005535469 podman[367258]: 2025-11-25 16:54:12.024292387 +0000 UTC m=+0.130585038 container remove e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:54:12 np0005535469 systemd[1]: libpod-conmon-e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd.scope: Deactivated successfully.
Nov 25 11:54:12 np0005535469 podman[367298]: 2025-11-25 16:54:12.215436333 +0000 UTC m=+0.064932430 container create 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:54:12 np0005535469 podman[367298]: 2025-11-25 16:54:12.170414937 +0000 UTC m=+0.019911074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:54:12 np0005535469 nova_compute[254092]: 2025-11-25 16:54:12.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:54:12 np0005535469 systemd[1]: Started libpod-conmon-9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0.scope.
Nov 25 11:54:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:54:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:12 np0005535469 podman[367298]: 2025-11-25 16:54:12.41177176 +0000 UTC m=+0.261267857 container init 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:54:12 np0005535469 podman[367298]: 2025-11-25 16:54:12.42392422 +0000 UTC m=+0.273420297 container start 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 11:54:12 np0005535469 podman[367298]: 2025-11-25 16:54:12.488627823 +0000 UTC m=+0.338123930 container attach 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]: {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:    "0": [
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:        {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "devices": [
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "/dev/loop3"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            ],
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_name": "ceph_lv0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_size": "21470642176",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "name": "ceph_lv0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "tags": {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cluster_name": "ceph",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.crush_device_class": "",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.encrypted": "0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osd_id": "0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.type": "block",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.vdo": "0"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            },
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "type": "block",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "vg_name": "ceph_vg0"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:        }
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:    ],
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:    "1": [
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:        {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "devices": [
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "/dev/loop4"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            ],
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_name": "ceph_lv1",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_size": "21470642176",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "name": "ceph_lv1",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "tags": {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cluster_name": "ceph",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.crush_device_class": "",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.encrypted": "0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osd_id": "1",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.type": "block",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.vdo": "0"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            },
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "type": "block",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "vg_name": "ceph_vg1"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:        }
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:    ],
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:    "2": [
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:        {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "devices": [
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "/dev/loop5"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            ],
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_name": "ceph_lv2",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_size": "21470642176",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "name": "ceph_lv2",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "tags": {
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.cluster_name": "ceph",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.crush_device_class": "",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.encrypted": "0",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osd_id": "2",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.type": "block",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:                "ceph.vdo": "0"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            },
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "type": "block",
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:            "vg_name": "ceph_vg2"
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:        }
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]:    ]
Nov 25 11:54:13 np0005535469 interesting_rhodes[367315]: }
Nov 25 11:54:13 np0005535469 systemd[1]: libpod-9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0.scope: Deactivated successfully.
Nov 25 11:54:13 np0005535469 podman[367298]: 2025-11-25 16:54:13.214611695 +0000 UTC m=+1.064107772 container died 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 11:54:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa-merged.mount: Deactivated successfully.
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.410 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7c702cf-93bf-4690-931b-d19f31e3f140, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=677dcda5-8641-460d-b058-dce18b1330bf) old=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.413 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 677dcda5-8641-460d-b058-dce18b1330bf in datapath fda12e20-ad43-4206-95a1-86c7a084bd24 updated#033[00m
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.414 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fda12e20-ad43-4206-95a1-86c7a084bd24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd58c851-1584-4a23-adcb-69c3991684ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:54:13 np0005535469 podman[367298]: 2025-11-25 16:54:13.424321267 +0000 UTC m=+1.273817344 container remove 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:54:13 np0005535469 systemd[1]: libpod-conmon-9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0.scope: Deactivated successfully.
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.631 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:54:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:14.01703901 +0000 UTC m=+0.042475367 container create 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:54:14 np0005535469 nova_compute[254092]: 2025-11-25 16:54:14.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:14 np0005535469 systemd[1]: Started libpod-conmon-088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d.scope.
Nov 25 11:54:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:14.090733027 +0000 UTC m=+0.116169384 container init 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:13.99937371 +0000 UTC m=+0.024810097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:14.097685147 +0000 UTC m=+0.123121504 container start 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:14.100263797 +0000 UTC m=+0.125700154 container attach 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:54:14 np0005535469 jolly_euclid[367496]: 167 167
Nov 25 11:54:14 np0005535469 systemd[1]: libpod-088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d.scope: Deactivated successfully.
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:14.102973201 +0000 UTC m=+0.128409558 container died 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:54:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cbae5afc2a52d6e39ec4475975b2d7242b1dbebc5d96db0eef38db2eb29dea76-merged.mount: Deactivated successfully.
Nov 25 11:54:14 np0005535469 podman[367480]: 2025-11-25 16:54:14.131702573 +0000 UTC m=+0.157138930 container remove 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:54:14 np0005535469 systemd[1]: libpod-conmon-088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d.scope: Deactivated successfully.
Nov 25 11:54:14 np0005535469 podman[367519]: 2025-11-25 16:54:14.2883561 +0000 UTC m=+0.035406015 container create 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 11:54:14 np0005535469 systemd[1]: Started libpod-conmon-79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5.scope.
Nov 25 11:54:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:54:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:54:14 np0005535469 podman[367519]: 2025-11-25 16:54:14.364257947 +0000 UTC m=+0.111307902 container init 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:54:14 np0005535469 podman[367519]: 2025-11-25 16:54:14.273271599 +0000 UTC m=+0.020321524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:54:14 np0005535469 podman[367519]: 2025-11-25 16:54:14.369746437 +0000 UTC m=+0.116796362 container start 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:54:14 np0005535469 podman[367519]: 2025-11-25 16:54:14.37278536 +0000 UTC m=+0.119835295 container attach 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]: {
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "osd_id": 1,
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "type": "bluestore"
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:    },
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "osd_id": 2,
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "type": "bluestore"
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:    },
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "osd_id": 0,
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:        "type": "bluestore"
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]:    }
Nov 25 11:54:15 np0005535469 wonderful_shaw[367535]: }
Nov 25 11:54:15 np0005535469 systemd[1]: libpod-79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5.scope: Deactivated successfully.
Nov 25 11:54:15 np0005535469 podman[367519]: 2025-11-25 16:54:15.317589282 +0000 UTC m=+1.064639207 container died 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:54:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1-merged.mount: Deactivated successfully.
Nov 25 11:54:15 np0005535469 podman[367519]: 2025-11-25 16:54:15.366667779 +0000 UTC m=+1.113717704 container remove 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:54:15 np0005535469 systemd[1]: libpod-conmon-79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5.scope: Deactivated successfully.
Nov 25 11:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:54:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:54:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6a8f14b0-e1c1-45ea-9dd6-1c84d86f9500 does not exist
Nov 25 11:54:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 20f7cd01-07d8-444c-ad44-38d94fc8f60a does not exist
Nov 25 11:54:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:16 np0005535469 ovn_controller[153477]: 2025-11-25T16:54:16Z|01128|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 11:54:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:54:16 np0005535469 nova_compute[254092]: 2025-11-25 16:54:16.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:17 np0005535469 nova_compute[254092]: 2025-11-25 16:54:17.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:19 np0005535469 nova_compute[254092]: 2025-11-25 16:54:19.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:19 np0005535469 nova_compute[254092]: 2025-11-25 16:54:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:54:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130077105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:54:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:21 np0005535469 nova_compute[254092]: 2025-11-25 16:54:21.958 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.129 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.132 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3747MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.390 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.390 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:54:22 np0005535469 nova_compute[254092]: 2025-11-25 16:54:22.481 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:54:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:24 np0005535469 nova_compute[254092]: 2025-11-25 16:54:24.036 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 11:54:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:54:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832087066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:54:26 np0005535469 nova_compute[254092]: 2025-11-25 16:54:26.746 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:54:26 np0005535469 nova_compute[254092]: 2025-11-25 16:54:26.753 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:54:26 np0005535469 nova_compute[254092]: 2025-11-25 16:54:26.764 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:54:27 np0005535469 nova_compute[254092]: 2025-11-25 16:54:27.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:27 np0005535469 podman[367676]: 2025-11-25 16:54:27.642162996 +0000 UTC m=+0.054069554 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:54:27 np0005535469 podman[367675]: 2025-11-25 16:54:27.642726011 +0000 UTC m=+0.058634888 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:54:27 np0005535469 podman[367677]: 2025-11-25 16:54:27.679373069 +0000 UTC m=+0.084041910 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:54:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:28 np0005535469 nova_compute[254092]: 2025-11-25 16:54:28.711 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:54:28 np0005535469 nova_compute[254092]: 2025-11-25 16:54:28.712 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:54:29 np0005535469 nova_compute[254092]: 2025-11-25 16:54:29.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.711 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.711 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.712 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.712 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.724 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.736 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:54:30 np0005535469 nova_compute[254092]: 2025-11-25 16:54:30.736 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:54:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:32 np0005535469 nova_compute[254092]: 2025-11-25 16:54:32.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:34 np0005535469 nova_compute[254092]: 2025-11-25 16:54:34.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:37 np0005535469 nova_compute[254092]: 2025-11-25 16:54:37.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:39 np0005535469 nova_compute[254092]: 2025-11-25 16:54:39.043 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:54:40
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'default.rgw.control']
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:54:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:42 np0005535469 nova_compute[254092]: 2025-11-25 16:54:42.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:44 np0005535469 nova_compute[254092]: 2025-11-25 16:54:44.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:44 np0005535469 nova_compute[254092]: 2025-11-25 16:54:44.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:44.849 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:54:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:44.850 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:54:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:47 np0005535469 nova_compute[254092]: 2025-11-25 16:54:47.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:49 np0005535469 nova_compute[254092]: 2025-11-25 16:54:49.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:49.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:54:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.752 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:54:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.754 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:54:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.754 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:54:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.756 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9240022e-fbb3-48b7-8147-18e68c936df6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:54:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:52 np0005535469 nova_compute[254092]: 2025-11-25 16:54:52.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:54:54 np0005535469 nova_compute[254092]: 2025-11-25 16:54:54.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1185465127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:54:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1185465127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:54:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.911 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:54:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.912 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:54:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.913 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:54:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfb75c9-2f66-456c-bff1-7791094fd411]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:54:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 1.3 KiB/s wr, 11 op/s
Nov 25 11:54:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 25 11:54:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 25 11:54:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 25 11:54:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:54:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.299 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:54:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.300 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:54:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.301 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:54:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9680b3d5-fd95-4db9-a570-f33fc501e7d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:54:57 np0005535469 nova_compute[254092]: 2025-11-25 16:54:57.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.6 KiB/s wr, 13 op/s
Nov 25 11:54:58 np0005535469 podman[367737]: 2025-11-25 16:54:58.652693091 +0000 UTC m=+0.067600502 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:54:58 np0005535469 podman[367738]: 2025-11-25 16:54:58.664387229 +0000 UTC m=+0.079252319 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 25 11:54:58 np0005535469 podman[367739]: 2025-11-25 16:54:58.712604553 +0000 UTC m=+0.125072548 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:54:59 np0005535469 nova_compute[254092]: 2025-11-25 16:54:59.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:54:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.2 KiB/s wr, 43 op/s
Nov 25 11:55:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.759 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.760 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:55:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.761 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:55:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.761 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00390be1-5b75-40d1-94fb-c771e7fe828d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:55:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 25 11:55:02 np0005535469 nova_compute[254092]: 2025-11-25 16:55:02.381 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.066 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.068 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:55:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.070 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:55:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.071 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45d54016-b7eb-40a8-b1c8-250dc29406ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:55:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 36 op/s
Nov 25 11:55:04 np0005535469 nova_compute[254092]: 2025-11-25 16:55:04.051 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 11:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 25 11:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 25 11:55:06 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 25 11:55:07 np0005535469 nova_compute[254092]: 2025-11-25 16:55:07.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 11:55:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.917 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.919 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:55:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.920 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:55:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.920 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cd26d9-add9-432f-a5ee-df2ccbd5f2c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:55:09 np0005535469 nova_compute[254092]: 2025-11-25 16:55:09.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 511 B/s wr, 5 op/s
Nov 25 11:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:55:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.596 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.597 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:55:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.598 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:55:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.599 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2886b0ac-b846-4c2f-b418-4aa701c1e5f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
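The agent lines above repeat the same cycle for port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 while its neutron:revision_number climbs (10 → 11 → 14 → 15), each time ending in "No valid VIF ports were found". When triaging such a log, it can help to reduce every "Matched UPDATE" record to a (logical_port, revision) pair. The sketch below is a hypothetical helper, not part of the agent; its regexes key only on the `logical_port=` and `'neutron:revision_number'` fields visible in the lines above, and the abridged sample string is constructed for illustration.

```python
import re

# Abridged stand-in for one of the agent's "Matched UPDATE" lines; only the
# two fields the regexes need are kept (real lines carry the full row dump).
sample = ("Matched UPDATE: PortBindingUpdatedEvent(...) to row=Port_Binding("
          "external_ids={'neutron:revision_number': '11'}, "
          "logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0)")

PORT_RE = re.compile(r"logical_port=([0-9a-f-]+)")
# .search() returns the first match, i.e. the *new* row's revision; the
# old=Port_Binding(...) tail also contains one, which we deliberately skip.
REV_RE = re.compile(r"'neutron:revision_number': '(\d+)'")

def port_revision(line):
    """Extract (logical_port, revision) from a matched-update log line,
    or None when the line is not a Port_Binding update."""
    port, rev = PORT_RE.search(line), REV_RE.search(line)
    if port and rev:
        return port.group(1), int(rev.group(1))
    return None

print(port_revision(sample))
```

Feeding every agent line through `port_revision` and diffing consecutive revisions for the same port makes the 10 → 15 churn in this window immediately visible.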
Nov 25 11:55:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:12 np0005535469 nova_compute[254092]: 2025-11-25 16:55:12.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:55:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:55:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:55:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:14 np0005535469 nova_compute[254092]: 2025-11-25 16:55:14.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:55:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:55:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.382093846 +0000 UTC m=+0.076774952 container create 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:55:17 np0005535469 nova_compute[254092]: 2025-11-25 16:55:17.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:17 np0005535469 systemd[1]: Started libpod-conmon-08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621.scope.
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.353383744 +0000 UTC m=+0.048064950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.481147014 +0000 UTC m=+0.175828140 container init 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.487655321 +0000 UTC m=+0.182336417 container start 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.490080468 +0000 UTC m=+0.184761574 container attach 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:55:17 np0005535469 vigilant_kepler[368205]: 167 167
Nov 25 11:55:17 np0005535469 systemd[1]: libpod-08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621.scope: Deactivated successfully.
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.495836434 +0000 UTC m=+0.190517550 container died 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:55:17 np0005535469 nova_compute[254092]: 2025-11-25 16:55:17.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.516 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.518 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:55:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.519 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:55:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.520 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f17b06-e8e2-4dc9-a77c-18e835a62565]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:55:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-89b53407ea94f5bf04847df97580844ba9c111570734318de3cd9bed388fbd72-merged.mount: Deactivated successfully.
Nov 25 11:55:17 np0005535469 podman[368189]: 2025-11-25 16:55:17.537346124 +0000 UTC m=+0.232027230 container remove 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:55:17 np0005535469 systemd[1]: libpod-conmon-08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621.scope: Deactivated successfully.
Nov 25 11:55:17 np0005535469 podman[368228]: 2025-11-25 16:55:17.719340671 +0000 UTC m=+0.048666616 container create 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:55:17 np0005535469 systemd[1]: Started libpod-conmon-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope.
Nov 25 11:55:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:17 np0005535469 podman[368228]: 2025-11-25 16:55:17.693449986 +0000 UTC m=+0.022775921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:17 np0005535469 podman[368228]: 2025-11-25 16:55:17.798799296 +0000 UTC m=+0.128125261 container init 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:55:17 np0005535469 podman[368228]: 2025-11-25 16:55:17.809814736 +0000 UTC m=+0.139140681 container start 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:55:17 np0005535469 podman[368228]: 2025-11-25 16:55:17.812777296 +0000 UTC m=+0.142103241 container attach 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 11:55:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:19 np0005535469 nova_compute[254092]: 2025-11-25 16:55:19.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]: [
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:    {
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "available": false,
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "ceph_device": false,
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "lsm_data": {},
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "lvs": [],
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "path": "/dev/sr0",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "rejected_reasons": [
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "Has a FileSystem",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "Insufficient space (<5GB)"
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        ],
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        "sys_api": {
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "actuators": null,
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "device_nodes": "sr0",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "devname": "sr0",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "human_readable_size": "482.00 KB",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "id_bus": "ata",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "model": "QEMU DVD-ROM",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "nr_requests": "2",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "parent": "/dev/sr0",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "partitions": {},
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "path": "/dev/sr0",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "removable": "1",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "rev": "2.5+",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "ro": "0",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "rotational": "1",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "sas_address": "",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "sas_device_handle": "",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "scheduler_mode": "mq-deadline",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "sectors": 0,
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "sectorsize": "2048",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "size": 493568.0,
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "support_discard": "2048",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "type": "disk",
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:            "vendor": "QEMU"
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:        }
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]:    }
Nov 25 11:55:19 np0005535469 optimistic_tharp[368244]: ]
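The JSON emitted by the short-lived `optimistic_tharp` container above is a ceph-volume device inventory: the only block device found, /dev/sr0 (the QEMU DVD-ROM), is reported `"available": false` with explicit `rejected_reasons`, so cephadm has nothing to turn into an OSD on this host. A minimal sketch of reducing such inventory output to one line per device; the record below is abridged from the log to just the fields the helper reads, and the helper itself is hypothetical, not part of cephadm.

```python
import json

# Abridged copy of the inventory record logged above; real output carries a
# full "sys_api" section and an "lvs" list per device.
inventory_json = """
[
    {
        "available": false,
        "ceph_device": false,
        "device_id": "QEMU_DVD-ROM_QM00001",
        "path": "/dev/sr0",
        "rejected_reasons": [
            "Has a FileSystem",
            "Insufficient space (<5GB)"
        ],
        "sys_api": {"human_readable_size": "482.00 KB", "type": "disk"}
    }
]
"""

def summarize_inventory(raw):
    """Return one human-readable line per device: available, or why not."""
    lines = []
    for dev in json.loads(raw):
        if dev["available"]:
            lines.append(f"{dev['path']}: available")
        else:
            reasons = ", ".join(dev["rejected_reasons"]) or "unknown"
            lines.append(f"{dev['path']}: rejected ({reasons})")
    return lines

for line in summarize_inventory(inventory_json):
    print(line)
# → /dev/sr0: rejected (Has a FileSystem, Insufficient space (<5GB))
```

The mon_command lines that follow (config-key set mgr/cephadm/host.compute-0.devices.0) are cephadm persisting exactly this inventory into the monitor's config-key store.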
Nov 25 11:55:19 np0005535469 systemd[1]: libpod-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope: Deactivated successfully.
Nov 25 11:55:19 np0005535469 podman[368228]: 2025-11-25 16:55:19.185758121 +0000 UTC m=+1.515084076 container died 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:55:19 np0005535469 systemd[1]: libpod-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope: Consumed 1.433s CPU time.
Nov 25 11:55:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f-merged.mount: Deactivated successfully.
Nov 25 11:55:19 np0005535469 podman[368228]: 2025-11-25 16:55:19.236907533 +0000 UTC m=+1.566233478 container remove 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:55:19 np0005535469 systemd[1]: libpod-conmon-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope: Deactivated successfully.
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1de6067b-63ba-41c9-9f9a-11e64ced3a9a does not exist
Nov 25 11:55:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b3eba2ae-6b27-4671-b668-226bc0d59093 does not exist
Nov 25 11:55:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f36bbd24-9989-4b14-b17d-a95b95bfcf20 does not exist
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:55:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:55:19 np0005535469 nova_compute[254092]: 2025-11-25 16:55:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.774468354 +0000 UTC m=+0.034646284 container create 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:55:19 np0005535469 systemd[1]: Started libpod-conmon-230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59.scope.
Nov 25 11:55:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.850269289 +0000 UTC m=+0.110447279 container init 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.761007788 +0000 UTC m=+0.021185738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.858522794 +0000 UTC m=+0.118700724 container start 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.861265829 +0000 UTC m=+0.121443779 container attach 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:55:19 np0005535469 strange_bell[370583]: 167 167
Nov 25 11:55:19 np0005535469 systemd[1]: libpod-230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59.scope: Deactivated successfully.
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.863621983 +0000 UTC m=+0.123799903 container died 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:55:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a10fe5e9bea41fd9276fd2671ae64291e7454f0edf981d3d78f051a7dc1f1b71-merged.mount: Deactivated successfully.
Nov 25 11:55:19 np0005535469 podman[370566]: 2025-11-25 16:55:19.915115795 +0000 UTC m=+0.175293765 container remove 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:55:19 np0005535469 systemd[1]: libpod-conmon-230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59.scope: Deactivated successfully.
Nov 25 11:55:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:20 np0005535469 podman[370606]: 2025-11-25 16:55:20.083550212 +0000 UTC m=+0.040313838 container create c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:55:20 np0005535469 systemd[1]: Started libpod-conmon-c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f.scope.
Nov 25 11:55:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:20 np0005535469 podman[370606]: 2025-11-25 16:55:20.154418223 +0000 UTC m=+0.111181869 container init c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:55:20 np0005535469 podman[370606]: 2025-11-25 16:55:20.066683233 +0000 UTC m=+0.023446879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:20 np0005535469 podman[370606]: 2025-11-25 16:55:20.162456492 +0000 UTC m=+0.119220118 container start c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:55:20 np0005535469 podman[370606]: 2025-11-25 16:55:20.166312377 +0000 UTC m=+0.123076003 container attach c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:55:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:55:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:55:21 np0005535469 angry_mendel[370622]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:55:21 np0005535469 angry_mendel[370622]: --> relative data size: 1.0
Nov 25 11:55:21 np0005535469 angry_mendel[370622]: --> All data devices are unavailable
Nov 25 11:55:21 np0005535469 systemd[1]: libpod-c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f.scope: Deactivated successfully.
Nov 25 11:55:21 np0005535469 podman[370606]: 2025-11-25 16:55:21.155677813 +0000 UTC m=+1.112441449 container died c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:55:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543-merged.mount: Deactivated successfully.
Nov 25 11:55:21 np0005535469 podman[370606]: 2025-11-25 16:55:21.227757016 +0000 UTC m=+1.184520642 container remove c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:55:21 np0005535469 systemd[1]: libpod-conmon-c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f.scope: Deactivated successfully.
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.82121325 +0000 UTC m=+0.034070540 container create 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:55:21 np0005535469 systemd[1]: Started libpod-conmon-6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9.scope.
Nov 25 11:55:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.897019734 +0000 UTC m=+0.109877044 container init 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.807829625 +0000 UTC m=+0.020686915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.905335481 +0000 UTC m=+0.118192781 container start 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:55:21 np0005535469 hopeful_varahamihira[370840]: 167 167
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.909560975 +0000 UTC m=+0.122418255 container attach 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 11:55:21 np0005535469 systemd[1]: libpod-6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9.scope: Deactivated successfully.
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.911109288 +0000 UTC m=+0.123966578 container died 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:55:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:55:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143071527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:55:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2933fc27f4c075c56a56a99b4c8321ab9d979bf91a45d90084d4d7accc04920a-merged.mount: Deactivated successfully.
Nov 25 11:55:21 np0005535469 podman[370823]: 2025-11-25 16:55:21.940852638 +0000 UTC m=+0.153709928 container remove 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:55:21 np0005535469 nova_compute[254092]: 2025-11-25 16:55:21.941 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:55:21 np0005535469 systemd[1]: libpod-conmon-6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9.scope: Deactivated successfully.
Nov 25 11:55:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.089 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.090 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.091 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.091 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.111245439 +0000 UTC m=+0.053327914 container create f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:55:22 np0005535469 systemd[1]: Started libpod-conmon-f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0.scope.
Nov 25 11:55:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.082864166 +0000 UTC m=+0.024946741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.182015716 +0000 UTC m=+0.124098191 container init f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.191460954 +0000 UTC m=+0.133543429 container start f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.194178278 +0000 UTC m=+0.136260753 container attach f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.340 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.355 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.355 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.370 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.393 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.409 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:55:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:55:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108859968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.860 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.867 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.883 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.885 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:55:22 np0005535469 nova_compute[254092]: 2025-11-25 16:55:22.885 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]: {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:    "0": [
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:        {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "devices": [
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "/dev/loop3"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            ],
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_name": "ceph_lv0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_size": "21470642176",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "name": "ceph_lv0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "tags": {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cluster_name": "ceph",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.crush_device_class": "",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.encrypted": "0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osd_id": "0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.type": "block",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.vdo": "0"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            },
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "type": "block",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "vg_name": "ceph_vg0"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:        }
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:    ],
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:    "1": [
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:        {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "devices": [
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "/dev/loop4"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            ],
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_name": "ceph_lv1",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_size": "21470642176",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "name": "ceph_lv1",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "tags": {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cluster_name": "ceph",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.crush_device_class": "",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.encrypted": "0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osd_id": "1",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.type": "block",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.vdo": "0"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            },
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "type": "block",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "vg_name": "ceph_vg1"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:        }
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:    ],
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:    "2": [
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:        {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "devices": [
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "/dev/loop5"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            ],
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_name": "ceph_lv2",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_size": "21470642176",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "name": "ceph_lv2",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "tags": {
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.cluster_name": "ceph",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.crush_device_class": "",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.encrypted": "0",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osd_id": "2",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.type": "block",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:                "ceph.vdo": "0"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            },
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "type": "block",
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:            "vg_name": "ceph_vg2"
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:        }
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]:    ]
Nov 25 11:55:22 np0005535469 cool_kapitsa[370882]: }
Nov 25 11:55:22 np0005535469 systemd[1]: libpod-f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0.scope: Deactivated successfully.
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.933801792 +0000 UTC m=+0.875884287 container died f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:55:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2-merged.mount: Deactivated successfully.
Nov 25 11:55:22 np0005535469 podman[370866]: 2025-11-25 16:55:22.988972375 +0000 UTC m=+0.931054850 container remove f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:55:22 np0005535469 systemd[1]: libpod-conmon-f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0.scope: Deactivated successfully.
Nov 25 11:55:23 np0005535469 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.565071055 +0000 UTC m=+0.033996657 container create 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 25 11:55:23 np0005535469 systemd[1]: Started libpod-conmon-6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d.scope.
Nov 25 11:55:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.641858167 +0000 UTC m=+0.110783819 container init 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.648299152 +0000 UTC m=+0.117224754 container start 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.551126635 +0000 UTC m=+0.020052257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.651432727 +0000 UTC m=+0.120358359 container attach 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 11:55:23 np0005535469 zen_kapitsa[371083]: 167 167
Nov 25 11:55:23 np0005535469 systemd[1]: libpod-6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d.scope: Deactivated successfully.
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.653958976 +0000 UTC m=+0.122884598 container died 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:55:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-896a0d1b5eb7a7d060252490e73fce174449c35a793e5da397c3f87f354793df-merged.mount: Deactivated successfully.
Nov 25 11:55:23 np0005535469 podman[371067]: 2025-11-25 16:55:23.69046793 +0000 UTC m=+0.159393532 container remove 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:55:23 np0005535469 systemd[1]: libpod-conmon-6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d.scope: Deactivated successfully.
Nov 25 11:55:23 np0005535469 nova_compute[254092]: 2025-11-25 16:55:23.885 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:23 np0005535469 nova_compute[254092]: 2025-11-25 16:55:23.887 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:23 np0005535469 nova_compute[254092]: 2025-11-25 16:55:23.887 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:23 np0005535469 podman[371106]: 2025-11-25 16:55:23.891971238 +0000 UTC m=+0.046176899 container create cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 11:55:23 np0005535469 systemd[1]: Started libpod-conmon-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope.
Nov 25 11:55:23 np0005535469 podman[371106]: 2025-11-25 16:55:23.874321448 +0000 UTC m=+0.028527099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:55:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:55:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:55:23 np0005535469 podman[371106]: 2025-11-25 16:55:23.992684011 +0000 UTC m=+0.146889662 container init cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:55:24 np0005535469 podman[371106]: 2025-11-25 16:55:24.000741111 +0000 UTC m=+0.154946742 container start cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:55:24 np0005535469 podman[371106]: 2025-11-25 16:55:24.004364309 +0000 UTC m=+0.158569940 container attach cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:55:24 np0005535469 nova_compute[254092]: 2025-11-25 16:55:24.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:24 np0005535469 nova_compute[254092]: 2025-11-25 16:55:24.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]: {
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "osd_id": 1,
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "type": "bluestore"
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:    },
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "osd_id": 2,
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "type": "bluestore"
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:    },
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "osd_id": 0,
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:        "type": "bluestore"
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]:    }
Nov 25 11:55:24 np0005535469 pedantic_tu[371123]: }
Nov 25 11:55:25 np0005535469 systemd[1]: libpod-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope: Deactivated successfully.
Nov 25 11:55:25 np0005535469 podman[371106]: 2025-11-25 16:55:25.016287859 +0000 UTC m=+1.170493520 container died cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:55:25 np0005535469 systemd[1]: libpod-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope: Consumed 1.019s CPU time.
Nov 25 11:55:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412-merged.mount: Deactivated successfully.
Nov 25 11:55:25 np0005535469 podman[371106]: 2025-11-25 16:55:25.070392943 +0000 UTC m=+1.224598574 container remove cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 11:55:25 np0005535469 systemd[1]: libpod-conmon-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope: Deactivated successfully.
Nov 25 11:55:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:55:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:55:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b0013731-30c9-45da-b127-743732996315 does not exist
Nov 25 11:55:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7d3a2893-9e1e-48ff-b1be-3dc218d1156f does not exist
Nov 25 11:55:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:55:25 np0005535469 nova_compute[254092]: 2025-11-25 16:55:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:25 np0005535469 nova_compute[254092]: 2025-11-25 16:55:25.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:55:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:27 np0005535469 nova_compute[254092]: 2025-11-25 16:55:27.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:29 np0005535469 nova_compute[254092]: 2025-11-25 16:55:29.063 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:29 np0005535469 nova_compute[254092]: 2025-11-25 16:55:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:29 np0005535469 nova_compute[254092]: 2025-11-25 16:55:29.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:55:29 np0005535469 nova_compute[254092]: 2025-11-25 16:55:29.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:55:29 np0005535469 nova_compute[254092]: 2025-11-25 16:55:29.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:55:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:30 np0005535469 podman[371217]: 2025-11-25 16:55:30.116711421 +0000 UTC m=+0.529107581 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:55:30 np0005535469 podman[371216]: 2025-11-25 16:55:30.120825243 +0000 UTC m=+0.533421149 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:55:30 np0005535469 podman[371218]: 2025-11-25 16:55:30.147946552 +0000 UTC m=+0.560586389 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.343854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730343914, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1831, "num_deletes": 254, "total_data_size": 2951670, "memory_usage": 2993696, "flush_reason": "Manual Compaction"}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730363023, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 2867283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43439, "largest_seqno": 45269, "table_properties": {"data_size": 2858854, "index_size": 5179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17520, "raw_average_key_size": 20, "raw_value_size": 2841877, "raw_average_value_size": 3296, "num_data_blocks": 230, "num_entries": 862, "num_filter_entries": 862, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089552, "oldest_key_time": 1764089552, "file_creation_time": 1764089730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 19196 microseconds, and 6793 cpu microseconds.
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.363055) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 2867283 bytes OK
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.363071) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.364736) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.364751) EVENT_LOG_v1 {"time_micros": 1764089730364746, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.364768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2943783, prev total WAL file size 2943783, number of live WAL files 2.
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.365603) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(2800KB)], [98(7480KB)]
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730365667, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10527477, "oldest_snapshot_seqno": -1}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6685 keys, 8893735 bytes, temperature: kUnknown
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730417409, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8893735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8849247, "index_size": 26648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 174120, "raw_average_key_size": 26, "raw_value_size": 8729614, "raw_average_value_size": 1305, "num_data_blocks": 1043, "num_entries": 6685, "num_filter_entries": 6685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.417712) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8893735 bytes
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.419129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 171.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7207, records dropped: 522 output_compression: NoCompression
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.419149) EVENT_LOG_v1 {"time_micros": 1764089730419140, "job": 58, "event": "compaction_finished", "compaction_time_micros": 51823, "compaction_time_cpu_micros": 19602, "output_level": 6, "num_output_files": 1, "total_output_size": 8893735, "num_input_records": 7207, "num_output_records": 6685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730420008, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730421936, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.365529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:55:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:55:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:32 np0005535469 nova_compute[254092]: 2025-11-25 16:55:32.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:32 np0005535469 nova_compute[254092]: 2025-11-25 16:55:32.506 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:55:34 np0005535469 nova_compute[254092]: 2025-11-25 16:55:34.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:37 np0005535469 nova_compute[254092]: 2025-11-25 16:55:37.397 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:39 np0005535469 nova_compute[254092]: 2025-11-25 16:55:39.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:55:40
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'vms']
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:55:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:42 np0005535469 nova_compute[254092]: 2025-11-25 16:55:42.400 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:44 np0005535469 nova_compute[254092]: 2025-11-25 16:55:44.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:45.258 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:45 np0005535469 nova_compute[254092]: 2025-11-25 16:55:45.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:45.259 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:55:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:47 np0005535469 nova_compute[254092]: 2025-11-25 16:55:47.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:49 np0005535469 nova_compute[254092]: 2025-11-25 16:55:49.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:51.260 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:55:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:55:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:52 np0005535469 nova_compute[254092]: 2025-11-25 16:55:52.463 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:54 np0005535469 nova_compute[254092]: 2025-11-25 16:55:54.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888755253' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888755253' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:55:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.319 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8:0:1:f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:55:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.320 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated#033[00m
Nov 25 11:55:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.320 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:55:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.321 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[901e4296-5ad1-478c-b48b-55d890fb0d1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:55:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:55:57 np0005535469 nova_compute[254092]: 2025-11-25 16:55:57.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:55:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:55:59 np0005535469 nova_compute[254092]: 2025-11-25 16:55:59.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:00 np0005535469 podman[371281]: 2025-11-25 16:56:00.662283538 +0000 UTC m=+0.071270932 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 25 11:56:00 np0005535469 podman[371280]: 2025-11-25 16:56:00.665954928 +0000 UTC m=+0.084570414 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Nov 25 11:56:00 np0005535469 podman[371282]: 2025-11-25 16:56:00.685414448 +0000 UTC m=+0.095916803 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:56:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 11:56:02 np0005535469 nova_compute[254092]: 2025-11-25 16:56:02.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 11:56:04 np0005535469 nova_compute[254092]: 2025-11-25 16:56:04.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 11:56:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:07 np0005535469 nova_compute[254092]: 2025-11-25 16:56:07.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 11:56:09 np0005535469 nova_compute[254092]: 2025-11-25 16:56:09.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:56:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 11:56:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 11:56:12 np0005535469 nova_compute[254092]: 2025-11-25 16:56:12.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:56:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:56:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:56:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:56:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:56:13.633 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:56:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 11:56:14 np0005535469 nova_compute[254092]: 2025-11-25 16:56:14.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 11:56:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:17 np0005535469 nova_compute[254092]: 2025-11-25 16:56:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:56:17 np0005535469 nova_compute[254092]: 2025-11-25 16:56:17.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:19 np0005535469 nova_compute[254092]: 2025-11-25 16:56:19.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:21 np0005535469 nova_compute[254092]: 2025-11-25 16:56:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:56:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:56:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:56:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127705648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:56:22 np0005535469 nova_compute[254092]: 2025-11-25 16:56:22.936 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.077 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.078 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3866MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.078 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.079 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.128 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.129 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.145 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:56:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:56:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930245941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.548 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.553 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.571 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.572 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:56:23 np0005535469 nova_compute[254092]: 2025-11-25 16:56:23.573 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:56:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:24 np0005535469 nova_compute[254092]: 2025-11-25 16:56:24.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:56:25 np0005535469 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:56:25 np0005535469 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:56:25 np0005535469 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:56:25 np0005535469 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:56:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c8099620-b715-4b49-ae09-e2246fad99b4 does not exist
Nov 25 11:56:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 16d1d671-ca2a-4b40-92d7-30573412b72e does not exist
Nov 25 11:56:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev faed2e19-dc3e-4342-972f-38115209b21c does not exist
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:56:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:26 np0005535469 nova_compute[254092]: 2025-11-25 16:56:26.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:56:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:26 np0005535469 podman[371659]: 2025-11-25 16:56:26.666794181 +0000 UTC m=+0.045499080 container create 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:56:26 np0005535469 systemd[1]: Started libpod-conmon-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope.
Nov 25 11:56:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:56:26 np0005535469 podman[371659]: 2025-11-25 16:56:26.646443617 +0000 UTC m=+0.025148556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:56:26 np0005535469 podman[371659]: 2025-11-25 16:56:26.754558602 +0000 UTC m=+0.133263511 container init 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:56:26 np0005535469 podman[371659]: 2025-11-25 16:56:26.763200608 +0000 UTC m=+0.141905507 container start 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:56:26 np0005535469 podman[371659]: 2025-11-25 16:56:26.766699083 +0000 UTC m=+0.145403992 container attach 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:56:26 np0005535469 frosty_kowalevski[371676]: 167 167
Nov 25 11:56:26 np0005535469 systemd[1]: libpod-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope: Deactivated successfully.
Nov 25 11:56:26 np0005535469 conmon[371676]: conmon 65b9f7224cc78319febc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope/container/memory.events
Nov 25 11:56:26 np0005535469 podman[371681]: 2025-11-25 16:56:26.810831175 +0000 UTC m=+0.027805218 container died 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:56:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3e79dc638523f0f2876dd386516e4bbfd9e20a9d394ca63dfc33cfe295e2105d-merged.mount: Deactivated successfully.
Nov 25 11:56:26 np0005535469 podman[371681]: 2025-11-25 16:56:26.862832921 +0000 UTC m=+0.079806924 container remove 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:56:26 np0005535469 systemd[1]: libpod-conmon-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope: Deactivated successfully.
Nov 25 11:56:27 np0005535469 podman[371704]: 2025-11-25 16:56:27.074531727 +0000 UTC m=+0.039537978 container create e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 11:56:27 np0005535469 systemd[1]: Started libpod-conmon-e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597.scope.
Nov 25 11:56:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:56:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:27 np0005535469 podman[371704]: 2025-11-25 16:56:27.05777214 +0000 UTC m=+0.022778421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:56:27 np0005535469 podman[371704]: 2025-11-25 16:56:27.159200163 +0000 UTC m=+0.124206434 container init e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:56:27 np0005535469 podman[371704]: 2025-11-25 16:56:27.171783256 +0000 UTC m=+0.136789507 container start e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 11:56:27 np0005535469 podman[371704]: 2025-11-25 16:56:27.174992223 +0000 UTC m=+0.139998474 container attach e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:56:27 np0005535469 nova_compute[254092]: 2025-11-25 16:56:27.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:56:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:29 np0005535469 nova_compute[254092]: 2025-11-25 16:56:29.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:56:29 np0005535469 hardcore_keldysh[371720]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:56:29 np0005535469 hardcore_keldysh[371720]: --> relative data size: 1.0
Nov 25 11:56:29 np0005535469 hardcore_keldysh[371720]: --> All data devices are unavailable
Nov 25 11:56:29 np0005535469 systemd[1]: libpod-e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597.scope: Deactivated successfully.
Nov 25 11:56:29 np0005535469 podman[371749]: 2025-11-25 16:56:29.432694383 +0000 UTC m=+0.028159147 container died e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 11:56:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c-merged.mount: Deactivated successfully.
Nov 25 11:56:29 np0005535469 podman[371749]: 2025-11-25 16:56:29.481102682 +0000 UTC m=+0.076567426 container remove e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:56:29 np0005535469 systemd[1]: libpod-conmon-e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597.scope: Deactivated successfully.
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.115963053 +0000 UTC m=+0.036885056 container create 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:56:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:30 np0005535469 systemd[1]: Started libpod-conmon-30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075.scope.
Nov 25 11:56:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.193234227 +0000 UTC m=+0.114156280 container init 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.101496788 +0000 UTC m=+0.022418811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.199092716 +0000 UTC m=+0.120014719 container start 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.20179203 +0000 UTC m=+0.122714063 container attach 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:56:30 np0005535469 condescending_stonebraker[371922]: 167 167
Nov 25 11:56:30 np0005535469 systemd[1]: libpod-30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075.scope: Deactivated successfully.
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.204540445 +0000 UTC m=+0.125462448 container died 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 11:56:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-911cef95f7d0214a09f5b16a8f9a23e5d77d5098653c19d96ee1919c701b3f1f-merged.mount: Deactivated successfully.
Nov 25 11:56:30 np0005535469 podman[371905]: 2025-11-25 16:56:30.237936015 +0000 UTC m=+0.158858028 container remove 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:56:30 np0005535469 systemd[1]: libpod-conmon-30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075.scope: Deactivated successfully.
Nov 25 11:56:30 np0005535469 podman[371945]: 2025-11-25 16:56:30.391487227 +0000 UTC m=+0.039515127 container create 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:56:30 np0005535469 systemd[1]: Started libpod-conmon-3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b.scope.
Nov 25 11:56:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:56:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:30 np0005535469 podman[371945]: 2025-11-25 16:56:30.45952976 +0000 UTC m=+0.107557670 container init 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:56:30 np0005535469 podman[371945]: 2025-11-25 16:56:30.465880902 +0000 UTC m=+0.113908802 container start 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:56:30 np0005535469 podman[371945]: 2025-11-25 16:56:30.468516714 +0000 UTC m=+0.116544634 container attach 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:56:30 np0005535469 podman[371945]: 2025-11-25 16:56:30.3758445 +0000 UTC m=+0.023872420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:56:30 np0005535469 nova_compute[254092]: 2025-11-25 16:56:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:56:30 np0005535469 nova_compute[254092]: 2025-11-25 16:56:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:56:30 np0005535469 nova_compute[254092]: 2025-11-25 16:56:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:56:30 np0005535469 nova_compute[254092]: 2025-11-25 16:56:30.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]: {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:    "0": [
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:        {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "devices": [
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "/dev/loop3"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            ],
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_name": "ceph_lv0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_size": "21470642176",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "name": "ceph_lv0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "tags": {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cluster_name": "ceph",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.crush_device_class": "",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.encrypted": "0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osd_id": "0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.type": "block",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.vdo": "0"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            },
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "type": "block",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "vg_name": "ceph_vg0"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:        }
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:    ],
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:    "1": [
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:        {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "devices": [
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "/dev/loop4"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            ],
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_name": "ceph_lv1",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_size": "21470642176",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "name": "ceph_lv1",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "tags": {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cluster_name": "ceph",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.crush_device_class": "",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.encrypted": "0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osd_id": "1",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.type": "block",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.vdo": "0"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            },
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "type": "block",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "vg_name": "ceph_vg1"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:        }
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:    ],
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:    "2": [
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:        {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "devices": [
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "/dev/loop5"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            ],
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_name": "ceph_lv2",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_size": "21470642176",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "name": "ceph_lv2",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "tags": {
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.cluster_name": "ceph",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.crush_device_class": "",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.encrypted": "0",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osd_id": "2",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.type": "block",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:                "ceph.vdo": "0"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            },
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "type": "block",
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:            "vg_name": "ceph_vg2"
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:        }
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]:    ]
Nov 25 11:56:31 np0005535469 jovial_noyce[371962]: }
Nov 25 11:56:31 np0005535469 systemd[1]: libpod-3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b.scope: Deactivated successfully.
Nov 25 11:56:31 np0005535469 podman[371945]: 2025-11-25 16:56:31.21479786 +0000 UTC m=+0.862825760 container died 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:56:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9-merged.mount: Deactivated successfully.
Nov 25 11:56:31 np0005535469 podman[371945]: 2025-11-25 16:56:31.273874039 +0000 UTC m=+0.921901939 container remove 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:56:31 np0005535469 systemd[1]: libpod-conmon-3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b.scope: Deactivated successfully.
Nov 25 11:56:31 np0005535469 podman[371979]: 2025-11-25 16:56:31.313719694 +0000 UTC m=+0.068153777 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 11:56:31 np0005535469 podman[371972]: 2025-11-25 16:56:31.317071696 +0000 UTC m=+0.068752103 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:56:31 np0005535469 podman[371980]: 2025-11-25 16:56:31.383503445 +0000 UTC m=+0.135091070 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:56:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.844267575 +0000 UTC m=+0.039238810 container create 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:56:31 np0005535469 systemd[1]: Started libpod-conmon-4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704.scope.
Nov 25 11:56:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.912683278 +0000 UTC m=+0.107654553 container init 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.919677948 +0000 UTC m=+0.114649183 container start 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.829443231 +0000 UTC m=+0.024414496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.922819654 +0000 UTC m=+0.117790919 container attach 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:56:31 np0005535469 relaxed_dhawan[372201]: 167 167
Nov 25 11:56:31 np0005535469 systemd[1]: libpod-4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704.scope: Deactivated successfully.
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.924956442 +0000 UTC m=+0.119927687 container died 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 11:56:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8fab899b22e7fd5c57e2dbd6541d004f2b1f7176c63bdfcb5780bf4a7a35106b-merged.mount: Deactivated successfully.
Nov 25 11:56:31 np0005535469 podman[372185]: 2025-11-25 16:56:31.959022769 +0000 UTC m=+0.153994014 container remove 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:56:31 np0005535469 systemd[1]: libpod-conmon-4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704.scope: Deactivated successfully.
Nov 25 11:56:32 np0005535469 podman[372223]: 2025-11-25 16:56:32.103984538 +0000 UTC m=+0.042132399 container create 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:56:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:32 np0005535469 systemd[1]: Started libpod-conmon-5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d.scope.
Nov 25 11:56:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:56:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:56:32 np0005535469 podman[372223]: 2025-11-25 16:56:32.176458062 +0000 UTC m=+0.114605913 container init 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 11:56:32 np0005535469 podman[372223]: 2025-11-25 16:56:32.082871003 +0000 UTC m=+0.021018904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:56:32 np0005535469 podman[372223]: 2025-11-25 16:56:32.183383631 +0000 UTC m=+0.121531482 container start 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:56:32 np0005535469 podman[372223]: 2025-11-25 16:56:32.186415343 +0000 UTC m=+0.124563194 container attach 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:56:32 np0005535469 nova_compute[254092]: 2025-11-25 16:56:32.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]: {
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "osd_id": 1,
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "type": "bluestore"
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:    },
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "osd_id": 2,
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "type": "bluestore"
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:    },
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "osd_id": 0,
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:        "type": "bluestore"
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]:    }
Nov 25 11:56:33 np0005535469 thirsty_shockley[372239]: }
Nov 25 11:56:33 np0005535469 systemd[1]: libpod-5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d.scope: Deactivated successfully.
Nov 25 11:56:33 np0005535469 podman[372223]: 2025-11-25 16:56:33.081389198 +0000 UTC m=+1.019537049 container died 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 11:56:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23-merged.mount: Deactivated successfully.
Nov 25 11:56:33 np0005535469 podman[372223]: 2025-11-25 16:56:33.131342829 +0000 UTC m=+1.069490680 container remove 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 11:56:33 np0005535469 systemd[1]: libpod-conmon-5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d.scope: Deactivated successfully.
Nov 25 11:56:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:56:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:56:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:56:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:56:33 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e9800288-981b-4947-9bfe-7eab647c1666 does not exist
Nov 25 11:56:33 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 35619cca-792c-4d15-bb83-d7c174e53912 does not exist
Nov 25 11:56:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:34 np0005535469 nova_compute[254092]: 2025-11-25 16:56:34.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:56:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:56:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:37 np0005535469 nova_compute[254092]: 2025-11-25 16:56:37.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:39 np0005535469 nova_compute[254092]: 2025-11-25 16:56:39.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:56:40
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups']
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:56:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:42 np0005535469 nova_compute[254092]: 2025-11-25 16:56:42.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:44 np0005535469 nova_compute[254092]: 2025-11-25 16:56:44.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:56:45.461 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:56:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:56:45.462 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:56:45 np0005535469 nova_compute[254092]: 2025-11-25 16:56:45.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:47 np0005535469 nova_compute[254092]: 2025-11-25 16:56:47.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:49 np0005535469 nova_compute[254092]: 2025-11-25 16:56:49.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:56:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:56:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:56:51.463 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:56:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:52 np0005535469 nova_compute[254092]: 2025-11-25 16:56:52.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:54 np0005535469 nova_compute[254092]: 2025-11-25 16:56:54.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:56:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/78123321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:56:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/78123321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:56:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:56:57 np0005535469 nova_compute[254092]: 2025-11-25 16:56:57.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:56:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:56:59 np0005535469 nova_compute[254092]: 2025-11-25 16:56:59.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:01 np0005535469 podman[372334]: 2025-11-25 16:57:01.716173345 +0000 UTC m=+0.111310773 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 11:57:01 np0005535469 podman[372335]: 2025-11-25 16:57:01.732202302 +0000 UTC m=+0.128365647 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:57:01 np0005535469 podman[372336]: 2025-11-25 16:57:01.73616641 +0000 UTC m=+0.126744483 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 11:57:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:02 np0005535469 nova_compute[254092]: 2025-11-25 16:57:02.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:04 np0005535469 nova_compute[254092]: 2025-11-25 16:57:04.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:07 np0005535469 nova_compute[254092]: 2025-11-25 16:57:07.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:09 np0005535469 nova_compute[254092]: 2025-11-25 16:57:09.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:57:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:12 np0005535469 nova_compute[254092]: 2025-11-25 16:57:12.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:13.633 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:13.633 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:13.634 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:14 np0005535469 nova_compute[254092]: 2025-11-25 16:57:14.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:17 np0005535469 nova_compute[254092]: 2025-11-25 16:57:17.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:19 np0005535469 nova_compute[254092]: 2025-11-25 16:57:19.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:19 np0005535469 nova_compute[254092]: 2025-11-25 16:57:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:22 np0005535469 nova_compute[254092]: 2025-11-25 16:57:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:22 np0005535469 nova_compute[254092]: 2025-11-25 16:57:22.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:57:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571550181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:57:23 np0005535469 nova_compute[254092]: 2025-11-25 16:57:23.971 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.119 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.121 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3872MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.121 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.121 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.188 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.189 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.276 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:57:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318878177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.694 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.699 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.717 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.718 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:57:24 np0005535469 nova_compute[254092]: 2025-11-25 16:57:24.718 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:25 np0005535469 nova_compute[254092]: 2025-11-25 16:57:25.719 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:25 np0005535469 nova_compute[254092]: 2025-11-25 16:57:25.719 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:25 np0005535469 nova_compute[254092]: 2025-11-25 16:57:25.720 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:25 np0005535469 nova_compute[254092]: 2025-11-25 16:57:25.720 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 11:57:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:26 np0005535469 nova_compute[254092]: 2025-11-25 16:57:26.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:27 np0005535469 nova_compute[254092]: 2025-11-25 16:57:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:27 np0005535469 nova_compute[254092]: 2025-11-25 16:57:27.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:29 np0005535469 nova_compute[254092]: 2025-11-25 16:57:29.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:31 np0005535469 nova_compute[254092]: 2025-11-25 16:57:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:31 np0005535469 nova_compute[254092]: 2025-11-25 16:57:31.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:57:31 np0005535469 nova_compute[254092]: 2025-11-25 16:57:31.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:57:31 np0005535469 nova_compute[254092]: 2025-11-25 16:57:31.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:57:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:32 np0005535469 podman[372440]: 2025-11-25 16:57:32.692169792 +0000 UTC m=+0.099354776 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:57:32 np0005535469 podman[372439]: 2025-11-25 16:57:32.692289525 +0000 UTC m=+0.093938838 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Nov 25 11:57:32 np0005535469 podman[372441]: 2025-11-25 16:57:32.717751349 +0000 UTC m=+0.123279599 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:57:32 np0005535469 nova_compute[254092]: 2025-11-25 16:57:32.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:57:34 np0005535469 nova_compute[254092]: 2025-11-25 16:57:34.231 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:57:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 95eac55f-af9f-4557-9c1f-dfd5da1d708e does not exist
Nov 25 11:57:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9995f906-4427-4961-bb2c-305ad104d0d4 does not exist
Nov 25 11:57:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1fefd49b-82fa-48f4-8cf0-d58869d88069 does not exist
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:57:34 np0005535469 nova_compute[254092]: 2025-11-25 16:57:34.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:57:34 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:57:34 np0005535469 podman[372777]: 2025-11-25 16:57:34.92586943 +0000 UTC m=+0.075718215 container create ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:57:34 np0005535469 systemd[1]: Started libpod-conmon-ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6.scope.
Nov 25 11:57:34 np0005535469 podman[372777]: 2025-11-25 16:57:34.88658913 +0000 UTC m=+0.036437995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:57:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:35 np0005535469 podman[372777]: 2025-11-25 16:57:35.051041648 +0000 UTC m=+0.200890463 container init ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:57:35 np0005535469 podman[372777]: 2025-11-25 16:57:35.057842844 +0000 UTC m=+0.207691629 container start ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 11:57:35 np0005535469 zealous_shamir[372793]: 167 167
Nov 25 11:57:35 np0005535469 systemd[1]: libpod-ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6.scope: Deactivated successfully.
Nov 25 11:57:35 np0005535469 podman[372777]: 2025-11-25 16:57:35.064491115 +0000 UTC m=+0.214339900 container attach ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 25 11:57:35 np0005535469 podman[372777]: 2025-11-25 16:57:35.065600764 +0000 UTC m=+0.215449579 container died ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:57:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ae318328ba4c744bc5827feaa263b5166d900d7584e9879ca182d674d7cb22cd-merged.mount: Deactivated successfully.
Nov 25 11:57:35 np0005535469 podman[372777]: 2025-11-25 16:57:35.279342806 +0000 UTC m=+0.429191601 container remove ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:57:35 np0005535469 systemd[1]: libpod-conmon-ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6.scope: Deactivated successfully.
Nov 25 11:57:35 np0005535469 podman[372819]: 2025-11-25 16:57:35.474586883 +0000 UTC m=+0.077668765 container create 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:57:35 np0005535469 podman[372819]: 2025-11-25 16:57:35.423513362 +0000 UTC m=+0.026595354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:57:35 np0005535469 systemd[1]: Started libpod-conmon-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope.
Nov 25 11:57:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:35 np0005535469 podman[372819]: 2025-11-25 16:57:35.641097879 +0000 UTC m=+0.244179841 container init 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:57:35 np0005535469 podman[372819]: 2025-11-25 16:57:35.655681036 +0000 UTC m=+0.258762918 container start 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:57:35 np0005535469 podman[372819]: 2025-11-25 16:57:35.6628094 +0000 UTC m=+0.265891362 container attach 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:57:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.541 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.543 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.595 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:57:36 np0005535469 condescending_jennings[372836]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:57:36 np0005535469 condescending_jennings[372836]: --> relative data size: 1.0
Nov 25 11:57:36 np0005535469 condescending_jennings[372836]: --> All data devices are unavailable
Nov 25 11:57:36 np0005535469 systemd[1]: libpod-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope: Deactivated successfully.
Nov 25 11:57:36 np0005535469 podman[372819]: 2025-11-25 16:57:36.712424697 +0000 UTC m=+1.315506599 container died 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 11:57:36 np0005535469 systemd[1]: libpod-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope: Consumed 1.003s CPU time.
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.719 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.721 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.731 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.732 254096 INFO nova.compute.claims [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:57:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be-merged.mount: Deactivated successfully.
Nov 25 11:57:36 np0005535469 nova_compute[254092]: 2025-11-25 16:57:36.897 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:36 np0005535469 podman[372819]: 2025-11-25 16:57:36.920928766 +0000 UTC m=+1.524010668 container remove 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 11:57:36 np0005535469 systemd[1]: libpod-conmon-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope: Deactivated successfully.
Nov 25 11:57:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:57:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505803417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.345 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.355 254096 DEBUG nova.compute.provider_tree [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.371 254096 DEBUG nova.scheduler.client.report [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.403 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.406 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.452 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.453 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.471 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.497 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.583 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.585 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.585 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating image(s)#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.614 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.644 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.680 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.686 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.68774511 +0000 UTC m=+0.057569488 container create 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:57:37 np0005535469 systemd[1]: Started libpod-conmon-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope.
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.740 254096 DEBUG nova.policy [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a830f6b7532459380b24ae0297b12bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0fef68bf8cf647f89586309d548d4bd7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 11:57:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.664398625 +0000 UTC m=+0.034223003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.780332552 +0000 UTC m=+0.150156960 container init 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.787032475 +0000 UTC m=+0.156856853 container start 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.787 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.788 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.790296044 +0000 UTC m=+0.160120422 container attach 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.789 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.790 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:57:37 np0005535469 epic_germain[373112]: 167 167
Nov 25 11:57:37 np0005535469 systemd[1]: libpod-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope: Deactivated successfully.
Nov 25 11:57:37 np0005535469 conmon[373112]: conmon 5500289e6071f0b214a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope/container/memory.events
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.79676679 +0000 UTC m=+0.166591158 container died 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:57:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-add913bfab9dbc5cbf46755a74ff1022e13c357c563ddb35c16bca25a5e27a73-merged.mount: Deactivated successfully.
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.834 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:57:37 np0005535469 podman[373056]: 2025-11-25 16:57:37.841330254 +0000 UTC m=+0.211154612 container remove 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.843 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:57:37 np0005535469 systemd[1]: libpod-conmon-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope: Deactivated successfully.
Nov 25 11:57:37 np0005535469 nova_compute[254092]: 2025-11-25 16:57:37.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:57:38 np0005535469 podman[373170]: 2025-11-25 16:57:38.135254939 +0000 UTC m=+0.126820965 container create 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:57:38 np0005535469 podman[373170]: 2025-11-25 16:57:38.051191109 +0000 UTC m=+0.042757175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:57:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 11:57:38 np0005535469 systemd[1]: Started libpod-conmon-35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2.scope.
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.203 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:57:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:38 np0005535469 podman[373170]: 2025-11-25 16:57:38.264086058 +0000 UTC m=+0.255652154 container init 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 11:57:38 np0005535469 podman[373170]: 2025-11-25 16:57:38.282155381 +0000 UTC m=+0.273721407 container start 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:57:38 np0005535469 podman[373170]: 2025-11-25 16:57:38.288196475 +0000 UTC m=+0.279762551 container attach 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.291 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] resizing rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.408 254096 DEBUG nova.objects.instance [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.425 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.425 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Ensure instance console log exists: /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.426 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.426 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:57:38 np0005535469 nova_compute[254092]: 2025-11-25 16:57:38.427 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:57:39 np0005535469 cool_lewin[373187]: {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:    "0": [
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:        {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "devices": [
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "/dev/loop3"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            ],
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_name": "ceph_lv0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_size": "21470642176",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "name": "ceph_lv0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "tags": {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cluster_name": "ceph",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.crush_device_class": "",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.encrypted": "0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osd_id": "0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.type": "block",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.vdo": "0"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            },
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "type": "block",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "vg_name": "ceph_vg0"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:        }
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:    ],
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:    "1": [
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:        {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "devices": [
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "/dev/loop4"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            ],
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_name": "ceph_lv1",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_size": "21470642176",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "name": "ceph_lv1",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "tags": {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cluster_name": "ceph",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.crush_device_class": "",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.encrypted": "0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osd_id": "1",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.type": "block",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.vdo": "0"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            },
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "type": "block",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "vg_name": "ceph_vg1"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:        }
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:    ],
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:    "2": [
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:        {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "devices": [
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "/dev/loop5"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            ],
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_name": "ceph_lv2",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_size": "21470642176",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "name": "ceph_lv2",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "tags": {
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.cluster_name": "ceph",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.crush_device_class": "",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.encrypted": "0",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osd_id": "2",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.type": "block",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:                "ceph.vdo": "0"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            },
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "type": "block",
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:            "vg_name": "ceph_vg2"
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:        }
Nov 25 11:57:39 np0005535469 cool_lewin[373187]:    ]
Nov 25 11:57:39 np0005535469 cool_lewin[373187]: }
Nov 25 11:57:39 np0005535469 systemd[1]: libpod-35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2.scope: Deactivated successfully.
Nov 25 11:57:39 np0005535469 podman[373170]: 2025-11-25 16:57:39.127236036 +0000 UTC m=+1.118802072 container died 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:57:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce-merged.mount: Deactivated successfully.
Nov 25 11:57:39 np0005535469 podman[373170]: 2025-11-25 16:57:39.203904255 +0000 UTC m=+1.195470301 container remove 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 11:57:39 np0005535469 systemd[1]: libpod-conmon-35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2.scope: Deactivated successfully.
Nov 25 11:57:39 np0005535469 nova_compute[254092]: 2025-11-25 16:57:39.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:57:39 np0005535469 nova_compute[254092]: 2025-11-25 16:57:39.518 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Successfully created port: 75edff1b-5ceb-4f80-befe-e1a5ec106382 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.823140011 +0000 UTC m=+0.051577967 container create 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:57:39 np0005535469 systemd[1]: Started libpod-conmon-8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f.scope.
Nov 25 11:57:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.809278103 +0000 UTC m=+0.037716079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.903957102 +0000 UTC m=+0.132395078 container init 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.91492218 +0000 UTC m=+0.143360136 container start 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.917600663 +0000 UTC m=+0.146038619 container attach 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 11:57:39 np0005535469 tender_ride[373439]: 167 167
Nov 25 11:57:39 np0005535469 systemd[1]: libpod-8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f.scope: Deactivated successfully.
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.923489574 +0000 UTC m=+0.151927530 container died 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:57:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cab6bee6c13118da8a45e84bdfbe2557e37928011c5cd73a8a91cb078013da84-merged.mount: Deactivated successfully.
Nov 25 11:57:39 np0005535469 podman[373422]: 2025-11-25 16:57:39.95680952 +0000 UTC m=+0.185247496 container remove 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 11:57:39 np0005535469 systemd[1]: libpod-conmon-8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f.scope: Deactivated successfully.
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:57:40
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:57:40 np0005535469 podman[373465]: 2025-11-25 16:57:40.137845471 +0000 UTC m=+0.049430787 container create 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 81 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Nov 25 11:57:40 np0005535469 systemd[1]: Started libpod-conmon-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope.
Nov 25 11:57:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:40 np0005535469 podman[373465]: 2025-11-25 16:57:40.11651377 +0000 UTC m=+0.028099176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:57:40 np0005535469 podman[373465]: 2025-11-25 16:57:40.215677251 +0000 UTC m=+0.127262617 container init 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 11:57:40 np0005535469 podman[373465]: 2025-11-25 16:57:40.224545193 +0000 UTC m=+0.136130519 container start 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:57:40 np0005535469 podman[373465]: 2025-11-25 16:57:40.227727119 +0000 UTC m=+0.139312445 container attach 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]: {
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "osd_id": 1,
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "type": "bluestore"
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:    },
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "osd_id": 2,
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "type": "bluestore"
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:    },
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "osd_id": 0,
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:        "type": "bluestore"
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]:    }
Nov 25 11:57:41 np0005535469 nostalgic_merkle[373482]: }
Nov 25 11:57:41 np0005535469 systemd[1]: libpod-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope: Deactivated successfully.
Nov 25 11:57:41 np0005535469 podman[373465]: 2025-11-25 16:57:41.246821416 +0000 UTC m=+1.158406732 container died 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:57:41 np0005535469 systemd[1]: libpod-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope: Consumed 1.026s CPU time.
Nov 25 11:57:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f-merged.mount: Deactivated successfully.
Nov 25 11:57:41 np0005535469 podman[373465]: 2025-11-25 16:57:41.306922822 +0000 UTC m=+1.218508138 container remove 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 11:57:41 np0005535469 systemd[1]: libpod-conmon-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope: Deactivated successfully.
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:57:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11a71731-195e-4d02-ab78-2b5ce9435e65 does not exist
Nov 25 11:57:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 290cbf2d-08b8-46a4-82fb-82f91d11014a does not exist
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.643 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Successfully updated port: 75edff1b-5ceb-4f80-befe-e1a5ec106382 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.654 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.654 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.654 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.809 254096 DEBUG nova.compute.manager [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.809 254096 DEBUG nova.compute.manager [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.809 254096 DEBUG oslo_concurrency.lockutils [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:57:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:57:41 np0005535469 nova_compute[254092]: 2025-11-25 16:57:41.941 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 11:57:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 11:57:42 np0005535469 nova_compute[254092]: 2025-11-25 16:57:42.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.404 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.424 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.424 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance network_info: |[{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.424 254096 DEBUG oslo_concurrency.lockutils [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.425 254096 DEBUG nova.network.neutron [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.427 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start _get_guest_xml network_info=[{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.432 254096 WARNING nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.438 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.438 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.446 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.446 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.446 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.451 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:57:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466815479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.899 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.925 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:57:43 np0005535469 nova_compute[254092]: 2025-11-25 16:57:43.929 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:57:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913018023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.386 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.388 254096 DEBUG nova.virt.libvirt.vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:57:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.388 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.389 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.391 254096 DEBUG nova.objects.instance [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.410 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <uuid>5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</uuid>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <name>instance-0000006e</name>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestShelveInstance-server-1491050401</nova:name>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:57:43</nova:creationTime>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:user uuid="2a830f6b7532459380b24ae0297b12bb">tempest-TestShelveInstance-535396087-project-member</nova:user>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:project uuid="0fef68bf8cf647f89586309d548d4bd7">tempest-TestShelveInstance-535396087</nova:project>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <nova:port uuid="75edff1b-5ceb-4f80-befe-e1a5ec106382">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <entry name="serial">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <entry name="uuid">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:9e:02:f1"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <target dev="tap75edff1b-5c"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log" append="off"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:57:44 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:57:44 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:57:44 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:57:44 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.412 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Preparing to wait for external event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.412 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.413 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.413 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.415 254096 DEBUG nova.virt.libvirt.vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:57:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.415 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.416 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.417 254096 DEBUG os_vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.419 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.420 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.426 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75edff1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.426 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75edff1b-5c, col_values=(('external_ids', {'iface-id': '75edff1b-5ceb-4f80-befe-e1a5ec106382', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:02:f1', 'vm-uuid': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:57:44 np0005535469 NetworkManager[48891]: <info>  [1764089864.4299] manager: (tap75edff1b-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.438 254096 INFO os_vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.510 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.511 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.511 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No VIF found with MAC fa:16:3e:9e:02:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.512 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Using config drive#033[00m
Nov 25 11:57:44 np0005535469 nova_compute[254092]: 2025-11-25 16:57:44.538 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.574 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating config drive at /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.579 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp08ctyvp8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:45.680 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:45.681 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 11:57:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:45.681 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.715 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp08ctyvp8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.740 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.743 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.830 254096 DEBUG nova.network.neutron [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.831 254096 DEBUG nova.network.neutron [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.846 254096 DEBUG oslo_concurrency.lockutils [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.887 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.889 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting local config drive /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config because it was imported into RBD.#033[00m
Nov 25 11:57:45 np0005535469 systemd[1]: Starting libvirt secret daemon...
Nov 25 11:57:45 np0005535469 systemd[1]: Started libvirt secret daemon.
Nov 25 11:57:45 np0005535469 kernel: tap75edff1b-5c: entered promiscuous mode
Nov 25 11:57:45 np0005535469 NetworkManager[48891]: <info>  [1764089865.9879] manager: (tap75edff1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Nov 25 11:57:45 np0005535469 ovn_controller[153477]: 2025-11-25T16:57:45Z|01129|binding|INFO|Claiming lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 for this chassis.
Nov 25 11:57:45 np0005535469 ovn_controller[153477]: 2025-11-25T16:57:45Z|01130|binding|INFO|75edff1b-5ceb-4f80-befe-e1a5ec106382: Claiming fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:45 np0005535469 nova_compute[254092]: 2025-11-25 16:57:45.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.003 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.005 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 bound to our chassis#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.006 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4#033[00m
Nov 25 11:57:46 np0005535469 systemd-udevd[373734]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50a4bb1b-f1d2-4eed-9b8a-de948fce6bcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.019 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c62671a-f1 in ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.024 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c62671a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38a50e05-99ab-490b-b753-58da97fcd70a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac71e15e-aff7-46c0-9c41-36b153f29e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 systemd-machined[216343]: New machine qemu-141-instance-0000006e.
Nov 25 11:57:46 np0005535469 NetworkManager[48891]: <info>  [1764089866.0306] device (tap75edff1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:57:46 np0005535469 NetworkManager[48891]: <info>  [1764089866.0321] device (tap75edff1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.038 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[eb572d03-90ef-4fca-a439-17af921f8816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 systemd[1]: Started Virtual Machine qemu-141-instance-0000006e.
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:57:46Z|01131|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 ovn-installed in OVS
Nov 25 11:57:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:57:46Z|01132|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 up in Southbound
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.060 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e51aec-23a9-4e7e-a542-7c312bdb1801]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.087 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13eafc9b-1c88-45fb-8b85-a144d0718e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.091 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51e54160-abf2-4115-92df-f59ddb59e679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 NetworkManager[48891]: <info>  [1764089866.0928] manager: (tap6c62671a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/460)
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.122 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b01328b2-5327-4210-9edf-b2bbd28bd9ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.125 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[36b6f068-a968-481c-9a50-aecb31c02425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 NetworkManager[48891]: <info>  [1764089866.1497] device (tap6c62671a-f0): carrier: link connected
Nov 25 11:57:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.155 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3789f067-7b10-4d42-9c5b-42e8f583e38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.171 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[506954df-1468-41f0-a57c-10a393cbd1cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632374, 'reachable_time': 22805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373766, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.188 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41458a7f-4aa3-43bf-92d6-577733172e4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c826'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632374, 'tstamp': 632374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373767, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.202 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a383c72-d718-4378-b3b3-f56342e6ed78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632374, 'reachable_time': 22805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373768, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.233 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0905096e-2da5-4be9-be5c-b144421838e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.288 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4d1c3c-80f3-4261-a1e3-2db5e31677f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c62671a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 NetworkManager[48891]: <info>  [1764089866.2919] manager: (tap6c62671a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Nov 25 11:57:46 np0005535469 kernel: tap6c62671a-f0: entered promiscuous mode
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.295 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c62671a-f0, col_values=(('external_ids', {'iface-id': 'f4863bb8-2150-4d2f-9927-dcfc70177f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 ovn_controller[153477]: 2025-11-25T16:57:46Z|01133|binding|INFO|Releasing lport f4863bb8-2150-4d2f-9927-dcfc70177f3b from this chassis (sb_readonly=0)
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.299 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[589ed177-aa4c-4e87-bf80-7bb0c89bfb3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.303 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:57:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.304 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'env', 'PROCESS_TAG=haproxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.340 254096 DEBUG nova.compute.manager [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.340 254096 DEBUG oslo_concurrency.lockutils [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.341 254096 DEBUG oslo_concurrency.lockutils [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.341 254096 DEBUG oslo_concurrency.lockutils [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.341 254096 DEBUG nova.compute.manager [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Processing event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.508 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089866.5080724, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.508 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Started (Lifecycle Event)#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.510 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.514 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.518 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance spawned successfully.#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.519 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.525 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:57:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.529 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.537 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.538 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.539 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.539 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.540 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.541 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.547 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.548 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089866.5111504, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.548 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.571 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.575 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089866.5133576, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.575 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.592 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.595 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.600 254096 INFO nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 9.02 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.600 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.623 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:57:46 np0005535469 podman[373842]: 2025-11-25 16:57:46.635088609 +0000 UTC m=+0.044047770 container create 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.663 254096 INFO nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 9.98 seconds to build instance.#033[00m
Nov 25 11:57:46 np0005535469 systemd[1]: Started libpod-conmon-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754.scope.
Nov 25 11:57:46 np0005535469 nova_compute[254092]: 2025-11-25 16:57:46.677 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:57:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f537bbdf1e418b80ee2344362b8e77ed6b022d25a22d9d91c2f957f20bed9fbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:57:46 np0005535469 podman[373842]: 2025-11-25 16:57:46.613617465 +0000 UTC m=+0.022576646 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:57:46 np0005535469 podman[373842]: 2025-11-25 16:57:46.714729999 +0000 UTC m=+0.123689180 container init 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:57:46 np0005535469 podman[373842]: 2025-11-25 16:57:46.724395001 +0000 UTC m=+0.133354162 container start 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:57:46 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : New worker (373864) forked
Nov 25 11:57:46 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : Loading success.
Nov 25 11:57:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 11:57:48 np0005535469 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG nova.compute.manager [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:57:48 np0005535469 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG oslo_concurrency.lockutils [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:57:48 np0005535469 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG oslo_concurrency.lockutils [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:57:48 np0005535469 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG oslo_concurrency.lockutils [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:57:48 np0005535469 nova_compute[254092]: 2025-11-25 16:57:48.432 254096 DEBUG nova.compute.manager [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:57:48 np0005535469 nova_compute[254092]: 2025-11-25 16:57:48.432 254096 WARNING nova.compute.manager [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state None.#033[00m
Nov 25 11:57:49 np0005535469 nova_compute[254092]: 2025-11-25 16:57:49.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:49 np0005535469 nova_compute[254092]: 2025-11-25 16:57:49.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:49 np0005535469 NetworkManager[48891]: <info>  [1764089869.2853] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Nov 25 11:57:49 np0005535469 NetworkManager[48891]: <info>  [1764089869.2862] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Nov 25 11:57:49 np0005535469 nova_compute[254092]: 2025-11-25 16:57:49.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:49 np0005535469 ovn_controller[153477]: 2025-11-25T16:57:49Z|01134|binding|INFO|Releasing lport f4863bb8-2150-4d2f-9927-dcfc70177f3b from this chassis (sb_readonly=0)
Nov 25 11:57:49 np0005535469 nova_compute[254092]: 2025-11-25 16:57:49.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:49 np0005535469 nova_compute[254092]: 2025-11-25 16:57:49.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 25 11:57:50 np0005535469 nova_compute[254092]: 2025-11-25 16:57:50.189 254096 DEBUG nova.compute.manager [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:57:50 np0005535469 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG nova.compute.manager [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:57:50 np0005535469 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG oslo_concurrency.lockutils [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:57:50 np0005535469 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG oslo_concurrency.lockutils [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:57:50 np0005535469 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG nova.network.neutron [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:57:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:57:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 448 KiB/s wr, 85 op/s
Nov 25 11:57:53 np0005535469 nova_compute[254092]: 2025-11-25 16:57:53.414 254096 DEBUG nova.network.neutron [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:57:53 np0005535469 nova_compute[254092]: 2025-11-25 16:57:53.414 254096 DEBUG nova.network.neutron [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:57:53 np0005535469 nova_compute[254092]: 2025-11-25 16:57:53.851 254096 DEBUG oslo_concurrency.lockutils [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:57:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 11:57:54 np0005535469 nova_compute[254092]: 2025-11-25 16:57:54.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:54 np0005535469 nova_compute[254092]: 2025-11-25 16:57:54.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:57:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1494414632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:57:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1494414632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:57:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 11:57:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:57:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 25 11:57:59 np0005535469 nova_compute[254092]: 2025-11-25 16:57:59.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:57:59 np0005535469 nova_compute[254092]: 2025-11-25 16:57:59.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:57:59 np0005535469 nova_compute[254092]: 2025-11-25 16:57:59.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5014 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 25 11:57:59 np0005535469 nova_compute[254092]: 2025-11-25 16:57:59.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:57:59 np0005535469 nova_compute[254092]: 2025-11-25 16:57:59.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:57:59 np0005535469 nova_compute[254092]: 2025-11-25 16:57:59.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 109 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 25 11:58:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:01Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 11:58:01 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:01Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 11:58:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 112 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Nov 25 11:58:03 np0005535469 podman[373877]: 2025-11-25 16:58:03.653489359 +0000 UTC m=+0.062597726 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:58:03 np0005535469 podman[373878]: 2025-11-25 16:58:03.680372651 +0000 UTC m=+0.086425365 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:58:03 np0005535469 podman[373879]: 2025-11-25 16:58:03.680486804 +0000 UTC m=+0.083226858 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 11:58:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 112 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 51 op/s
Nov 25 11:58:04 np0005535469 nova_compute[254092]: 2025-11-25 16:58:04.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 11:58:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 11:58:09 np0005535469 nova_compute[254092]: 2025-11-25 16:58:09.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:09 np0005535469 nova_compute[254092]: 2025-11-25 16:58:09.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:58:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 11:58:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 108 KiB/s wr, 27 op/s
Nov 25 11:58:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:13.634 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:58:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:58:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:58:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 100 KiB/s wr, 11 op/s
Nov 25 11:58:14 np0005535469 nova_compute[254092]: 2025-11-25 16:58:14.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:58:14 np0005535469 nova_compute[254092]: 2025-11-25 16:58:14.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:58:14 np0005535469 nova_compute[254092]: 2025-11-25 16:58:14.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 25 11:58:14 np0005535469 nova_compute[254092]: 2025-11-25 16:58:14.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:58:14 np0005535469 nova_compute[254092]: 2025-11-25 16:58:14.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:14 np0005535469 nova_compute[254092]: 2025-11-25 16:58:14.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:58:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 100 KiB/s wr, 11 op/s
Nov 25 11:58:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 11:58:19 np0005535469 nova_compute[254092]: 2025-11-25 16:58:19.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 11:58:20 np0005535469 nova_compute[254092]: 2025-11-25 16:58:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 11:58:23 np0005535469 nova_compute[254092]: 2025-11-25 16:58:23.197 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:58:23 np0005535469 nova_compute[254092]: 2025-11-25 16:58:23.199 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:58:23 np0005535469 nova_compute[254092]: 2025-11-25 16:58:23.199 254096 INFO nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Shelving#033[00m
Nov 25 11:58:23 np0005535469 nova_compute[254092]: 2025-11-25 16:58:23.222 254096 DEBUG nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 25 11:58:23 np0005535469 nova_compute[254092]: 2025-11-25 16:58:23.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:24 np0005535469 nova_compute[254092]: 2025-11-25 16:58:24.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 25 11:58:25 np0005535469 kernel: tap75edff1b-5c (unregistering): left promiscuous mode
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:25 np0005535469 NetworkManager[48891]: <info>  [1764089905.4982] device (tap75edff1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:25Z|01135|binding|INFO|Releasing lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 from this chassis (sb_readonly=0)
Nov 25 11:58:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:25Z|01136|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 down in Southbound
Nov 25 11:58:25 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:25Z|01137|binding|INFO|Removing iface tap75edff1b-5c ovn-installed in OVS
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.510 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.516 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.518 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 unbound from our chassis
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.520 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[485647f4-1950-4924-9a78-55bc08c3c145]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.522 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace which is not needed anymore
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.535 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.535 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.535 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.536 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:58:25 np0005535469 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 11:58:25 np0005535469 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Consumed 14.187s CPU time.
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:25 np0005535469 systemd-machined[216343]: Machine qemu-141-instance-0000006e terminated.
Nov 25 11:58:25 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : haproxy version is 2.8.14-c23fe91
Nov 25 11:58:25 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : path to executable is /usr/sbin/haproxy
Nov 25 11:58:25 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [WARNING]  (373862) : Exiting Master process...
Nov 25 11:58:25 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [ALERT]    (373862) : Current worker (373864) exited with code 143 (Terminated)
Nov 25 11:58:25 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [WARNING]  (373862) : All workers exited. Exiting... (0)
Nov 25 11:58:25 np0005535469 systemd[1]: libpod-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754.scope: Deactivated successfully.
Nov 25 11:58:25 np0005535469 podman[373972]: 2025-11-25 16:58:25.68881638 +0000 UTC m=+0.049557211 container died 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:58:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754-userdata-shm.mount: Deactivated successfully.
Nov 25 11:58:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f537bbdf1e418b80ee2344362b8e77ed6b022d25a22d9d91c2f957f20bed9fbc-merged.mount: Deactivated successfully.
Nov 25 11:58:25 np0005535469 podman[373972]: 2025-11-25 16:58:25.732579662 +0000 UTC m=+0.093320433 container cleanup 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:58:25 np0005535469 systemd[1]: libpod-conmon-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754.scope: Deactivated successfully.
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.773 254096 DEBUG nova.compute.manager [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.778 254096 DEBUG oslo_concurrency.lockutils [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.778 254096 DEBUG oslo_concurrency.lockutils [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.778 254096 DEBUG oslo_concurrency.lockutils [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.779 254096 DEBUG nova.compute.manager [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.779 254096 WARNING nova.compute.manager [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state shelving.
Nov 25 11:58:25 np0005535469 podman[374023]: 2025-11-25 16:58:25.819816818 +0000 UTC m=+0.050993880 container remove 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.827 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[23055595-7867-456d-8bea-5f3a18cc60fe]: (4, ('Tue Nov 25 04:58:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754)\n9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754\nTue Nov 25 04:58:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754)\n9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.828 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8982cb-3566-403a-958e-b7cbbc2e381b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.829 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:58:25 np0005535469 kernel: tap6c62671a-f0: left promiscuous mode
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.852 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4befd1-ef41-4bfb-8660-5f805402d733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.869 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c098cc1-ad8f-452b-8c7b-47c9f1af22e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.870 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f706a40d-3c9b-4bf9-afa7-d30c7763a67c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.887 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c064bb62-1498-4e62-985b-238f0a21cd2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632367, 'reachable_time': 21345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374049, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.889 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:58:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.889 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1737fbff-fa18-4ac8-a9db-b78ccfa59809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:58:25 np0005535469 systemd[1]: run-netns-ovnmeta\x2d6c62671a\x2df9ae\x2d4033\x2d9c32\x2db04ccb3e0de4.mount: Deactivated successfully.
Nov 25 11:58:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:58:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080967315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:58:25 np0005535469 nova_compute[254092]: 2025-11-25 16:58:25.997 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 11:58:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 28 KiB/s wr, 4 op/s
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.230 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.231 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3812MB free_disk=59.94274139404297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.232 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.232 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.243 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance shutdown successfully after 3 seconds.
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.250 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.250 254096 DEBUG nova.objects.instance [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.298 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.299 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.299 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.337 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.610 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Beginning cold snapshot process
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.767 254096 DEBUG nova.virt.libvirt.imagebackend [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 11:58:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:58:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793325635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.811 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.817 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.830 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.845 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:26 np0005535469 nova_compute[254092]: 2025-11-25 16:58:26.981 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] creating snapshot(2d7784b1d2864fbcb9bcb6d80a6ef20d) on rbd image(5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 11:58:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 25 11:58:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 25 11:58:27 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.816 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] cloning vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk@2d7784b1d2864fbcb9bcb6d80a6ef20d to images/e39c28b6-49f5-4073-a746-38050d6a700b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.867 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.868 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.868 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.869 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.886 254096 DEBUG nova.compute.manager [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.887 254096 DEBUG oslo_concurrency.lockutils [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.888 254096 DEBUG oslo_concurrency.lockutils [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.889 254096 DEBUG oslo_concurrency.lockutils [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.889 254096 DEBUG nova.compute.manager [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.891 254096 WARNING nova.compute.manager [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 11:58:27 np0005535469 nova_compute[254092]: 2025-11-25 16:58:27.995 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] flattening images/e39c28b6-49f5-4073-a746-38050d6a700b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 11:58:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 33 KiB/s wr, 5 op/s
Nov 25 11:58:28 np0005535469 nova_compute[254092]: 2025-11-25 16:58:28.524 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] removing snapshot(2d7784b1d2864fbcb9bcb6d80a6ef20d) on rbd image(5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 11:58:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 25 11:58:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 25 11:58:28 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 25 11:58:28 np0005535469 nova_compute[254092]: 2025-11-25 16:58:28.894 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] creating snapshot(snap) on rbd image(e39c28b6-49f5-4073-a746-38050d6a700b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 11:58:29 np0005535469 nova_compute[254092]: 2025-11-25 16:58:29.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:29 np0005535469 nova_compute[254092]: 2025-11-25 16:58:29.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 25 11:58:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 25 11:58:29 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 25 11:58:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 163 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.199 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Snapshot image upload complete#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.199 254096 DEBUG nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.250 254096 INFO nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Shelve offloading#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.258 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.259 254096 DEBUG nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.261 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.261 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:58:31 np0005535469 nova_compute[254092]: 2025-11-25 16:58:31.262 254096 DEBUG nova.network.neutron [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 11:58:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Nov 25 11:58:32 np0005535469 nova_compute[254092]: 2025-11-25 16:58:32.744 254096 DEBUG nova.network.neutron [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:58:32 np0005535469 nova_compute[254092]: 2025-11-25 16:58:32.792 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.527 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 11:58:33 np0005535469 nova_compute[254092]: 2025-11-25 16:58:33.527 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:58:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 143 op/s
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.350 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.351 254096 DEBUG nova.objects.instance [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'resources' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.366 254096 DEBUG nova.virt.libvirt.vif [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member',shelved_at='2025-11-25T16:58:31.199734',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e39c28b6-49f5-4073-a746-38050d6a700b'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:58:26Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.367 254096 DEBUG nova.network.os_vif_util [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.368 254096 DEBUG nova.network.os_vif_util [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.369 254096 DEBUG os_vif [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.372 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75edff1b-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.423 254096 INFO os_vif [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.468 254096 DEBUG nova.compute.manager [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.468 254096 DEBUG nova.compute.manager [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.468 254096 DEBUG oslo_concurrency.lockutils [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:34 np0005535469 podman[374235]: 2025-11-25 16:58:34.672929839 +0000 UTC m=+0.082935569 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:58:34 np0005535469 podman[374236]: 2025-11-25 16:58:34.691553227 +0000 UTC m=+0.096287844 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:58:34 np0005535469 podman[374237]: 2025-11-25 16:58:34.736672116 +0000 UTC m=+0.125549231 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.891 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting instance files /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.892 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deletion of /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del complete#033[00m
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.905 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.960 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.960 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.961 254096 DEBUG oslo_concurrency.lockutils [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:58:34 np0005535469 nova_compute[254092]: 2025-11-25 16:58:34.961 254096 DEBUG nova.network.neutron [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.016 254096 INFO nova.scheduler.client.report [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Deleted allocations for instance 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.054 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.055 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.104 254096 DEBUG oslo_concurrency.processutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784194978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.545 254096 DEBUG oslo_concurrency.processutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.552 254096 DEBUG nova.compute.provider_tree [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.572 254096 DEBUG nova.scheduler.client.report [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.607 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:35 np0005535469 nova_compute[254092]: 2025-11-25 16:58:35.662 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 120 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 170 op/s
Nov 25 11:58:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 25 11:58:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 25 11:58:36 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 25 11:58:36 np0005535469 nova_compute[254092]: 2025-11-25 16:58:36.592 254096 DEBUG nova.network.neutron [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:58:36 np0005535469 nova_compute[254092]: 2025-11-25 16:58:36.592 254096 DEBUG nova.network.neutron [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": null, "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap75edff1b-5c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:58:36 np0005535469 nova_compute[254092]: 2025-11-25 16:58:36.610 254096 DEBUG oslo_concurrency.lockutils [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.740 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.740 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.741 254096 INFO nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Unshelving
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.809 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.809 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.814 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.825 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.837 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.837 254096 INFO nova.compute.claims [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:58:37 np0005535469 nova_compute[254092]: 2025-11-25 16:58:37.917 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:58:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 120 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.6 MiB/s wr, 163 op/s
Nov 25 11:58:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:58:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631280970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:58:38 np0005535469 nova_compute[254092]: 2025-11-25 16:58:38.393 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:58:38 np0005535469 nova_compute[254092]: 2025-11-25 16:58:38.398 254096 DEBUG nova.compute.provider_tree [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:58:38 np0005535469 nova_compute[254092]: 2025-11-25 16:58:38.415 254096 DEBUG nova.scheduler.client.report [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:58:38 np0005535469 nova_compute[254092]: 2025-11-25 16:58:38.432 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:38 np0005535469 nova_compute[254092]: 2025-11-25 16:58:38.584 254096 INFO nova.network.neutron [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating port 75edff1b-5ceb-4f80-befe-e1a5ec106382 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.075 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.076 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.076 254096 DEBUG nova.network.neutron [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.184 254096 DEBUG nova.compute.manager [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.184 254096 DEBUG nova.compute.manager [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.185 254096 DEBUG oslo_concurrency.lockutils [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.418 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:39 np0005535469 nova_compute[254092]: 2025-11-25 16:58:39.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:58:40
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images']
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 575 KiB/s rd, 2.5 MiB/s wr, 84 op/s
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.208 254096 DEBUG nova.network.neutron [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.223 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.225 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.225 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating image(s)
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.247 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.251 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.252 254096 DEBUG oslo_concurrency.lockutils [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.253 254096 DEBUG nova.network.neutron [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.288 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.312 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.316 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "3cd7afdb8044e756124699d2a63eed57978d47cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.317 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "3cd7afdb8044e756124699d2a63eed57978d47cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.628 254096 DEBUG nova.virt.libvirt.imagebackend [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/e39c28b6-49f5-4073-a746-38050d6a700b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/e39c28b6-49f5-4073-a746-38050d6a700b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.703 254096 DEBUG nova.virt.libvirt.imagebackend [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/e39c28b6-49f5-4073-a746-38050d6a700b/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.705 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] cloning images/e39c28b6-49f5-4073-a746-38050d6a700b@snap to None/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.763 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089905.7616975, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.764 254096 INFO nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Stopped (Lifecycle Event)
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.782 254096 DEBUG nova.compute.manager [None req-fa11f7c4-d102-49d1-baac-aa4bc034cad4 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.829 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "3cd7afdb8044e756124699d2a63eed57978d47cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:40 np0005535469 nova_compute[254092]: 2025-11-25 16:58:40.967 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.025 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] flattening vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.456 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Image rbd:vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.456 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Ensure instance console log exists: /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.459 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start _get_guest_xml network_info=[{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:58:23Z,direct_url=<?>,disk_format='raw',id=e39c28b6-49f5-4073-a746-38050d6a700b,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1491050401-shelved',owner='0fef68bf8cf647f89586309d548d4bd7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:58:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.463 254096 WARNING nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.467 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.467 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.470 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.471 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.471 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.471 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:58:23Z,direct_url=<?>,disk_format='raw',id=e39c28b6-49f5-4073-a746-38050d6a700b,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1491050401-shelved',owner='0fef68bf8cf647f89586309d548d4bd7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:58:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.487 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:58:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.536 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.537 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.574 254096 DEBUG nova.network.neutron [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.574 254096 DEBUG nova.network.neutron [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.588 254096 DEBUG oslo_concurrency.lockutils [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:58:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:58:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207005865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.960 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.980 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:58:41 np0005535469 nova_compute[254092]: 2025-11-25 16:58:41.985 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:58:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Nov 25 11:58:42 np0005535469 podman[374771]: 2025-11-25 16:58:42.300929374 +0000 UTC m=+0.160994506 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:58:42 np0005535469 podman[374771]: 2025-11-25 16:58:42.388789477 +0000 UTC m=+0.248854619 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 11:58:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:58:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018880638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.455 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.459 254096 DEBUG nova.virt.libvirt.vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='e39c28b6-49f5-4073-a746-38050d6a700b',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member',shelved_at='2025-11-25T16:58:31.199734',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e39c28b6-49f5-4073-a746-38050d6a700b'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:58:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.460 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.461 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.463 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.475 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <uuid>5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</uuid>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <name>instance-0000006e</name>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestShelveInstance-server-1491050401</nova:name>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:58:41</nova:creationTime>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:user uuid="2a830f6b7532459380b24ae0297b12bb">tempest-TestShelveInstance-535396087-project-member</nova:user>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:project uuid="0fef68bf8cf647f89586309d548d4bd7">tempest-TestShelveInstance-535396087</nova:project>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="e39c28b6-49f5-4073-a746-38050d6a700b"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <nova:port uuid="75edff1b-5ceb-4f80-befe-e1a5ec106382">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <entry name="serial">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <entry name="uuid">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:9e:02:f1"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <target dev="tap75edff1b-5c"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log" append="off"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:58:42 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:58:42 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:58:42 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:58:42 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Preparing to wait for external event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.477 254096 DEBUG nova.virt.libvirt.vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='e39c28b6-49f5-4073-a746-38050d6a700b',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='
virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member',shelved_at='2025-11-25T16:58:31.199734',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e39c28b6-49f5-4073-a746-38050d6a700b'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:58:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.477 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.478 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.478 254096 DEBUG os_vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.479 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.479 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75edff1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.482 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75edff1b-5c, col_values=(('external_ids', {'iface-id': '75edff1b-5ceb-4f80-befe-e1a5ec106382', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:02:f1', 'vm-uuid': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:42 np0005535469 NetworkManager[48891]: <info>  [1764089922.4841] manager: (tap75edff1b-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.490 254096 INFO os_vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.535 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.535 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.536 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No VIF found with MAC fa:16:3e:9e:02:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.536 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Using config drive#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.561 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.580 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:58:42 np0005535469 nova_compute[254092]: 2025-11-25 16:58:42.619 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'keypairs' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.139 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating config drive at /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.145 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkkccyspj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.318 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkkccyspj" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.345 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.350 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.513 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.514 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting local config drive /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config because it was imported into RBD.#033[00m
Nov 25 11:58:43 np0005535469 kernel: tap75edff1b-5c: entered promiscuous mode
Nov 25 11:58:43 np0005535469 NetworkManager[48891]: <info>  [1764089923.5754] manager: (tap75edff1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/465)
Nov 25 11:58:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:43Z|01138|binding|INFO|Claiming lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 for this chassis.
Nov 25 11:58:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:43Z|01139|binding|INFO|75edff1b-5ceb-4f80-befe-e1a5ec106382: Claiming fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.586 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 bound to our chassis#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.588 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4#033[00m
Nov 25 11:58:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:43Z|01140|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 ovn-installed in OVS
Nov 25 11:58:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:43Z|01141|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 up in Southbound
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e354db62-1b5f-4fba-8751-7d24045c410c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.604 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c62671a-f1 in ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.606 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c62671a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.606 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d68c082a-c3a9-4be8-a1ef-1bd4be78ce27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.607 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0983896b-5eb8-4aa5-8da8-8adb419378b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 systemd-udevd[375128]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:58:43 np0005535469 systemd-machined[216343]: New machine qemu-142-instance-0000006e.
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.620 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa1609c-ac4a-486d-ae49-51d5de1e7488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 NetworkManager[48891]: <info>  [1764089923.6327] device (tap75edff1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:58:43 np0005535469 systemd[1]: Started Virtual Machine qemu-142-instance-0000006e.
Nov 25 11:58:43 np0005535469 NetworkManager[48891]: <info>  [1764089923.6354] device (tap75edff1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.648 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae661d72-1334-40d9-ba0e-87a5a4f16883]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.681 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c5d72b-878e-461b-9d23-18f9a67015f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.686 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5813ad05-8f3e-4496-8300-0b1525e03d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 systemd-udevd[375134]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:58:43 np0005535469 NetworkManager[48891]: <info>  [1764089923.6877] manager: (tap6c62671a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/466)
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.718 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[070a519b-029c-4c0d-a150-32313a207a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.721 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9dbc77-12c6-42f4-845f-6a567ccea8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 NetworkManager[48891]: <info>  [1764089923.7429] device (tap6c62671a-f0): carrier: link connected
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.749 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abf020db-6bdc-4898-9ebf-81d60196c9cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15f12d89-d6cd-404d-bc85-a98c2e8b9793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638133, 'reachable_time': 26386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375165, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.792 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37fd37bb-4d12-4cf3-b4bf-ff4de20ec99a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c826'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638133, 'tstamp': 638133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375166, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.812 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba455f16-ca8a-4400-a923-f5d3a6cf3970]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638133, 'reachable_time': 26386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375167, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.850 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e01d97cc-ad98-4ae4-8d2a-ca643e2f4295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.876 254096 DEBUG nova.compute.manager [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.876 254096 DEBUG oslo_concurrency.lockutils [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.877 254096 DEBUG oslo_concurrency.lockutils [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.877 254096 DEBUG oslo_concurrency.lockutils [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.877 254096 DEBUG nova.compute.manager [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Processing event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0705fbee-37de-492e-815d-6ae849c19831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.919 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.920 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.920 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c62671a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 NetworkManager[48891]: <info>  [1764089923.9234] manager: (tap6c62671a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Nov 25 11:58:43 np0005535469 kernel: tap6c62671a-f0: entered promiscuous mode
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.927 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c62671a-f0, col_values=(('external_ids', {'iface-id': 'f4863bb8-2150-4d2f-9927-dcfc70177f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:43Z|01142|binding|INFO|Releasing lport f4863bb8-2150-4d2f-9927-dcfc70177f3b from this chassis (sb_readonly=0)
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.931 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07de8174-6660-48ed-842e-ef6e490ff7ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.933 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:58:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.935 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'env', 'PROCESS_TAG=haproxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:58:43 np0005535469 nova_compute[254092]: 2025-11-25 16:58:43.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:58:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:44 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev eda2d163-ce61-45a6-99b6-7746e5f14770 does not exist
Nov 25 11:58:44 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a9743d25-e6f2-4ed8-b35d-ff697a66337a does not exist
Nov 25 11:58:44 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 954c77ab-833d-4fd7-a532-1434444020b7 does not exist
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:58:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.197 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089924.1967454, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.197 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Started (Lifecycle Event)#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.200 254096 DEBUG nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.203 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.205 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance spawned successfully.#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.222 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.226 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.252 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.252 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089924.1969492, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.253 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.276 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.280 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089924.2031713, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.280 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.297 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.303 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:58:44 np0005535469 podman[375333]: 2025-11-25 16:58:44.312411278 +0000 UTC m=+0.057265180 container create 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.324 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:58:44 np0005535469 systemd[1]: Started libpod-conmon-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697.scope.
Nov 25 11:58:44 np0005535469 podman[375333]: 2025-11-25 16:58:44.279417499 +0000 UTC m=+0.024271421 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:58:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d52d0fc086d39ee392d5c99998b632b0abad46d43aa1aa2121c7e09f49de6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:44 np0005535469 podman[375333]: 2025-11-25 16:58:44.402272405 +0000 UTC m=+0.147126327 container init 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 11:58:44 np0005535469 podman[375333]: 2025-11-25 16:58:44.408181476 +0000 UTC m=+0.153035378 container start 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:58:44 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : New worker (375379) forked
Nov 25 11:58:44 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : Loading success.
Nov 25 11:58:44 np0005535469 nova_compute[254092]: 2025-11-25 16:58:44.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:44 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.636953577 +0000 UTC m=+0.035274282 container create 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:58:44 np0005535469 systemd[1]: Started libpod-conmon-3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e.scope.
Nov 25 11:58:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.622517954 +0000 UTC m=+0.020838679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.731696588 +0000 UTC m=+0.130017313 container init 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.741821153 +0000 UTC m=+0.140141858 container start 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.7453637 +0000 UTC m=+0.143684405 container attach 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:58:44 np0005535469 objective_hopper[375437]: 167 167
Nov 25 11:58:44 np0005535469 systemd[1]: libpod-3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e.scope: Deactivated successfully.
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.752752871 +0000 UTC m=+0.151073576 container died 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:58:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2e26bdeda77d984bece039e957e5016be6606975fba5abf64f676f3e066884b1-merged.mount: Deactivated successfully.
Nov 25 11:58:44 np0005535469 podman[375421]: 2025-11-25 16:58:44.799380731 +0000 UTC m=+0.197701436 container remove 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 11:58:44 np0005535469 systemd[1]: libpod-conmon-3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e.scope: Deactivated successfully.
Nov 25 11:58:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 25 11:58:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 25 11:58:45 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 25 11:58:45 np0005535469 podman[375460]: 2025-11-25 16:58:45.027016981 +0000 UTC m=+0.073798211 container create 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 11:58:45 np0005535469 podman[375460]: 2025-11-25 16:58:44.999147102 +0000 UTC m=+0.045928312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:58:45 np0005535469 systemd[1]: Started libpod-conmon-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope.
Nov 25 11:58:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:45 np0005535469 podman[375460]: 2025-11-25 16:58:45.156536749 +0000 UTC m=+0.203318019 container init 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:58:45 np0005535469 podman[375460]: 2025-11-25 16:58:45.165868823 +0000 UTC m=+0.212650053 container start 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:58:45 np0005535469 podman[375460]: 2025-11-25 16:58:45.179588367 +0000 UTC m=+0.226369607 container attach 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:58:45 np0005535469 nova_compute[254092]: 2025-11-25 16:58:45.459 254096 DEBUG nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:58:45 np0005535469 nova_compute[254092]: 2025-11-25 16:58:45.787 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:45 np0005535469 nova_compute[254092]: 2025-11-25 16:58:45.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:58:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:45.800 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:58:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:45.802 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:58:46 np0005535469 nova_compute[254092]: 2025-11-25 16:58:46.039 254096 DEBUG nova.compute.manager [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:58:46 np0005535469 nova_compute[254092]: 2025-11-25 16:58:46.040 254096 DEBUG oslo_concurrency.lockutils [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:58:46 np0005535469 nova_compute[254092]: 2025-11-25 16:58:46.040 254096 DEBUG oslo_concurrency.lockutils [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:58:46 np0005535469 nova_compute[254092]: 2025-11-25 16:58:46.040 254096 DEBUG oslo_concurrency.lockutils [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:58:46 np0005535469 nova_compute[254092]: 2025-11-25 16:58:46.041 254096 DEBUG nova.compute.manager [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:58:46 np0005535469 nova_compute[254092]: 2025-11-25 16:58:46.041 254096 WARNING nova.compute.manager [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state None.
Nov 25 11:58:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 163 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 144 op/s
Nov 25 11:58:46 np0005535469 compassionate_pascal[375477]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:58:46 np0005535469 compassionate_pascal[375477]: --> relative data size: 1.0
Nov 25 11:58:46 np0005535469 compassionate_pascal[375477]: --> All data devices are unavailable
Nov 25 11:58:46 np0005535469 systemd[1]: libpod-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope: Deactivated successfully.
Nov 25 11:58:46 np0005535469 conmon[375477]: conmon 84f098177ff40a7e372c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope/container/memory.events
Nov 25 11:58:46 np0005535469 podman[375460]: 2025-11-25 16:58:46.237595572 +0000 UTC m=+1.284376802 container died 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:58:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d-merged.mount: Deactivated successfully.
Nov 25 11:58:46 np0005535469 podman[375460]: 2025-11-25 16:58:46.311767852 +0000 UTC m=+1.358549052 container remove 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:58:46 np0005535469 systemd[1]: libpod-conmon-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope: Deactivated successfully.
Nov 25 11:58:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.042757172 +0000 UTC m=+0.054583588 container create 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:58:47 np0005535469 systemd[1]: Started libpod-conmon-71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4.scope.
Nov 25 11:58:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.023298562 +0000 UTC m=+0.035125008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.13304237 +0000 UTC m=+0.144868846 container init 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.140667128 +0000 UTC m=+0.152493524 container start 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.143788323 +0000 UTC m=+0.155614739 container attach 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:58:47 np0005535469 blissful_archimedes[375674]: 167 167
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.145589373 +0000 UTC m=+0.157415769 container died 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 11:58:47 np0005535469 systemd[1]: libpod-71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4.scope: Deactivated successfully.
Nov 25 11:58:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-20bf69ffaaa6489f32bf5df14855fff396475f3d272a9e156a0eccbc0cddf3a0-merged.mount: Deactivated successfully.
Nov 25 11:58:47 np0005535469 podman[375658]: 2025-11-25 16:58:47.192031807 +0000 UTC m=+0.203858203 container remove 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:58:47 np0005535469 systemd[1]: libpod-conmon-71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4.scope: Deactivated successfully.
Nov 25 11:58:47 np0005535469 podman[375699]: 2025-11-25 16:58:47.353796473 +0000 UTC m=+0.045326795 container create fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 11:58:47 np0005535469 systemd[1]: Started libpod-conmon-fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031.scope.
Nov 25 11:58:47 np0005535469 podman[375699]: 2025-11-25 16:58:47.330575941 +0000 UTC m=+0.022106253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:58:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:47 np0005535469 podman[375699]: 2025-11-25 16:58:47.46129069 +0000 UTC m=+0.152821002 container init fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 11:58:47 np0005535469 podman[375699]: 2025-11-25 16:58:47.468930499 +0000 UTC m=+0.160460791 container start fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:58:47 np0005535469 podman[375699]: 2025-11-25 16:58:47.476578297 +0000 UTC m=+0.168108589 container attach fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:58:47 np0005535469 nova_compute[254092]: 2025-11-25 16:58:47.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 163 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 138 op/s
Nov 25 11:58:48 np0005535469 elated_bohr[375716]: {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:    "0": [
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:        {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "devices": [
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "/dev/loop3"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            ],
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_name": "ceph_lv0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_size": "21470642176",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "name": "ceph_lv0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "tags": {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cluster_name": "ceph",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.crush_device_class": "",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.encrypted": "0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osd_id": "0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.type": "block",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.vdo": "0"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            },
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "type": "block",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "vg_name": "ceph_vg0"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:        }
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:    ],
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:    "1": [
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:        {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "devices": [
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "/dev/loop4"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            ],
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_name": "ceph_lv1",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_size": "21470642176",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "name": "ceph_lv1",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "tags": {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cluster_name": "ceph",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.crush_device_class": "",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.encrypted": "0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osd_id": "1",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.type": "block",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.vdo": "0"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            },
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "type": "block",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "vg_name": "ceph_vg1"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:        }
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:    ],
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:    "2": [
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:        {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "devices": [
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "/dev/loop5"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            ],
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_name": "ceph_lv2",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_size": "21470642176",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "name": "ceph_lv2",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "tags": {
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.cluster_name": "ceph",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.crush_device_class": "",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.encrypted": "0",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osd_id": "2",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.type": "block",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:                "ceph.vdo": "0"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            },
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "type": "block",
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:            "vg_name": "ceph_vg2"
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:        }
Nov 25 11:58:48 np0005535469 elated_bohr[375716]:    ]
Nov 25 11:58:48 np0005535469 elated_bohr[375716]: }
Nov 25 11:58:48 np0005535469 systemd[1]: libpod-fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031.scope: Deactivated successfully.
Nov 25 11:58:48 np0005535469 podman[375699]: 2025-11-25 16:58:48.291490652 +0000 UTC m=+0.983020964 container died fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:58:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40-merged.mount: Deactivated successfully.
Nov 25 11:58:48 np0005535469 podman[375699]: 2025-11-25 16:58:48.346588432 +0000 UTC m=+1.038118744 container remove fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 11:58:48 np0005535469 systemd[1]: libpod-conmon-fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031.scope: Deactivated successfully.
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.039624348 +0000 UTC m=+0.040749831 container create 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 11:58:49 np0005535469 systemd[1]: Started libpod-conmon-7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541.scope.
Nov 25 11:58:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.025082071 +0000 UTC m=+0.026207574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.124789547 +0000 UTC m=+0.125915030 container init 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.131321495 +0000 UTC m=+0.132446978 container start 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 11:58:49 np0005535469 confident_mahavira[375891]: 167 167
Nov 25 11:58:49 np0005535469 systemd[1]: libpod-7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541.scope: Deactivated successfully.
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.138361437 +0000 UTC m=+0.139486940 container attach 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.138768378 +0000 UTC m=+0.139893861 container died 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 11:58:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-928dc2a17756ba43b95912df350203e57da654692434fe07d8165638de1fce82-merged.mount: Deactivated successfully.
Nov 25 11:58:49 np0005535469 podman[375874]: 2025-11-25 16:58:49.174661186 +0000 UTC m=+0.175786669 container remove 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:58:49 np0005535469 systemd[1]: libpod-conmon-7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541.scope: Deactivated successfully.
Nov 25 11:58:49 np0005535469 podman[375914]: 2025-11-25 16:58:49.394698158 +0000 UTC m=+0.058119143 container create a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 11:58:49 np0005535469 podman[375914]: 2025-11-25 16:58:49.366782698 +0000 UTC m=+0.030203673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:58:49 np0005535469 systemd[1]: Started libpod-conmon-a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868.scope.
Nov 25 11:58:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:58:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:58:49 np0005535469 podman[375914]: 2025-11-25 16:58:49.519768495 +0000 UTC m=+0.183189450 container init a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:58:49 np0005535469 podman[375914]: 2025-11-25 16:58:49.533972472 +0000 UTC m=+0.197393417 container start a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 11:58:49 np0005535469 nova_compute[254092]: 2025-11-25 16:58:49.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:49 np0005535469 podman[375914]: 2025-11-25 16:58:49.540370856 +0000 UTC m=+0.203791801 container attach a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:58:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]: {
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "osd_id": 1,
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "type": "bluestore"
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:    },
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "osd_id": 2,
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "type": "bluestore"
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:    },
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "osd_id": 0,
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:        "type": "bluestore"
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]:    }
Nov 25 11:58:50 np0005535469 hardcore_haibt[375931]: }
Nov 25 11:58:50 np0005535469 systemd[1]: libpod-a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868.scope: Deactivated successfully.
Nov 25 11:58:50 np0005535469 podman[375914]: 2025-11-25 16:58:50.445413375 +0000 UTC m=+1.108834320 container died a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:58:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466-merged.mount: Deactivated successfully.
Nov 25 11:58:50 np0005535469 podman[375914]: 2025-11-25 16:58:50.504172156 +0000 UTC m=+1.167593141 container remove a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:58:50 np0005535469 systemd[1]: libpod-conmon-a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868.scope: Deactivated successfully.
Nov 25 11:58:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 11:58:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 11:58:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d5b39198-acd6-4758-be83-0d80370ce72d does not exist
Nov 25 11:58:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6a213abf-c272-435b-b2c9-519794bb25f1 does not exist
Nov 25 11:58:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:58:50.805 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007613079540274793 of space, bias 1.0, pg target 0.22839238620824379 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:58:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:58:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 25 11:58:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 25 11:58:51 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 25 11:58:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:58:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 258 op/s
Nov 25 11:58:52 np0005535469 nova_compute[254092]: 2025-11-25 16:58:52.520 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Nov 25 11:58:54 np0005535469 nova_compute[254092]: 2025-11-25 16:58:54.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:58:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3961666780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:58:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:58:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3961666780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:58:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 409 B/s wr, 74 op/s
Nov 25 11:58:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:58:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:58:57Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 11:58:57 np0005535469 nova_compute[254092]: 2025-11-25 16:58:57.522 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:58:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 409 B/s wr, 74 op/s
Nov 25 11:58:59 np0005535469 nova_compute[254092]: 2025-11-25 16:58:59.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 16 KiB/s wr, 47 op/s
Nov 25 11:59:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 15 KiB/s wr, 50 op/s
Nov 25 11:59:02 np0005535469 nova_compute[254092]: 2025-11-25 16:59:02.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 13 KiB/s wr, 44 op/s
Nov 25 11:59:04 np0005535469 nova_compute[254092]: 2025-11-25 16:59:04.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:05 np0005535469 podman[376029]: 2025-11-25 16:59:05.650801297 +0000 UTC m=+0.062085661 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:59:05 np0005535469 podman[376028]: 2025-11-25 16:59:05.656742929 +0000 UTC m=+0.070687326 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:59:05 np0005535469 podman[376030]: 2025-11-25 16:59:05.688554186 +0000 UTC m=+0.090741523 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 11:59:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 11:59:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:07 np0005535469 nova_compute[254092]: 2025-11-25 16:59:07.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 23 KiB/s wr, 40 op/s
Nov 25 11:59:09 np0005535469 nova_compute[254092]: 2025-11-25 16:59:09.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:59:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 26 KiB/s wr, 40 op/s
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.684522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951684540, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 2079, "num_deletes": 253, "total_data_size": 3404100, "memory_usage": 3453560, "flush_reason": "Manual Compaction"}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951699506, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 3337098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45270, "largest_seqno": 47348, "table_properties": {"data_size": 3327625, "index_size": 6031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19211, "raw_average_key_size": 20, "raw_value_size": 3308632, "raw_average_value_size": 3493, "num_data_blocks": 267, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089732, "oldest_key_time": 1764089732, "file_creation_time": 1764089951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15036 microseconds, and 6393 cpu microseconds.
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.699551) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 3337098 bytes OK
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.699571) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.701722) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.701744) EVENT_LOG_v1 {"time_micros": 1764089951701737, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.701765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 3395390, prev total WAL file size 3395390, number of live WAL files 2.
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.702828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(3258KB)], [101(8685KB)]
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951702879, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 12230833, "oldest_snapshot_seqno": -1}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7110 keys, 10569211 bytes, temperature: kUnknown
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951749968, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 10569211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10520294, "index_size": 30053, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 183608, "raw_average_key_size": 25, "raw_value_size": 10391420, "raw_average_value_size": 1461, "num_data_blocks": 1184, "num_entries": 7110, "num_filter_entries": 7110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.750238) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 10569211 bytes
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.751388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.3 rd, 224.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 7632, records dropped: 522 output_compression: NoCompression
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.751407) EVENT_LOG_v1 {"time_micros": 1764089951751398, "job": 60, "event": "compaction_finished", "compaction_time_micros": 47172, "compaction_time_cpu_micros": 25621, "output_level": 6, "num_output_files": 1, "total_output_size": 10569211, "num_input_records": 7632, "num_output_records": 7110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951752121, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951754190, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.702680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:59:11 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 11:59:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 13 KiB/s wr, 5 op/s
Nov 25 11:59:12 np0005535469 nova_compute[254092]: 2025-11-25 16:59:12.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:13.636 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 11:59:14 np0005535469 nova_compute[254092]: 2025-11-25 16:59:14.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 25 11:59:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:16 np0005535469 nova_compute[254092]: 2025-11-25 16:59:16.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:17 np0005535469 nova_compute[254092]: 2025-11-25 16:59:17.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.833 254096 DEBUG nova.compute.manager [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG nova.compute.manager [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG oslo_concurrency.lockutils [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG oslo_concurrency.lockutils [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG nova.network.neutron [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.904 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.906 254096 INFO nova.compute.manager [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Terminating instance#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.907 254096 DEBUG nova.compute.manager [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 11:59:19 np0005535469 kernel: tap75edff1b-5c (unregistering): left promiscuous mode
Nov 25 11:59:19 np0005535469 NetworkManager[48891]: <info>  [1764089959.9732] device (tap75edff1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:59:19 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:19Z|01143|binding|INFO|Releasing lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 from this chassis (sb_readonly=0)
Nov 25 11:59:19 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:19Z|01144|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 down in Southbound
Nov 25 11:59:19 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:19Z|01145|binding|INFO|Removing iface tap75edff1b-5c ovn-installed in OVS
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:19 np0005535469 nova_compute[254092]: 2025-11-25 16:59:19.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.995 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '9', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:59:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.996 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 unbound from our chassis#033[00m
Nov 25 11:59:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.997 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.998 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01a799bb-80f1-4f4d-b896-a1d4da9bec30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.999 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace which is not needed anymore#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:20 np0005535469 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 11:59:20 np0005535469 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d0000006e.scope: Consumed 14.273s CPU time.
Nov 25 11:59:20 np0005535469 systemd-machined[216343]: Machine qemu-142-instance-0000006e terminated.
Nov 25 11:59:20 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : haproxy version is 2.8.14-c23fe91
Nov 25 11:59:20 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : path to executable is /usr/sbin/haproxy
Nov 25 11:59:20 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [WARNING]  (375372) : Exiting Master process...
Nov 25 11:59:20 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [ALERT]    (375372) : Current worker (375379) exited with code 143 (Terminated)
Nov 25 11:59:20 np0005535469 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [WARNING]  (375372) : All workers exited. Exiting... (0)
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.143 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.143 254096 DEBUG nova.objects.instance [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'resources' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:59:20 np0005535469 systemd[1]: libpod-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697.scope: Deactivated successfully.
Nov 25 11:59:20 np0005535469 podman[376116]: 2025-11-25 16:59:20.152964675 +0000 UTC m=+0.053248771 container died 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.161 254096 DEBUG nova.virt.libvirt.vif [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:58:45Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.162 254096 DEBUG nova.network.os_vif_util [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.163 254096 DEBUG nova.network.os_vif_util [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.164 254096 DEBUG os_vif [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.166 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.166 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75edff1b-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.172 254096 INFO os_vif [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')#033[00m
Nov 25 11:59:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697-userdata-shm.mount: Deactivated successfully.
Nov 25 11:59:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-594d52d0fc086d39ee392d5c99998b632b0abad46d43aa1aa2121c7e09f49de6-merged.mount: Deactivated successfully.
Nov 25 11:59:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 3.7 KiB/s wr, 2 op/s
Nov 25 11:59:20 np0005535469 podman[376116]: 2025-11-25 16:59:20.21848995 +0000 UTC m=+0.118774036 container cleanup 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 11:59:20 np0005535469 systemd[1]: libpod-conmon-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697.scope: Deactivated successfully.
Nov 25 11:59:20 np0005535469 podman[376175]: 2025-11-25 16:59:20.293098932 +0000 UTC m=+0.054743331 container remove 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b29742eb-7f18-49e2-9a3e-daba519ef67c]: (4, ('Tue Nov 25 04:59:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697)\n37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697\nTue Nov 25 04:59:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697)\n37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b86d3442-08f5-44ba-a693-cd035987133c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.301 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:20 np0005535469 kernel: tap6c62671a-f0: left promiscuous mode
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.370 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0aaadd-3e7f-4077-9624-c7766d34b3f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.389 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d355193-5826-4ea4-bc48-1021cc62b8df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.390 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e7fd37-75ce-4ee7-b679-39a3147647c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.409 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9021eac-ac2e-4875-a30f-7b2dc78dc691]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638126, 'reachable_time': 22674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376190, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 systemd[1]: run-netns-ovnmeta\x2d6c62671a\x2df9ae\x2d4033\x2d9c32\x2db04ccb3e0de4.mount: Deactivated successfully.
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.412 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 11:59:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.412 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a7692737-149a-4a9f-be98-98ffc12cc324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.797 254096 INFO nova.virt.libvirt.driver [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting instance files /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.798 254096 INFO nova.virt.libvirt.driver [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deletion of /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del complete#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.846 254096 INFO nova.compute.manager [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.847 254096 DEBUG oslo.service.loopingcall [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.848 254096 DEBUG nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 11:59:20 np0005535469 nova_compute[254092]: 2025-11-25 16:59:20.848 254096 DEBUG nova.network.neutron [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 11:59:21 np0005535469 nova_compute[254092]: 2025-11-25 16:59:21.310 254096 DEBUG nova.network.neutron [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:59:21 np0005535469 nova_compute[254092]: 2025-11-25 16:59:21.310 254096 DEBUG nova.network.neutron [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:59:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:21 np0005535469 nova_compute[254092]: 2025-11-25 16:59:21.641 254096 DEBUG oslo_concurrency.lockutils [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.069 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.072 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.072 254096 WARNING nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.159 254096 DEBUG nova.network.neutron [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:59:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 95 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.3 KiB/s wr, 7 op/s
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.226 254096 INFO nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 1.38 seconds to deallocate network for instance.#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.284 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.285 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.440 254096 DEBUG oslo_concurrency.processutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:59:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112997725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.873 254096 DEBUG oslo_concurrency.processutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.880 254096 DEBUG nova.compute.provider_tree [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.897 254096 DEBUG nova.scheduler.client.report [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:59:22 np0005535469 nova_compute[254092]: 2025-11-25 16:59:22.920 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:23 np0005535469 nova_compute[254092]: 2025-11-25 16:59:23.009 254096 INFO nova.scheduler.client.report [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Deleted allocations for instance 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda#033[00m
Nov 25 11:59:23 np0005535469 nova_compute[254092]: 2025-11-25 16:59:23.072 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:23 np0005535469 nova_compute[254092]: 2025-11-25 16:59:23.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:23 np0005535469 nova_compute[254092]: 2025-11-25 16:59:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 95 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1023 B/s wr, 7 op/s
Nov 25 11:59:24 np0005535469 nova_compute[254092]: 2025-11-25 16:59:24.212 254096 DEBUG nova.compute.manager [req-85a2dbda-bc50-428c-8003-eed2340a277b req-b99d063b-78df-4dcb-b61a-c09bf0aff64a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-deleted-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:59:24 np0005535469 nova_compute[254092]: 2025-11-25 16:59:24.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:25 np0005535469 nova_compute[254092]: 2025-11-25 16:59:25.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:59:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805136051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:59:26 np0005535469 nova_compute[254092]: 2025-11-25 16:59:26.981 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.202 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.204 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3856MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.204 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.205 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.254 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.255 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.276 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:27 np0005535469 nova_compute[254092]: 2025-11-25 16:59:27.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 11:59:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:59:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118453245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:59:28 np0005535469 nova_compute[254092]: 2025-11-25 16:59:28.411 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:28 np0005535469 nova_compute[254092]: 2025-11-25 16:59:28.418 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 11:59:28 np0005535469 nova_compute[254092]: 2025-11-25 16:59:28.434 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 11:59:28 np0005535469 nova_compute[254092]: 2025-11-25 16:59:28.454 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 11:59:28 np0005535469 nova_compute[254092]: 2025-11-25 16:59:28.454 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:29 np0005535469 nova_compute[254092]: 2025-11-25 16:59:29.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:29 np0005535469 nova_compute[254092]: 2025-11-25 16:59:29.980 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:29 np0005535469 nova_compute[254092]: 2025-11-25 16:59:29.980 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.046 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.151 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.152 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.160 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.161 254096 INFO nova.compute.claims [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.260 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.452 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.453 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.454 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:59:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 11:59:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389327970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.727 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.734 254096 DEBUG nova.compute.provider_tree [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.758 254096 DEBUG nova.scheduler.client.report [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.785 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.786 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.843 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.844 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.865 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.880 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.982 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.984 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:59:30 np0005535469 nova_compute[254092]: 2025-11-25 16:59:30.984 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Creating image(s)
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.012 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.042 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.070 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.074 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.140 254096 DEBUG nova.policy [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.181 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.182 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.183 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.183 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.211 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.216 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d643dc57-9536-4a67-9a17-c20512710ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:59:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.549 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d643dc57-9536-4a67-9a17-c20512710ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.641 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.759 254096 DEBUG nova.objects.instance [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.775 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.775 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Ensure instance console log exists: /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.775 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.776 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.776 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:59:31 np0005535469 nova_compute[254092]: 2025-11-25 16:59:31.967 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Successfully created port: 12c882d8-4cd5-4233-8d3b-650401885991 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:59:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.093 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Successfully updated port: 12c882d8-4cd5-4233-8d3b-650401885991 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.108 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.109 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.109 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.230 254096 DEBUG nova.compute.manager [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-changed-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.231 254096 DEBUG nova.compute.manager [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing instance network info cache due to event network-changed-12c882d8-4cd5-4233-8d3b-650401885991. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.231 254096 DEBUG oslo_concurrency.lockutils [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:59:33 np0005535469 nova_compute[254092]: 2025-11-25 16:59:33.370 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:59:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.515 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.537 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.538 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance network_info: |[{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.540 254096 DEBUG oslo_concurrency.lockutils [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.540 254096 DEBUG nova.network.neutron [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.547 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start _get_guest_xml network_info=[{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.557 254096 WARNING nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.564 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.565 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.572 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.573 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.573 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.574 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.574 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.574 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.579 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:34 np0005535469 nova_compute[254092]: 2025-11-25 16:59:34.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:59:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593843188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.020 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.052 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.059 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.142 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089960.140186, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.143 254096 INFO nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Stopped (Lifecycle Event)#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.167 254096 DEBUG nova.compute.manager [None req-fc057178-a07c-4b11-bed0-29d4dbedfb20 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 11:59:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936021589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.509 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.510 254096 DEBUG nova.virt.libvirt.vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:59:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-888996977',display_name='tempest-TestNetworkBasicOps-server-888996977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-888996977',id=111,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtS45HGfMK2vg5lA1DMxbi57g++ufQ+h4UViHDHhpxxz0TeEAmFiy6LE8nwpuctZL207E2zBi1qnO46vlL2kuhASWctkQ5Cos3uH5AqhXg1h51/mOABxlzeqFcNuWZ38Q==',key_name='tempest-TestNetworkBasicOps-202100109',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5inbkv7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:59:30Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=d643dc57-9536-4a67-9a17-c20512710ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.510 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.511 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.512 254096 DEBUG nova.objects.instance [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.529 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <uuid>d643dc57-9536-4a67-9a17-c20512710ea5</uuid>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <name>instance-0000006f</name>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-888996977</nova:name>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 16:59:34</nova:creationTime>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <nova:port uuid="12c882d8-4cd5-4233-8d3b-650401885991">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <system>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <entry name="serial">d643dc57-9536-4a67-9a17-c20512710ea5</entry>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <entry name="uuid">d643dc57-9536-4a67-9a17-c20512710ea5</entry>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </system>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <os>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </os>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <features>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </features>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </clock>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  <devices>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d643dc57-9536-4a67-9a17-c20512710ea5_disk">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/d643dc57-9536-4a67-9a17-c20512710ea5_disk.config">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:80:dd:57"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <target dev="tap12c882d8-4c"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </interface>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/console.log" append="off"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </serial>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <video>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </video>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </rng>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 11:59:35 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 11:59:35 np0005535469 nova_compute[254092]:  </devices>
Nov 25 11:59:35 np0005535469 nova_compute[254092]: </domain>
Nov 25 11:59:35 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.530 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Preparing to wait for external event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.530 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.531 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.531 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.532 254096 DEBUG nova.virt.libvirt.vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:59:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-888996977',display_name='tempest-TestNetworkBasicOps-server-888996977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-888996977',id=111,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtS45HGfMK2vg5lA1DMxbi57g++ufQ+h4UViHDHhpxxz0TeEAmFiy6LE8nwpuctZL207E2zBi1qnO46vlL2kuhASWctkQ5Cos3uH5AqhXg1h51/mOABxlzeqFcNuWZ38Q==',key_name='tempest-TestNetworkBasicOps-202100109',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5inbkv7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:59:30Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=d643dc57-9536-4a67-9a17-c20512710ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.532 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.533 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.533 254096 DEBUG os_vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.534 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.535 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.541 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12c882d8-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.542 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12c882d8-4c, col_values=(('external_ids', {'iface-id': '12c882d8-4cd5-4233-8d3b-650401885991', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:dd:57', 'vm-uuid': 'd643dc57-9536-4a67-9a17-c20512710ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:35 np0005535469 NetworkManager[48891]: <info>  [1764089975.5455] manager: (tap12c882d8-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.555 254096 INFO os_vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c')#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.597 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.597 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.598 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:80:dd:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.598 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Using config drive#033[00m
Nov 25 11:59:35 np0005535469 nova_compute[254092]: 2025-11-25 16:59:35.627 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.074 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Creating config drive at /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.081 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtk3p6bj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.122 254096 DEBUG nova.network.neutron [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated VIF entry in instance network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.122 254096 DEBUG nova.network.neutron [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.149 254096 DEBUG oslo_concurrency.lockutils [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 11:59:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.229 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtk3p6bj" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.253 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.258 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config d643dc57-9536-4a67-9a17-c20512710ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.462 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config d643dc57-9536-4a67-9a17-c20512710ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.463 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deleting local config drive /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config because it was imported into RBD.#033[00m
Nov 25 11:59:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:36 np0005535469 kernel: tap12c882d8-4c: entered promiscuous mode
Nov 25 11:59:36 np0005535469 NetworkManager[48891]: <info>  [1764089976.5557] manager: (tap12c882d8-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Nov 25 11:59:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:36Z|01146|binding|INFO|Claiming lport 12c882d8-4cd5-4233-8d3b-650401885991 for this chassis.
Nov 25 11:59:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:36Z|01147|binding|INFO|12c882d8-4cd5-4233-8d3b-650401885991: Claiming fa:16:3e:80:dd:57 10.100.0.3
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.580 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:dd:57 10.100.0.3'], port_security=['fa:16:3e:80:dd:57 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd643dc57-9536-4a67-9a17-c20512710ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f446da-e1c5-4251-b5dd-3071154486f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '055c4680-7ea5-4bc6-a453-5482dfbe9b96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=168e2889-06cc-4097-8922-1d94c15fa45a, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12c882d8-4cd5-4233-8d3b-650401885991) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.582 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12c882d8-4cd5-4233-8d3b-650401885991 in datapath 22f446da-e1c5-4251-b5dd-3071154486f0 bound to our chassis#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.583 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22f446da-e1c5-4251-b5dd-3071154486f0#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bad55d-8b56-4369-96fe-ca92a5ab72bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.598 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22f446da-e1 in ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.602 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22f446da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[943814aa-7cd2-4423-9998-a1a166eed6f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[98a44baf-62c8-4a99-9f1b-ea63b34da527]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.621 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c252b-bf2a-4ac5-869d-e44aa56679fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 systemd-udevd[376607]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:59:36 np0005535469 systemd[1]: Started Virtual Machine qemu-143-instance-0000006f.
Nov 25 11:59:36 np0005535469 systemd-machined[216343]: New machine qemu-143-instance-0000006f.
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 NetworkManager[48891]: <info>  [1764089976.6548] device (tap12c882d8-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:59:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:36Z|01148|binding|INFO|Setting lport 12c882d8-4cd5-4233-8d3b-650401885991 ovn-installed in OVS
Nov 25 11:59:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:36Z|01149|binding|INFO|Setting lport 12c882d8-4cd5-4233-8d3b-650401885991 up in Southbound
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.655 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[96cc9887-4bc6-4519-90a3-6f3da471cfa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 NetworkManager[48891]: <info>  [1764089976.6594] device (tap12c882d8-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.691 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3b27b41f-c1aa-4e6a-80bd-1f5ba8db5756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 podman[376580]: 2025-11-25 16:59:36.696787221 +0000 UTC m=+0.091237856 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:59:36 np0005535469 podman[376579]: 2025-11-25 16:59:36.697094749 +0000 UTC m=+0.084785421 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.698 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e21c4ef6-e156-48d2-85d5-e7b1c8349fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 systemd-udevd[376615]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:59:36 np0005535469 NetworkManager[48891]: <info>  [1764089976.7060] manager: (tap22f446da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Nov 25 11:59:36 np0005535469 podman[376582]: 2025-11-25 16:59:36.72833418 +0000 UTC m=+0.120270067 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.736 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fddfebaf-188c-4442-b6f3-8a338ffd95d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.740 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[01ac0a75-c75c-462f-8968-48d10c713502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 NetworkManager[48891]: <info>  [1764089976.7612] device (tap22f446da-e0): carrier: link connected
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.766 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41588749-f7fa-442b-aeba-3c307855aee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.783 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edb2753c-3537-478e-ba03-ddbfbdf31c52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22f446da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:2d:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643435, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376672, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.799 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6bdf96-142f-4d0d-98cb-a4eafa161a1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:2d67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643435, 'tstamp': 643435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376673, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a27d6389-5c71-4765-a17e-4330cafe6512]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22f446da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:2d:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643435, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376674, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[33fe39fd-5077-42e7-9f33-e8ad250ed4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.899 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d666747-8475-4136-90de-605b262ee5ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f446da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22f446da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 NetworkManager[48891]: <info>  [1764089976.9043] manager: (tap22f446da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Nov 25 11:59:36 np0005535469 kernel: tap22f446da-e0: entered promiscuous mode
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.907 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22f446da-e0, col_values=(('external_ids', {'iface-id': 'ecc3dd32-bc67-4be1-9d9a-caf7485b6c03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:36Z|01150|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 nova_compute[254092]: 2025-11-25 16:59:36.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.929 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22f446da-e1c5-4251-b5dd-3071154486f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22f446da-e1c5-4251-b5dd-3071154486f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.931 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[272db025-e476-4aa6-ad2d-05903ee212fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.932 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-22f446da-e1c5-4251-b5dd-3071154486f0
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/22f446da-e1c5-4251-b5dd-3071154486f0.pid.haproxy
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 22f446da-e1c5-4251-b5dd-3071154486f0
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 11:59:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.933 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'env', 'PROCESS_TAG=haproxy-22f446da-e1c5-4251-b5dd-3071154486f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22f446da-e1c5-4251-b5dd-3071154486f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.157 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089977.1560907, d643dc57-9536-4a67-9a17-c20512710ea5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.157 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Started (Lifecycle Event)#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.182 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.190 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089977.1563592, d643dc57-9536-4a67-9a17-c20512710ea5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.191 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Paused (Lifecycle Event)#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.209 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.213 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.232 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 11:59:37 np0005535469 podman[376748]: 2025-11-25 16:59:37.378346813 +0000 UTC m=+0.080361840 container create 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:59:37 np0005535469 systemd[1]: Started libpod-conmon-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99.scope.
Nov 25 11:59:37 np0005535469 podman[376748]: 2025-11-25 16:59:37.34445278 +0000 UTC m=+0.046467847 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 11:59:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b995b572a86953529648503ae39e65ee754ec7d4b7a8548926206da37e805c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:37 np0005535469 podman[376748]: 2025-11-25 16:59:37.486157949 +0000 UTC m=+0.188172976 container init 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:59:37 np0005535469 podman[376748]: 2025-11-25 16:59:37.496915502 +0000 UTC m=+0.198930509 container start 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:59:37 np0005535469 nova_compute[254092]: 2025-11-25 16:59:37.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 11:59:37 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : New worker (376769) forked
Nov 25 11:59:37 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : Loading success.
Nov 25 11:59:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 11:59:39 np0005535469 nova_compute[254092]: 2025-11-25 16:59:39.585 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:59:40
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:59:40 np0005535469 nova_compute[254092]: 2025-11-25 16:59:40.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 11:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 11:59:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 11:59:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.585 254096 DEBUG nova.compute.manager [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG oslo_concurrency.lockutils [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG oslo_concurrency.lockutils [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG oslo_concurrency.lockutils [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG nova.compute.manager [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Processing event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.587 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.591 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089984.5914507, d643dc57-9536-4a67-9a17-c20512710ea5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.591 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.593 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.595 254096 INFO nova.virt.libvirt.driver [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance spawned successfully.#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.595 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.618 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.622 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.623 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.623 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.623 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.624 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.624 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.628 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.663 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.711 254096 INFO nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 13.73 seconds to spawn the instance on the hypervisor.
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.711 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.787 254096 INFO nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 14.66 seconds to build instance.
Nov 25 11:59:44 np0005535469 nova_compute[254092]: 2025-11-25 16:59:44.815 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:59:45 np0005535469 nova_compute[254092]: 2025-11-25 16:59:45.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 11:59:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:46.290 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:59:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:46.291 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.682 254096 DEBUG nova.compute.manager [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG oslo_concurrency.lockutils [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG oslo_concurrency.lockutils [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG oslo_concurrency.lockutils [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG nova.compute.manager [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] No waiting events found dispatching network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:59:46 np0005535469 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 WARNING nova.compute.manager [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received unexpected event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 for instance with vm_state active and task_state None.
Nov 25 11:59:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 12 KiB/s wr, 13 op/s
Nov 25 11:59:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 16:59:49.293 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:59:49 np0005535469 nova_compute[254092]: 2025-11-25 16:59:49.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 11:59:50 np0005535469 nova_compute[254092]: 2025-11-25 16:59:50.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:50 np0005535469 NetworkManager[48891]: <info>  [1764089990.8900] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Nov 25 11:59:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:50Z|01151|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 11:59:50 np0005535469 nova_compute[254092]: 2025-11-25 16:59:50.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:50 np0005535469 NetworkManager[48891]: <info>  [1764089990.8917] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Nov 25 11:59:50 np0005535469 nova_compute[254092]: 2025-11-25 16:59:50.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:50 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:50Z|01152|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 11:59:50 np0005535469 nova_compute[254092]: 2025-11-25 16:59:50.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 11:59:51 np0005535469 nova_compute[254092]: 2025-11-25 16:59:51.519 254096 DEBUG nova.compute.manager [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-changed-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:59:51 np0005535469 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG nova.compute.manager [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing instance network info cache due to event network-changed-12c882d8-4cd5-4233-8d3b-650401885991. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:59:51 np0005535469 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG oslo_concurrency.lockutils [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:59:51 np0005535469 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG oslo_concurrency.lockutils [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:59:51 np0005535469 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG nova.network.neutron [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 299318d3-c3e7-4744-b840-9b6797c7f81c does not exist
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c368e4f1-62df-42cd-a7df-7ec2643c664f does not exist
Nov 25 11:59:51 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8482a452-8b31-4723-9aeb-0a687675b3b7 does not exist
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 11:59:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 11:59:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 65 op/s
Nov 25 11:59:52 np0005535469 podman[377050]: 2025-11-25 16:59:52.17004442 +0000 UTC m=+0.027337826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:59:52 np0005535469 podman[377050]: 2025-11-25 16:59:52.27910049 +0000 UTC m=+0.136393856 container create ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 11:59:52 np0005535469 systemd[1]: Started libpod-conmon-ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36.scope.
Nov 25 11:59:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:52 np0005535469 podman[377050]: 2025-11-25 16:59:52.457276754 +0000 UTC m=+0.314570140 container init ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 11:59:52 np0005535469 podman[377050]: 2025-11-25 16:59:52.464482159 +0000 UTC m=+0.321775515 container start ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:59:52 np0005535469 zen_jennings[377067]: 167 167
Nov 25 11:59:52 np0005535469 systemd[1]: libpod-ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36.scope: Deactivated successfully.
Nov 25 11:59:52 np0005535469 podman[377050]: 2025-11-25 16:59:52.641935903 +0000 UTC m=+0.499229289 container attach ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 11:59:52 np0005535469 podman[377050]: 2025-11-25 16:59:52.643163936 +0000 UTC m=+0.500457302 container died ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:59:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-85c109c12f18b211279d6f18e581e27e3849c471c414ee3fc9af08b236fcb541-merged.mount: Deactivated successfully.
Nov 25 11:59:53 np0005535469 podman[377050]: 2025-11-25 16:59:53.440133532 +0000 UTC m=+1.297426928 container remove ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:59:53 np0005535469 systemd[1]: libpod-conmon-ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36.scope: Deactivated successfully.
Nov 25 11:59:53 np0005535469 podman[377092]: 2025-11-25 16:59:53.700418901 +0000 UTC m=+0.090598728 container create 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 11:59:53 np0005535469 podman[377092]: 2025-11-25 16:59:53.651857109 +0000 UTC m=+0.042036936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:59:53 np0005535469 systemd[1]: Started libpod-conmon-6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7.scope.
Nov 25 11:59:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:53 np0005535469 nova_compute[254092]: 2025-11-25 16:59:53.866 254096 DEBUG nova.network.neutron [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated VIF entry in instance network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:59:53 np0005535469 nova_compute[254092]: 2025-11-25 16:59:53.867 254096 DEBUG nova.network.neutron [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:59:53 np0005535469 podman[377092]: 2025-11-25 16:59:53.899089972 +0000 UTC m=+0.289269819 container init 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 11:59:53 np0005535469 nova_compute[254092]: 2025-11-25 16:59:53.906 254096 DEBUG oslo_concurrency.lockutils [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:59:53 np0005535469 podman[377092]: 2025-11-25 16:59:53.909990169 +0000 UTC m=+0.300169956 container start 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 11:59:54 np0005535469 podman[377092]: 2025-11-25 16:59:54.008279516 +0000 UTC m=+0.398459353 container attach 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 11:59:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 25 11:59:54 np0005535469 nova_compute[254092]: 2025-11-25 16:59:54.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:54 np0005535469 friendly_shannon[377109]: --> passed data devices: 0 physical, 3 LVM
Nov 25 11:59:54 np0005535469 friendly_shannon[377109]: --> relative data size: 1.0
Nov 25 11:59:54 np0005535469 friendly_shannon[377109]: --> All data devices are unavailable
Nov 25 11:59:54 np0005535469 systemd[1]: libpod-6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7.scope: Deactivated successfully.
Nov 25 11:59:54 np0005535469 podman[377092]: 2025-11-25 16:59:54.932268322 +0000 UTC m=+1.322448129 container died 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 11:59:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018-merged.mount: Deactivated successfully.
Nov 25 11:59:55 np0005535469 podman[377092]: 2025-11-25 16:59:55.111055911 +0000 UTC m=+1.501235748 container remove 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:59:55 np0005535469 systemd[1]: libpod-conmon-6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7.scope: Deactivated successfully.
Nov 25 11:59:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 11:59:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283618193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 11:59:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 11:59:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283618193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 11:59:55 np0005535469 nova_compute[254092]: 2025-11-25 16:59:55.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:55 np0005535469 podman[377293]: 2025-11-25 16:59:55.790836276 +0000 UTC m=+0.062095223 container create 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:59:55 np0005535469 podman[377293]: 2025-11-25 16:59:55.749828009 +0000 UTC m=+0.021086966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:59:55 np0005535469 systemd[1]: Started libpod-conmon-4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae.scope.
Nov 25 11:59:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:55 np0005535469 podman[377293]: 2025-11-25 16:59:55.949054705 +0000 UTC m=+0.220313662 container init 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:59:55 np0005535469 podman[377293]: 2025-11-25 16:59:55.955002297 +0000 UTC m=+0.226261224 container start 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 11:59:55 np0005535469 strange_greider[377309]: 167 167
Nov 25 11:59:55 np0005535469 systemd[1]: libpod-4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae.scope: Deactivated successfully.
Nov 25 11:59:55 np0005535469 podman[377293]: 2025-11-25 16:59:55.985090266 +0000 UTC m=+0.256349233 container attach 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:59:55 np0005535469 podman[377293]: 2025-11-25 16:59:55.985553299 +0000 UTC m=+0.256812236 container died 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:59:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-33726ae32b7b6bb533a44a04cc5e9a6d824182d3e094286ff03e0e2565e829b3-merged.mount: Deactivated successfully.
Nov 25 11:59:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 49 KiB/s wr, 67 op/s
Nov 25 11:59:56 np0005535469 podman[377293]: 2025-11-25 16:59:56.257026853 +0000 UTC m=+0.528285790 container remove 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 11:59:56 np0005535469 systemd[1]: libpod-conmon-4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae.scope: Deactivated successfully.
Nov 25 11:59:56 np0005535469 podman[377335]: 2025-11-25 16:59:56.487918631 +0000 UTC m=+0.076864684 container create 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:59:56 np0005535469 podman[377335]: 2025-11-25 16:59:56.441199029 +0000 UTC m=+0.030145082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:59:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 11:59:56 np0005535469 systemd[1]: Started libpod-conmon-11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296.scope.
Nov 25 11:59:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:56 np0005535469 podman[377335]: 2025-11-25 16:59:56.636197159 +0000 UTC m=+0.225143232 container init 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 11:59:56 np0005535469 podman[377335]: 2025-11-25 16:59:56.644375122 +0000 UTC m=+0.233321175 container start 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:59:56 np0005535469 podman[377335]: 2025-11-25 16:59:56.668925921 +0000 UTC m=+0.257872054 container attach 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 11:59:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:57Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:dd:57 10.100.0.3
Nov 25 11:59:57 np0005535469 ovn_controller[153477]: 2025-11-25T16:59:57Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:dd:57 10.100.0.3
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]: {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:    "0": [
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:        {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "devices": [
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "/dev/loop3"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            ],
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_name": "ceph_lv0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_size": "21470642176",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "name": "ceph_lv0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "tags": {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cluster_name": "ceph",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.crush_device_class": "",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.encrypted": "0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osd_id": "0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.type": "block",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.vdo": "0"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            },
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "type": "block",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "vg_name": "ceph_vg0"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:        }
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:    ],
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:    "1": [
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:        {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "devices": [
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "/dev/loop4"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            ],
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_name": "ceph_lv1",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_size": "21470642176",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "name": "ceph_lv1",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "tags": {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cluster_name": "ceph",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.crush_device_class": "",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.encrypted": "0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osd_id": "1",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.type": "block",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.vdo": "0"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            },
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "type": "block",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "vg_name": "ceph_vg1"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:        }
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:    ],
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:    "2": [
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:        {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "devices": [
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "/dev/loop5"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            ],
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_name": "ceph_lv2",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_size": "21470642176",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "name": "ceph_lv2",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "tags": {
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cephx_lockbox_secret": "",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.cluster_name": "ceph",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.crush_device_class": "",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.encrypted": "0",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osd_id": "2",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.type": "block",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:                "ceph.vdo": "0"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            },
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "type": "block",
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:            "vg_name": "ceph_vg2"
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:        }
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]:    ]
Nov 25 11:59:57 np0005535469 vigorous_rubin[377352]: }
Nov 25 11:59:57 np0005535469 systemd[1]: libpod-11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296.scope: Deactivated successfully.
Nov 25 11:59:57 np0005535469 podman[377335]: 2025-11-25 16:59:57.408857103 +0000 UTC m=+0.997803156 container died 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 11:59:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8-merged.mount: Deactivated successfully.
Nov 25 11:59:57 np0005535469 podman[377335]: 2025-11-25 16:59:57.619187292 +0000 UTC m=+1.208133325 container remove 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 11:59:57 np0005535469 systemd[1]: libpod-conmon-11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296.scope: Deactivated successfully.
Nov 25 11:59:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 49 KiB/s wr, 63 op/s
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.287873265 +0000 UTC m=+0.106699847 container create cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.210065855 +0000 UTC m=+0.028892507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:59:58 np0005535469 systemd[1]: Started libpod-conmon-cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b.scope.
Nov 25 11:59:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.39381761 +0000 UTC m=+0.212644202 container init cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.400273926 +0000 UTC m=+0.219100538 container start cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 11:59:58 np0005535469 condescending_perlman[377529]: 167 167
Nov 25 11:59:58 np0005535469 systemd[1]: libpod-cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b.scope: Deactivated successfully.
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.416162019 +0000 UTC m=+0.234988621 container attach cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.416827427 +0000 UTC m=+0.235654029 container died cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 11:59:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bf6a85e3ade79185c54e11427104d69549b17203139983356917871973e7491f-merged.mount: Deactivated successfully.
Nov 25 11:59:58 np0005535469 podman[377513]: 2025-11-25 16:59:58.556337147 +0000 UTC m=+0.375163759 container remove cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 11:59:58 np0005535469 systemd[1]: libpod-conmon-cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b.scope: Deactivated successfully.
Nov 25 11:59:58 np0005535469 podman[377555]: 2025-11-25 16:59:58.748455689 +0000 UTC m=+0.058473644 container create 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 11:59:58 np0005535469 podman[377555]: 2025-11-25 16:59:58.710707851 +0000 UTC m=+0.020725826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 11:59:58 np0005535469 systemd[1]: Started libpod-conmon-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope.
Nov 25 11:59:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 11:59:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 11:59:58 np0005535469 podman[377555]: 2025-11-25 16:59:58.893299534 +0000 UTC m=+0.203317569 container init 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:59:58 np0005535469 podman[377555]: 2025-11-25 16:59:58.900529541 +0000 UTC m=+0.210547526 container start 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 11:59:58 np0005535469 podman[377555]: 2025-11-25 16:59:58.941683241 +0000 UTC m=+0.251701276 container attach 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 11:59:59 np0005535469 nova_compute[254092]: 2025-11-25 16:59:59.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]: {
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "osd_id": 1,
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "type": "bluestore"
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:    },
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "osd_id": 2,
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "type": "bluestore"
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:    },
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "osd_id": 0,
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:        "type": "bluestore"
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]:    }
Nov 25 11:59:59 np0005535469 heuristic_swirles[377571]: }
Nov 25 11:59:59 np0005535469 systemd[1]: libpod-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope: Deactivated successfully.
Nov 25 11:59:59 np0005535469 podman[377555]: 2025-11-25 16:59:59.90137828 +0000 UTC m=+1.211396245 container died 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 11:59:59 np0005535469 systemd[1]: libpod-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope: Consumed 1.003s CPU time.
Nov 25 11:59:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453-merged.mount: Deactivated successfully.
Nov 25 12:00:00 np0005535469 podman[377555]: 2025-11-25 17:00:00.049111794 +0000 UTC m=+1.359129749 container remove 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 12:00:00 np0005535469 systemd[1]: libpod-conmon-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope: Deactivated successfully.
Nov 25 12:00:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:00:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:00:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:00:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:00:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5b0b0f24-b4bb-4a12-822e-03137a56220d does not exist
Nov 25 12:00:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 41fa2b6d-3c3d-4512-a448-85c15d073042 does not exist
Nov 25 12:00:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 112 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Nov 25 12:00:00 np0005535469 nova_compute[254092]: 2025-11-25 17:00:00.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.658502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090001658529, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 257, "total_data_size": 774169, "memory_usage": 787768, "flush_reason": "Manual Compaction"}
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090001853912, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 767463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47349, "largest_seqno": 48004, "table_properties": {"data_size": 763908, "index_size": 1399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7908, "raw_average_key_size": 18, "raw_value_size": 756830, "raw_average_value_size": 1806, "num_data_blocks": 62, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089952, "oldest_key_time": 1764089952, "file_creation_time": 1764090001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 195462 microseconds, and 2550 cpu microseconds.
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.853958) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 767463 bytes OK
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.853979) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990218) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990260) EVENT_LOG_v1 {"time_micros": 1764090001990250, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990282) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 770640, prev total WAL file size 770640, number of live WAL files 2.
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990967) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303133' seq:0, type:0; will stop at (end)
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(749KB)], [104(10MB)]
Nov 25 12:00:01 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090001990990, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11336674, "oldest_snapshot_seqno": -1}
Nov 25 12:00:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7003 keys, 11207180 bytes, temperature: kUnknown
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090002266774, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11207180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11157705, "index_size": 30867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182338, "raw_average_key_size": 26, "raw_value_size": 11029466, "raw_average_value_size": 1574, "num_data_blocks": 1216, "num_entries": 7003, "num_filter_entries": 7003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.267131) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11207180 bytes
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.343937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.1 rd, 40.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(29.4) write-amplify(14.6) OK, records in: 7529, records dropped: 526 output_compression: NoCompression
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.343974) EVENT_LOG_v1 {"time_micros": 1764090002343961, "job": 62, "event": "compaction_finished", "compaction_time_micros": 275937, "compaction_time_cpu_micros": 24437, "output_level": 6, "num_output_files": 1, "total_output_size": 11207180, "num_input_records": 7529, "num_output_records": 7003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090002344274, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090002345998, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 12:00:04 np0005535469 nova_compute[254092]: 2025-11-25 17:00:04.332 254096 INFO nova.compute.manager [None req-5c54b0bc-a485-4cf5-bfb4-fbc2f3ecabda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Get console output#033[00m
Nov 25 12:00:04 np0005535469 nova_compute[254092]: 2025-11-25 17:00:04.339 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:00:04 np0005535469 nova_compute[254092]: 2025-11-25 17:00:04.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:05 np0005535469 nova_compute[254092]: 2025-11-25 17:00:05.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 12:00:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:07 np0005535469 podman[377668]: 2025-11-25 17:00:07.653340811 +0000 UTC m=+0.063954353 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 12:00:07 np0005535469 podman[377667]: 2025-11-25 17:00:07.664527616 +0000 UTC m=+0.077452141 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 12:00:07 np0005535469 podman[377669]: 2025-11-25 17:00:07.689481395 +0000 UTC m=+0.098774111 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:00:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 12:00:09 np0005535469 nova_compute[254092]: 2025-11-25 17:00:09.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:00:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:00:10 np0005535469 nova_compute[254092]: 2025-11-25 17:00:10.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 625 KiB/s wr, 20 op/s
Nov 25 12:00:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:13.636 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:13.637 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:13.638 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 13 KiB/s wr, 0 op/s
Nov 25 12:00:14 np0005535469 nova_compute[254092]: 2025-11-25 17:00:14.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:15 np0005535469 nova_compute[254092]: 2025-11-25 17:00:15.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 12:00:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 12:00:19 np0005535469 nova_compute[254092]: 2025-11-25 17:00:19.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 12:00:20 np0005535469 nova_compute[254092]: 2025-11-25 17:00:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:20 np0005535469 nova_compute[254092]: 2025-11-25 17:00:20.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.408 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.408 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.422 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.511 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.511 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.520 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.520 254096 INFO nova.compute.claims [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:00:21 np0005535469 nova_compute[254092]: 2025-11-25 17:00:21.620 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.642491) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021642512, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 396, "num_deletes": 251, "total_data_size": 293062, "memory_usage": 300208, "flush_reason": "Manual Compaction"}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021645497, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 237486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48005, "largest_seqno": 48400, "table_properties": {"data_size": 235205, "index_size": 445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6163, "raw_average_key_size": 20, "raw_value_size": 230675, "raw_average_value_size": 756, "num_data_blocks": 20, "num_entries": 305, "num_filter_entries": 305, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090002, "oldest_key_time": 1764090002, "file_creation_time": 1764090021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 3034 microseconds, and 878 cpu microseconds.
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.645525) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 237486 bytes OK
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.645537) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646534) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646542) EVENT_LOG_v1 {"time_micros": 1764090021646539, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646554) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 290543, prev total WAL file size 290543, number of live WAL files 2.
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373538' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(231KB)], [107(10MB)]
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021647038, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11444666, "oldest_snapshot_seqno": -1}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6803 keys, 8194578 bytes, temperature: kUnknown
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021712513, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 8194578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8151080, "index_size": 25402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 178356, "raw_average_key_size": 26, "raw_value_size": 8030952, "raw_average_value_size": 1180, "num_data_blocks": 988, "num_entries": 6803, "num_filter_entries": 6803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.712771) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 8194578 bytes
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.714593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.7 rd, 125.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(82.7) write-amplify(34.5) OK, records in: 7308, records dropped: 505 output_compression: NoCompression
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.714610) EVENT_LOG_v1 {"time_micros": 1764090021714602, "job": 64, "event": "compaction_finished", "compaction_time_micros": 65499, "compaction_time_cpu_micros": 27762, "output_level": 6, "num_output_files": 1, "total_output_size": 8194578, "num_input_records": 7308, "num_output_records": 6803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021714881, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021716687, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:00:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:00:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494636634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.157 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.166 254096 DEBUG nova.compute.provider_tree [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.181 254096 DEBUG nova.scheduler.client.report [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.210 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.211 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:00:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s wr, 0 op/s
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.252 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.253 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.269 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.281 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.372 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.373 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.374 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Creating image(s)#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.406 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.434 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.463 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.468 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.515 254096 DEBUG nova.policy [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.555 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.556 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.557 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.558 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.587 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.592 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 810cd712-872a-49a6-bf45-04a319ba8d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:22 np0005535469 nova_compute[254092]: 2025-11-25 17:00:22.981 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 810cd712-872a-49a6-bf45-04a319ba8d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.046 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.163 254096 DEBUG nova.objects.instance [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 810cd712-872a-49a6-bf45-04a319ba8d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.175 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.176 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Ensure instance console log exists: /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.176 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.177 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.177 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.223 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Successfully created port: fae63c22-d59b-4f75-9a0f-23ee49042eb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:00:23 np0005535469 nova_compute[254092]: 2025-11-25 17:00:23.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.263 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Successfully updated port: fae63c22-d59b-4f75-9a0f-23ee49042eb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.280 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.280 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.281 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.443 254096 DEBUG nova.compute.manager [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-changed-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.444 254096 DEBUG nova.compute.manager [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Refreshing instance network info cache due to event network-changed-fae63c22-d59b-4f75-9a0f-23ee49042eb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.444 254096 DEBUG oslo_concurrency.lockutils [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.526 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:00:24 np0005535469 nova_compute[254092]: 2025-11-25 17:00:24.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.599 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updating instance_info_cache with network_info: [{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.736 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.737 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance network_info: |[{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.737 254096 DEBUG oslo_concurrency.lockutils [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.738 254096 DEBUG nova.network.neutron [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Refreshing network info cache for port fae63c22-d59b-4f75-9a0f-23ee49042eb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.740 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start _get_guest_xml network_info=[{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.745 254096 WARNING nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.748 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.748 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.766 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.767 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.767 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.768 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.768 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.770 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.770 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.770 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.771 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.771 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:00:25 np0005535469 nova_compute[254092]: 2025-11-25 17:00:25.773 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:00:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:00:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098099446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.264 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.292 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.297 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:00:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015243251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.735 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.739 254096 DEBUG nova.virt.libvirt.vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-869327373',display_name='tempest-TestNetworkBasicOps-server-869327373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-869327373',id=112,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuOSoLJisGupCr6c7mO9PpNQ98EI+RPV6Pzjh9gT1H1Xu3OmtIley7lcmeOBktNLnaIXezfnI8n1+SWLXO5zvxz2wr1BCBoDOK2e1yKu9xR1PRa/EJoxyZJU+8n+8oakg==',key_name='tempest-TestNetworkBasicOps-1211766568',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-fdgy2dxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:22Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=810cd712-872a-49a6-bf45-04a319ba8d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.740 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.744 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.747 254096 DEBUG nova.objects.instance [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 810cd712-872a-49a6-bf45-04a319ba8d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.767 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <uuid>810cd712-872a-49a6-bf45-04a319ba8d57</uuid>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <name>instance-00000070</name>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-869327373</nova:name>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:00:25</nova:creationTime>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <nova:port uuid="fae63c22-d59b-4f75-9a0f-23ee49042eb6">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <entry name="serial">810cd712-872a-49a6-bf45-04a319ba8d57</entry>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <entry name="uuid">810cd712-872a-49a6-bf45-04a319ba8d57</entry>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/810cd712-872a-49a6-bf45-04a319ba8d57_disk">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/810cd712-872a-49a6-bf45-04a319ba8d57_disk.config">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f3:c0:ff"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <target dev="tapfae63c22-d5"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/console.log" append="off"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:00:26 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:00:26 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:00:26 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:00:26 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.770 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Preparing to wait for external event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.770 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.771 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.771 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.772 254096 DEBUG nova.virt.libvirt.vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-869327373',display_name='tempest-TestNetworkBasicOps-server-869327373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-869327373',id=112,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuOSoLJisGupCr6c7mO9PpNQ98EI+RPV6Pzjh9gT1H1Xu3OmtIley7lcmeOBktNLnaIXezfnI8n1+SWLXO5zvxz2wr1BCBoDOK2e1yKu9xR1PRa/EJoxyZJU+8n+8oakg==',key_name='tempest-TestNetworkBasicOps-1211766568',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-fdgy2dxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:22Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=810cd712-872a-49a6-bf45-04a319ba8d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.773 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.773 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.774 254096 DEBUG os_vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.775 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.776 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.779 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfae63c22-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.780 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfae63c22-d5, col_values=(('external_ids', {'iface-id': 'fae63c22-d59b-4f75-9a0f-23ee49042eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:c0:ff', 'vm-uuid': '810cd712-872a-49a6-bf45-04a319ba8d57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:26 np0005535469 NetworkManager[48891]: <info>  [1764090026.8302] manager: (tapfae63c22-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.841 254096 INFO os_vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5')#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.902 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.903 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.904 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:f3:c0:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.904 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Using config drive#033[00m
Nov 25 12:00:26 np0005535469 nova_compute[254092]: 2025-11-25 17:00:26.927 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.347 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Creating config drive at /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.351 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaemv3g6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.508 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaemv3g6x" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.539 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.543 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.709 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.711 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deleting local config drive /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config because it was imported into RBD.#033[00m
Nov 25 12:00:27 np0005535469 kernel: tapfae63c22-d5: entered promiscuous mode
Nov 25 12:00:27 np0005535469 NetworkManager[48891]: <info>  [1764090027.7814] manager: (tapfae63c22-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Nov 25 12:00:27 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:27Z|01153|binding|INFO|Claiming lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 for this chassis.
Nov 25 12:00:27 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:27Z|01154|binding|INFO|fae63c22-d59b-4f75-9a0f-23ee49042eb6: Claiming fa:16:3e:f3:c0:ff 10.100.0.26
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.801 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c0:ff 10.100.0.26'], port_security=['fa:16:3e:f3:c0:ff 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '810cd712-872a-49a6-bf45-04a319ba8d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02789c4f-78fd-4a08-94c2-0c5eb3bf492c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be501355-0d69-488a-a6b4-4788d24c4a8e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fae63c22-d59b-4f75-9a0f-23ee49042eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.805 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fae63c22-d59b-4f75-9a0f-23ee49042eb6 in datapath a6fc284a-0332-49e7-8f7e-5297640d0e32 bound to our chassis#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.808 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a6fc284a-0332-49e7-8f7e-5297640d0e32#033[00m
Nov 25 12:00:27 np0005535469 systemd-udevd[378056]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.834 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1e26c557-f426-4f79-979a-91d4488f71de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.836 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa6fc284a-01 in ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.841 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa6fc284a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.841 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[462f04d5-5526-4aea-bb41-6bc5b89744bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.843 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc79a9ee-0d94-4f4b-9d67-db35fd44f3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.846 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:27 np0005535469 NetworkManager[48891]: <info>  [1764090027.8512] device (tapfae63c22-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:00:27 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:27Z|01155|binding|INFO|Setting lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 ovn-installed in OVS
Nov 25 12:00:27 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:27Z|01156|binding|INFO|Setting lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 up in Southbound
Nov 25 12:00:27 np0005535469 NetworkManager[48891]: <info>  [1764090027.8528] device (tapfae63c22-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:00:27 np0005535469 nova_compute[254092]: 2025-11-25 17:00:27.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:27 np0005535469 systemd-machined[216343]: New machine qemu-144-instance-00000070.
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.859 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[08d64a01-0dd9-402a-93d3-6830225c5e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 systemd[1]: Started Virtual Machine qemu-144-instance-00000070.
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.882 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f685932-9ca2-49d4-8a5f-fda95e14c28b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8a321a-dcaf-4406-aa06-31f4e179a091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 NetworkManager[48891]: <info>  [1764090027.9249] manager: (tapa6fc284a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/476)
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[74fd1be6-746e-4be7-8a0f-2c5f0bb4ee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.961 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9c5d58-7302-4eaa-9445-2ba4441ab7b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.964 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5981bf0b-c137-444b-8dfc-154ba9ee9800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:27 np0005535469 NetworkManager[48891]: <info>  [1764090027.9970] device (tapa6fc284a-00): carrier: link connected
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.002 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5b5cf4-cbd2-481a-b4c2-e5459dd19409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f08dafd8-f334-4ccd-b100-0f09cc7d5dc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6fc284a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:d9:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648558, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378089, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17735c4c-0356-43ed-8a0d-7694a8f98823]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:d91d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 648558, 'tstamp': 648558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378090, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.073 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c40128cd-d9f6-4c34-8f56-1efb052e5b74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6fc284a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:d9:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648558, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378091, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27b97303-c6d6-4cdc-a17b-1a0e26aefff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d72c3-305c-4f7c-83f3-f6e400109e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6fc284a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.196 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6fc284a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:28 np0005535469 kernel: tapa6fc284a-00: entered promiscuous mode
Nov 25 12:00:28 np0005535469 NetworkManager[48891]: <info>  [1764090028.1985] manager: (tapa6fc284a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.200 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa6fc284a-00, col_values=(('external_ids', {'iface-id': '9a1f1538-db9c-44f0-800c-07be1e2da375'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:28 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:28Z|01157|binding|INFO|Releasing lport 9a1f1538-db9c-44f0-800c-07be1e2da375 from this chassis (sb_readonly=0)
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.203 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a6fc284a-0332-49e7-8f7e-5297640d0e32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a6fc284a-0332-49e7-8f7e-5297640d0e32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5819e859-7b41-44d9-b478-a0566c10debc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.205 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-a6fc284a-0332-49e7-8f7e-5297640d0e32
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/a6fc284a-0332-49e7-8f7e-5297640d0e32.pid.haproxy
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID a6fc284a-0332-49e7-8f7e-5297640d0e32
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:00:28 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.207 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'env', 'PROCESS_TAG=haproxy-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a6fc284a-0332-49e7-8f7e-5297640d0e32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.582 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.638 254096 DEBUG nova.compute.manager [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.638 254096 DEBUG oslo_concurrency.lockutils [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.638 254096 DEBUG oslo_concurrency.lockutils [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.639 254096 DEBUG oslo_concurrency.lockutils [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.639 254096 DEBUG nova.compute.manager [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Processing event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.688 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090028.688423, 810cd712-872a-49a6-bf45-04a319ba8d57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.689 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Started (Lifecycle Event)#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.691 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.701 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.713 254096 INFO nova.virt.libvirt.driver [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance spawned successfully.#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.716 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.725 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.750 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.751 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.752 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.752 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.753 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.754 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.761 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.761 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090028.6886623, 810cd712-872a-49a6-bf45-04a319ba8d57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.762 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.789 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.794 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090028.6952353, 810cd712-872a-49a6-bf45-04a319ba8d57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.794 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.818 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.823 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.831 254096 INFO nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 6.46 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.832 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.846 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.911 254096 INFO nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 7.44 seconds to build instance.#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.927 254096 DEBUG nova.network.neutron [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updated VIF entry in instance network info cache for port fae63c22-d59b-4f75-9a0f-23ee49042eb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.928 254096 DEBUG nova.network.neutron [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updating instance_info_cache with network_info: [{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.931 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:28 np0005535469 podman[378185]: 2025-11-25 17:00:28.940129143 +0000 UTC m=+0.056746787 container create b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:00:28 np0005535469 nova_compute[254092]: 2025-11-25 17:00:28.941 254096 DEBUG oslo_concurrency.lockutils [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:28 np0005535469 systemd[1]: Started libpod-conmon-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4.scope.
Nov 25 12:00:29 np0005535469 podman[378185]: 2025-11-25 17:00:28.91031342 +0000 UTC m=+0.026931114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:00:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:00:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0170379c24e2e0c12161dc8a7845072d42d78dc45d297c38ef19f95a48d9a9cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:00:29 np0005535469 podman[378185]: 2025-11-25 17:00:29.052352479 +0000 UTC m=+0.168970203 container init b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:00:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:00:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054360708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:00:29 np0005535469 podman[378185]: 2025-11-25 17:00:29.060749038 +0000 UTC m=+0.177366722 container start b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.085 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:29 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : New worker (378209) forked
Nov 25 12:00:29 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : Loading success.
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.353 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.354 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.359 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.359 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.581 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.583 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.92196273803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.646 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance d643dc57-9536-4a67-9a17-c20512710ea5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.646 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 810cd712-872a-49a6-bf45-04a319ba8d57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.646 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.647 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.828 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.848 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.848 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.870 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.898 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:00:29 np0005535469 nova_compute[254092]: 2025-11-25 17:00:29.952 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 12:00:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:00:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959655731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.425 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.432 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.447 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.482 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.483 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.802 254096 DEBUG nova.compute.manager [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.802 254096 DEBUG oslo_concurrency.lockutils [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.802 254096 DEBUG oslo_concurrency.lockutils [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.803 254096 DEBUG oslo_concurrency.lockutils [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.803 254096 DEBUG nova.compute.manager [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] No waiting events found dispatching network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:00:30 np0005535469 nova_compute[254092]: 2025-11-25 17:00:30.803 254096 WARNING nova.compute.manager [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received unexpected event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:00:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:31 np0005535469 nova_compute[254092]: 2025-11-25 17:00:31.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 12:00:32 np0005535469 nova_compute[254092]: 2025-11-25 17:00:32.478 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:32 np0005535469 nova_compute[254092]: 2025-11-25 17:00:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 12:00:34 np0005535469 nova_compute[254092]: 2025-11-25 17:00:34.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 12:00:36 np0005535469 nova_compute[254092]: 2025-11-25 17:00:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:00:36 np0005535469 nova_compute[254092]: 2025-11-25 17:00:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:00:36 np0005535469 nova_compute[254092]: 2025-11-25 17:00:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:00:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:36 np0005535469 nova_compute[254092]: 2025-11-25 17:00:36.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:37 np0005535469 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:37 np0005535469 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:37 np0005535469 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:00:37 np0005535469 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:37 np0005535469 nova_compute[254092]: 2025-11-25 17:00:37.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:00:38 np0005535469 nova_compute[254092]: 2025-11-25 17:00:38.522 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:38 np0005535469 nova_compute[254092]: 2025-11-25 17:00:38.532 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:38 np0005535469 nova_compute[254092]: 2025-11-25 17:00:38.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:00:38 np0005535469 podman[378242]: 2025-11-25 17:00:38.638224579 +0000 UTC m=+0.053173709 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:00:38 np0005535469 podman[378241]: 2025-11-25 17:00:38.692382804 +0000 UTC m=+0.107602641 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:00:38 np0005535469 podman[378243]: 2025-11-25 17:00:38.709401817 +0000 UTC m=+0.115844785 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 12:00:39 np0005535469 nova_compute[254092]: 2025-11-25 17:00:39.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:00:40
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'default.rgw.log', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:00:40 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:00:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:41 np0005535469 nova_compute[254092]: 2025-11-25 17:00:41.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 170 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 576 KiB/s wr, 58 op/s
Nov 25 12:00:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:42Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:c0:ff 10.100.0.26
Nov 25 12:00:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:42Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:c0:ff 10.100.0.26
Nov 25 12:00:42 np0005535469 nova_compute[254092]: 2025-11-25 17:00:42.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 170 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 575 KiB/s wr, 24 op/s
Nov 25 12:00:44 np0005535469 nova_compute[254092]: 2025-11-25 17:00:44.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 12:00:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:46 np0005535469 nova_compute[254092]: 2025-11-25 17:00:46.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.834 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.835 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.859 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.963 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.964 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.972 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:00:48 np0005535469 nova_compute[254092]: 2025-11-25 17:00:48.973 254096 INFO nova.compute.claims [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.078 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:00:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931076854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.557 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.563 254096 DEBUG nova.compute.provider_tree [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.586 254096 DEBUG nova.scheduler.client.report [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.607 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.662 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.663 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.686 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.704 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.831 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.832 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.832 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Creating image(s)#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.855 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.877 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.902 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.906 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.944 254096 DEBUG nova.policy [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.978 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.980 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.981 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:49 np0005535469 nova_compute[254092]: 2025-11-25 17:00:49.981 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.009 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.013 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 caf64ca2-5f73-454a-8442-9965c9853cba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.134 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.135 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.136 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.136 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.136 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.138 254096 INFO nova.compute.manager [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Terminating instance#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.139 254096 DEBUG nova.compute.manager [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:00:50 np0005535469 kernel: tapfae63c22-d5 (unregistering): left promiscuous mode
Nov 25 12:00:50 np0005535469 NetworkManager[48891]: <info>  [1764090050.2093] device (tapfae63c22-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:00:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:00:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:50Z|01158|binding|INFO|Releasing lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 from this chassis (sb_readonly=0)
Nov 25 12:00:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:50Z|01159|binding|INFO|Setting lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 down in Southbound
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:50Z|01160|binding|INFO|Removing iface tapfae63c22-d5 ovn-installed in OVS
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.282 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c0:ff 10.100.0.26'], port_security=['fa:16:3e:f3:c0:ff 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '810cd712-872a-49a6-bf45-04a319ba8d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02789c4f-78fd-4a08-94c2-0c5eb3bf492c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be501355-0d69-488a-a6b4-4788d24c4a8e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fae63c22-d59b-4f75-9a0f-23ee49042eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.284 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fae63c22-d59b-4f75-9a0f-23ee49042eb6 in datapath a6fc284a-0332-49e7-8f7e-5297640d0e32 unbound from our chassis#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.285 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a6fc284a-0332-49e7-8f7e-5297640d0e32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.287 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8de2816b-d4f0-40f0-975b-8abf1c60fdd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.287 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 namespace which is not needed anymore#033[00m
Nov 25 12:00:50 np0005535469 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 25 12:00:50 np0005535469 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Consumed 12.944s CPU time.
Nov 25 12:00:50 np0005535469 systemd-machined[216343]: Machine qemu-144-instance-00000070 terminated.
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.356 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 caf64ca2-5f73-454a-8442-9965c9853cba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:50 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : haproxy version is 2.8.14-c23fe91
Nov 25 12:00:50 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : path to executable is /usr/sbin/haproxy
Nov 25 12:00:50 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [WARNING]  (378206) : Exiting Master process...
Nov 25 12:00:50 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [WARNING]  (378206) : Exiting Master process...
Nov 25 12:00:50 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [ALERT]    (378206) : Current worker (378209) exited with code 143 (Terminated)
Nov 25 12:00:50 np0005535469 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [WARNING]  (378206) : All workers exited. Exiting... (0)
Nov 25 12:00:50 np0005535469 systemd[1]: libpod-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4.scope: Deactivated successfully.
Nov 25 12:00:50 np0005535469 podman[378447]: 2025-11-25 17:00:50.42090819 +0000 UTC m=+0.046012635 container died b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.422 254096 INFO nova.virt.libvirt.driver [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance destroyed successfully.#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.424 254096 DEBUG nova.objects.instance [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 810cd712-872a-49a6-bf45-04a319ba8d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.428 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:00:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4-userdata-shm.mount: Deactivated successfully.
Nov 25 12:00:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0170379c24e2e0c12161dc8a7845072d42d78dc45d297c38ef19f95a48d9a9cb-merged.mount: Deactivated successfully.
Nov 25 12:00:50 np0005535469 podman[378447]: 2025-11-25 17:00:50.457790704 +0000 UTC m=+0.082895139 container cleanup b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:00:50 np0005535469 systemd[1]: libpod-conmon-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4.scope: Deactivated successfully.
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.472 254096 DEBUG nova.virt.libvirt.vif [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-869327373',display_name='tempest-TestNetworkBasicOps-server-869327373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-869327373',id=112,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuOSoLJisGupCr6c7mO9PpNQ98EI+RPV6Pzjh9gT1H1Xu3OmtIley7lcmeOBktNLnaIXezfnI8n1+SWLXO5zvxz2wr1BCBoDOK2e1yKu9xR1PRa/EJoxyZJU+8n+8oakg==',key_name='tempest-TestNetworkBasicOps-1211766568',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:00:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-fdgy2dxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:00:28Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=810cd712-872a-49a6-bf45-04a319ba8d57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.473 254096 DEBUG nova.network.os_vif_util [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.474 254096 DEBUG nova.network.os_vif_util [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.474 254096 DEBUG os_vif [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.476 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfae63c22-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.483 254096 INFO os_vif [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5')#033[00m
Nov 25 12:00:50 np0005535469 podman[378534]: 2025-11-25 17:00:50.521533641 +0000 UTC m=+0.040618518 container remove b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8330e4d-163c-4160-8834-421143777a2a]: (4, ('Tue Nov 25 05:00:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 (b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4)\nb63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4\nTue Nov 25 05:00:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 (b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4)\nb63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.529 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dae655-8fc3-46e3-877f-76df0d2b4c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.530 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6fc284a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.532 254096 DEBUG nova.objects.instance [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid caf64ca2-5f73-454a-8442-9965c9853cba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:50 np0005535469 kernel: tapa6fc284a-00: left promiscuous mode
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.537 254096 DEBUG nova.compute.manager [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-unplugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.538 254096 DEBUG oslo_concurrency.lockutils [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.538 254096 DEBUG oslo_concurrency.lockutils [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.538 254096 DEBUG oslo_concurrency.lockutils [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.539 254096 DEBUG nova.compute.manager [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] No waiting events found dispatching network-vif-unplugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.539 254096 DEBUG nova.compute.manager [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-unplugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.547 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc89bb4a-60fc-4ab8-9300-c5e357a43ff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.547 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.548 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Ensure instance console log exists: /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.549 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.550 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.574 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6621e0a-c456-4054-a604-e2706fffdf23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.575 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[683c1471-1115-4aef-a410-8ef209fc44cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[629447ec-5488-4d78-bb92-359e3c010510]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648550, 'reachable_time': 26144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378583, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 systemd[1]: run-netns-ovnmeta\x2da6fc284a\x2d0332\x2d49e7\x2d8f7e\x2d5297640d0e32.mount: Deactivated successfully.
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.596 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.597 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f11cf6-0ebc-42fa-b328-70341ace36c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.639 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.640 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.741 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Successfully created port: 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.858 254096 INFO nova.virt.libvirt.driver [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deleting instance files /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57_del#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.859 254096 INFO nova.virt.libvirt.driver [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deletion of /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57_del complete#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.905 254096 INFO nova.compute.manager [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.906 254096 DEBUG oslo.service.loopingcall [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.906 254096 DEBUG nova.compute.manager [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:00:50 np0005535469 nova_compute[254092]: 2025-11-25 17:00:50.906 254096 DEBUG nova.network.neutron [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.331 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Successfully updated port: 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.344 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.345 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.345 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.429 254096 DEBUG nova.compute.manager [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.429 254096 DEBUG nova.compute.manager [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing instance network info cache due to event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.429 254096 DEBUG oslo_concurrency.lockutils [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.448 254096 DEBUG nova.network.neutron [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.462 254096 INFO nova.compute.manager [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 0.56 seconds to deallocate network for instance.#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.470 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001519118419124829 of space, bias 1.0, pg target 0.45573552573744874 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:00:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.502 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.503 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:51 np0005535469 nova_compute[254092]: 2025-11-25 17:00:51.585 254096 DEBUG oslo_concurrency.processutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:00:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071352961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.074 254096 DEBUG oslo_concurrency.processutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.079 254096 DEBUG nova.compute.provider_tree [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.090 254096 DEBUG nova.scheduler.client.report [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.106 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.126 254096 INFO nova.scheduler.client.report [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 810cd712-872a-49a6-bf45-04a319ba8d57#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.167 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.183 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.184 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.184 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance network_info: |[{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.185 254096 DEBUG oslo_concurrency.lockutils [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.185 254096 DEBUG nova.network.neutron [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.187 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start _get_guest_xml network_info=[{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.191 254096 WARNING nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.196 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.196 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.202 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.203 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.203 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.203 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.206 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.209 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 185 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Nov 25 12:00:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:52.643 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.650 254096 DEBUG nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.651 254096 DEBUG oslo_concurrency.lockutils [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.651 254096 DEBUG oslo_concurrency.lockutils [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 DEBUG oslo_concurrency.lockutils [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 DEBUG nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] No waiting events found dispatching network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 WARNING nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received unexpected event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 DEBUG nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-deleted-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:00:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131512354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.706 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.733 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:52 np0005535469 nova_compute[254092]: 2025-11-25 17:00:52.737 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399783084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.231 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.233 254096 DEBUG nova.virt.libvirt.vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=113,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC5IGkA0EZcfpUrnJAPAyNPE3aP4ux+1YVZrN6xmNxNmyBv5luv7uNh5XsGgePIRhuMTv5vEwnWkpC0iguYDb2SFlQPUW7qNQRGe9ic9lTfmn148JQBqNQ9VGr6RxpqguQ==',key_name='tempest-TestSecurityGroupsBasicOps-1631979734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-o6t5fl29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:49Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=caf64ca2-5f73-454a-8442-9965c9853cba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.234 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.235 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.236 254096 DEBUG nova.objects.instance [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid caf64ca2-5f73-454a-8442-9965c9853cba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.249 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <uuid>caf64ca2-5f73-454a-8442-9965c9853cba</uuid>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <name>instance-00000071</name>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357</nova:name>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:00:52</nova:creationTime>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <nova:port uuid="1caaa3da-b3eb-4441-b6b2-8eaa71146e77">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <entry name="serial">caf64ca2-5f73-454a-8442-9965c9853cba</entry>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <entry name="uuid">caf64ca2-5f73-454a-8442-9965c9853cba</entry>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/caf64ca2-5f73-454a-8442-9965c9853cba_disk">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/caf64ca2-5f73-454a-8442-9965c9853cba_disk.config">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:97:ec:69"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <target dev="tap1caaa3da-b3"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/console.log" append="off"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:00:53 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:00:53 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:00:53 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:00:53 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.251 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Preparing to wait for external event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.252 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.252 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.252 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.253 254096 DEBUG nova.virt.libvirt.vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=113,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC5IGkA0EZcfpUrnJAPAyNPE3aP4ux+1YVZrN6xmNxNmyBv5luv7uNh5XsGgePIRhuMTv5vEwnWkpC0iguYDb2SFlQPUW7qNQRGe9ic9lTfmn148JQBqNQ9VGr6RxpqguQ==',key_name='tempest-TestSecurityGroupsBasicOps-1631979734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-o6t5fl29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:49Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=caf64ca2-5f73-454a-8442-9965c9853cba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.253 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.254 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.254 254096 DEBUG os_vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.256 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.256 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.260 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1caaa3da-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.260 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1caaa3da-b3, col_values=(('external_ids', {'iface-id': '1caaa3da-b3eb-4441-b6b2-8eaa71146e77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:ec:69', 'vm-uuid': 'caf64ca2-5f73-454a-8442-9965c9853cba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:53 np0005535469 NetworkManager[48891]: <info>  [1764090053.2638] manager: (tap1caaa3da-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.267 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.267 254096 INFO os_vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3')#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.325 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.325 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.326 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:97:ec:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.326 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Using config drive#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.359 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.458 254096 DEBUG nova.network.neutron [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updated VIF entry in instance network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.459 254096 DEBUG nova.network.neutron [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.472 254096 DEBUG oslo_concurrency.lockutils [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.739 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Creating config drive at /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.747 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tfq0hk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.891 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tfq0hk" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.923 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:00:53 np0005535469 nova_compute[254092]: 2025-11-25 17:00:53.929 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config caf64ca2-5f73-454a-8442-9965c9853cba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.068 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config caf64ca2-5f73-454a-8442-9965c9853cba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.068 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deleting local config drive /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config because it was imported into RBD.#033[00m
Nov 25 12:00:54 np0005535469 kernel: tap1caaa3da-b3: entered promiscuous mode
Nov 25 12:00:54 np0005535469 NetworkManager[48891]: <info>  [1764090054.1252] manager: (tap1caaa3da-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Nov 25 12:00:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:54Z|01161|binding|INFO|Claiming lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for this chassis.
Nov 25 12:00:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:54Z|01162|binding|INFO|1caaa3da-b3eb-4441-b6b2-8eaa71146e77: Claiming fa:16:3e:97:ec:69 10.100.0.8
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:ec:69 10.100.0.8'], port_security=['fa:16:3e:97:ec:69 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'caf64ca2-5f73-454a-8442-9965c9853cba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b034178-39ad-4db7-adab-aaf6bc34bd4a e7198f6b-79d7-48d7-845d-93c396c87f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e96b6e1-6935-4458-bc78-50ea3ed2412d, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1caaa3da-b3eb-4441-b6b2-8eaa71146e77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.137 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 in datapath 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 bound to our chassis#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.138 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2#033[00m
Nov 25 12:00:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:54Z|01163|binding|INFO|Setting lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 ovn-installed in OVS
Nov 25 12:00:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:54Z|01164|binding|INFO|Setting lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 up in Southbound
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94112d3c-1606-4766-942d-0ec10c34b944]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.150 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f953cb4-a1 in ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.152 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f953cb4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[22ef31e8-8037-4d86-a672-3ed41aa5f325]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.153 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec9856c-fe68-4231-9ed0-7a447d1b91a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 systemd-machined[216343]: New machine qemu-145-instance-00000071.
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.167 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a991557e-0812-4661-9a3f-cab24d365327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.180 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[147cef6a-29f9-4ea7-99e0-2b4c97c9ede5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 systemd[1]: Started Virtual Machine qemu-145-instance-00000071.
Nov 25 12:00:54 np0005535469 systemd-udevd[378746]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:00:54 np0005535469 NetworkManager[48891]: <info>  [1764090054.2094] device (tap1caaa3da-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:00:54 np0005535469 NetworkManager[48891]: <info>  [1764090054.2105] device (tap1caaa3da-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.216 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e489d2f5-4396-403a-9f2e-631aaf9cd4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 systemd-udevd[378749]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:00:54 np0005535469 NetworkManager[48891]: <info>  [1764090054.2242] manager: (tap0f953cb4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.222 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b503125a-2f5c-4bbf-944c-55b3ef1dbc0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 185 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.256 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3d84b5-5c6b-4b95-b8bb-940c5c3ebde5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.261 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f3926b9e-1680-4d25-8ce1-b47c9dc0875c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 NetworkManager[48891]: <info>  [1764090054.2848] device (tap0f953cb4-a0): carrier: link connected
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.288 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[250eab1f-98ad-410b-a8e8-9276fe68d55f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.304 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a91bf7e4-80a9-4060-a982-ab595424c63c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f953cb4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:94:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651187, 'reachable_time': 29997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378775, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.321 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b78c97ae-d3f0-4af5-bd1a-9d0cc58f793e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:942b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651187, 'tstamp': 651187}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378776, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c7e538-f8ec-46a4-996d-772747a5f503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f953cb4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:94:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651187, 'reachable_time': 29997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378777, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:54Z|01165|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8a4188-4774-46b7-9f10-4d1e902c1ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.385 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.390 254096 DEBUG nova.compute.manager [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG oslo_concurrency.lockutils [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG oslo_concurrency.lockutils [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG oslo_concurrency.lockutils [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG nova.compute.manager [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Processing event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.430 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c5bc34-1dd8-4abe-a716-cc617f3c7ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.431 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f953cb4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.431 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.431 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f953cb4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 NetworkManager[48891]: <info>  [1764090054.4338] manager: (tap0f953cb4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Nov 25 12:00:54 np0005535469 kernel: tap0f953cb4-a0: entered promiscuous mode
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.437 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f953cb4-a0, col_values=(('external_ids', {'iface-id': '2166482b-c36e-4cfe-a45d-40e1c4e6a3e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:54Z|01166|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.458 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[615c5c57-1119-4981-9984-85376d369e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.461 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.pid.haproxy
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:00:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.461 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'env', 'PROCESS_TAG=haproxy-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:54 np0005535469 podman[378842]: 2025-11-25 17:00:54.856011274 +0000 UTC m=+0.044811942 container create 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 12:00:54 np0005535469 systemd[1]: Started libpod-conmon-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72.scope.
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.914 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090054.914092, caf64ca2-5f73-454a-8442-9965c9853cba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.915 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Started (Lifecycle Event)#033[00m
Nov 25 12:00:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.917 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:00:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186dab47ed7f953257a28b5bb609e6cd78bd265db3c45fccf8e31014045c990/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.921 254096 DEBUG nova.compute.manager [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-changed-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.921 254096 DEBUG nova.compute.manager [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing instance network info cache due to event network-changed-12c882d8-4cd5-4233-8d3b-650401885991. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.921 254096 DEBUG oslo_concurrency.lockutils [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.922 254096 DEBUG oslo_concurrency.lockutils [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.922 254096 DEBUG nova.network.neutron [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.923 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.926 254096 INFO nova.virt.libvirt.driver [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance spawned successfully.#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.926 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:00:54 np0005535469 podman[378842]: 2025-11-25 17:00:54.834196309 +0000 UTC m=+0.022997007 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:00:54 np0005535469 podman[378842]: 2025-11-25 17:00:54.932921158 +0000 UTC m=+0.121721856 container init 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:00:54 np0005535469 podman[378842]: 2025-11-25 17:00:54.941101281 +0000 UTC m=+0.129901959 container start 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.947 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.950 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.958 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:54 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : New worker (378872) forked
Nov 25 12:00:54 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : Loading success.
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.959 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.960 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.960 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.960 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.961 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.964 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.965 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090054.914177, caf64ca2-5f73-454a-8442-9965c9853cba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:00:54 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:54.999 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.002 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090054.9194849, caf64ca2-5f73-454a-8442-9965c9853cba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.002 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.020 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.022 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.037 254096 INFO nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 5.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.038 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.042 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.053 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.055 254096 INFO nova.compute.manager [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Terminating instance#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.056 254096 DEBUG nova.compute.manager [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.098 254096 INFO nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 6.17 seconds to build instance.#033[00m
Nov 25 12:00:55 np0005535469 kernel: tap12c882d8-4c (unregistering): left promiscuous mode
Nov 25 12:00:55 np0005535469 NetworkManager[48891]: <info>  [1764090055.1140] device (tap12c882d8-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.116 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:55Z|01167|binding|INFO|Releasing lport 12c882d8-4cd5-4233-8d3b-650401885991 from this chassis (sb_readonly=0)
Nov 25 12:00:55 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:55Z|01168|binding|INFO|Setting lport 12c882d8-4cd5-4233-8d3b-650401885991 down in Southbound
Nov 25 12:00:55 np0005535469 ovn_controller[153477]: 2025-11-25T17:00:55Z|01169|binding|INFO|Removing iface tap12c882d8-4c ovn-installed in OVS
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.123 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.127 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:dd:57 10.100.0.3'], port_security=['fa:16:3e:80:dd:57 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd643dc57-9536-4a67-9a17-c20512710ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f446da-e1c5-4251-b5dd-3071154486f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '055c4680-7ea5-4bc6-a453-5482dfbe9b96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=168e2889-06cc-4097-8922-1d94c15fa45a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12c882d8-4cd5-4233-8d3b-650401885991) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.128 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12c882d8-4cd5-4233-8d3b-650401885991 in datapath 22f446da-e1c5-4251-b5dd-3071154486f0 unbound from our chassis#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.129 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22f446da-e1c5-4251-b5dd-3071154486f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.130 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7bc16c-8339-4040-83c5-01d0449b9a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.131 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 namespace which is not needed anymore#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 25 12:00:55 np0005535469 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d0000006f.scope: Consumed 15.619s CPU time.
Nov 25 12:00:55 np0005535469 systemd-machined[216343]: Machine qemu-143-instance-0000006f terminated.
Nov 25 12:00:55 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : haproxy version is 2.8.14-c23fe91
Nov 25 12:00:55 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : path to executable is /usr/sbin/haproxy
Nov 25 12:00:55 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [WARNING]  (376767) : Exiting Master process...
Nov 25 12:00:55 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [ALERT]    (376767) : Current worker (376769) exited with code 143 (Terminated)
Nov 25 12:00:55 np0005535469 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [WARNING]  (376767) : All workers exited. Exiting... (0)
Nov 25 12:00:55 np0005535469 systemd[1]: libpod-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99.scope: Deactivated successfully.
Nov 25 12:00:55 np0005535469 podman[378901]: 2025-11-25 17:00:55.269958328 +0000 UTC m=+0.043447545 container died 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.291 254096 INFO nova.virt.libvirt.driver [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance destroyed successfully.#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.291 254096 DEBUG nova.objects.instance [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:00:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99-userdata-shm.mount: Deactivated successfully.
Nov 25 12:00:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a7b995b572a86953529648503ae39e65ee754ec7d4b7a8548926206da37e805c-merged.mount: Deactivated successfully.
Nov 25 12:00:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:00:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/291562265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.303 254096 DEBUG nova.virt.libvirt.vif [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:59:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-888996977',display_name='tempest-TestNetworkBasicOps-server-888996977',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-888996977',id=111,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtS45HGfMK2vg5lA1DMxbi57g++ufQ+h4UViHDHhpxxz0TeEAmFiy6LE8nwpuctZL207E2zBi1qnO46vlL2kuhASWctkQ5Cos3uH5AqhXg1h51/mOABxlzeqFcNuWZ38Q==',key_name='tempest-TestNetworkBasicOps-202100109',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:59:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5inbkv7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:59:44Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=d643dc57-9536-4a67-9a17-c20512710ea5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.303 254096 DEBUG nova.network.os_vif_util [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.304 254096 DEBUG nova.network.os_vif_util [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.304 254096 DEBUG os_vif [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:00:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:00:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/291562265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.307 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12c882d8-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 podman[378901]: 2025-11-25 17:00:55.309964417 +0000 UTC m=+0.083453624 container cleanup 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.317 254096 INFO os_vif [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c')#033[00m
Nov 25 12:00:55 np0005535469 systemd[1]: libpod-conmon-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99.scope: Deactivated successfully.
Nov 25 12:00:55 np0005535469 podman[378949]: 2025-11-25 17:00:55.392851074 +0000 UTC m=+0.054159835 container remove 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.399 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d955fe55-9a6b-4b29-b950-d74ed38cdc23]: (4, ('Tue Nov 25 05:00:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 (674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99)\n674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99\nTue Nov 25 05:00:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 (674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99)\n674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.401 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c470c826-a5b7-4307-a661-09277e5ac62c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f446da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 kernel: tap22f446da-e0: left promiscuous mode
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.426 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eade23f2-ff45-4d49-b756-81102d9dca9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37c96e3e-0281-4b48-97e0-76e12c873074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.450 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d96ea4c-5ec9-4d28-88e4-6a169a52e943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.470 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2d165a-324d-4e4c-9582-db5877fc53e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643427, 'reachable_time': 32842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378975, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.474 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:00:55 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.474 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b305dd9e-3f38-4141-b632-90a245abf6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:00:55 np0005535469 systemd[1]: run-netns-ovnmeta\x2d22f446da\x2de1c5\x2d4251\x2db5dd\x2d3071154486f0.mount: Deactivated successfully.
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.724 254096 INFO nova.virt.libvirt.driver [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deleting instance files /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5_del#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.725 254096 INFO nova.virt.libvirt.driver [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deletion of /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5_del complete#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.769 254096 INFO nova.compute.manager [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.770 254096 DEBUG oslo.service.loopingcall [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.770 254096 DEBUG nova.compute.manager [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:00:55 np0005535469 nova_compute[254092]: 2025-11-25 17:00:55.770 254096 DEBUG nova.network.neutron [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:00:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 148 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 3.4 MiB/s wr, 123 op/s
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.259 254096 DEBUG nova.network.neutron [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.271 254096 INFO nova.compute.manager [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 0.50 seconds to deallocate network for instance.#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.310 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.310 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.363 254096 DEBUG oslo_concurrency.processutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.479 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.480 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.480 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.480 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] No waiting events found dispatching network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 WARNING nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received unexpected event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-unplugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] No waiting events found dispatching network-vif-unplugged-12c882d8-4cd5-4233-8d3b-650401885991 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 WARNING nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received unexpected event network-vif-unplugged-12c882d8-4cd5-4233-8d3b-650401885991 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] No waiting events found dispatching network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.484 254096 WARNING nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received unexpected event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.484 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-deleted-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.587 254096 DEBUG nova.network.neutron [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated VIF entry in instance network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.588 254096 DEBUG nova.network.neutron [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.601 254096 DEBUG oslo_concurrency.lockutils [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:00:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:00:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969505326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.815 254096 DEBUG oslo_concurrency.processutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.820 254096 DEBUG nova.compute.provider_tree [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.836 254096 DEBUG nova.scheduler.client.report [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.855 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.876 254096 INFO nova.scheduler.client.report [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance d643dc57-9536-4a67-9a17-c20512710ea5#033[00m
Nov 25 12:00:56 np0005535469 nova_compute[254092]: 2025-11-25 17:00:56.944 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:00:57 np0005535469 nova_compute[254092]: 2025-11-25 17:00:57.789 254096 DEBUG nova.compute.manager [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:00:57 np0005535469 nova_compute[254092]: 2025-11-25 17:00:57.790 254096 DEBUG nova.compute.manager [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing instance network info cache due to event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:00:57 np0005535469 nova_compute[254092]: 2025-11-25 17:00:57.791 254096 DEBUG oslo_concurrency.lockutils [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:00:57 np0005535469 nova_compute[254092]: 2025-11-25 17:00:57.791 254096 DEBUG oslo_concurrency.lockutils [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:00:57 np0005535469 nova_compute[254092]: 2025-11-25 17:00:57.791 254096 DEBUG nova.network.neutron [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:00:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 148 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 12:00:59 np0005535469 nova_compute[254092]: 2025-11-25 17:00:59.194 254096 DEBUG nova.network.neutron [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updated VIF entry in instance network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:00:59 np0005535469 nova_compute[254092]: 2025-11-25 17:00:59.196 254096 DEBUG nova.network.neutron [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:00:59 np0005535469 nova_compute[254092]: 2025-11-25 17:00:59.232 254096 DEBUG oslo_concurrency.lockutils [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:00:59 np0005535469 nova_compute[254092]: 2025-11-25 17:00:59.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 12:01:00 np0005535469 nova_compute[254092]: 2025-11-25 17:01:00.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:01:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:01:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:01:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:01:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:01:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6dd16109-48a3-41a7-88c3-7ec686c82017 does not exist
Nov 25 12:01:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1a893b26-9cc6-4f7a-9e75-8d61a09ab8be does not exist
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:01:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fb485b3d-ba54-4a64-a9ce-442d8c3e69c6 does not exist
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:01:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:01Z|01170|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:01:01 np0005535469 nova_compute[254092]: 2025-11-25 17:01:01.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.646037946 +0000 UTC m=+0.040140224 container create 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 12:01:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:01 np0005535469 systemd[1]: Started libpod-conmon-507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101.scope.
Nov 25 12:01:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.626874334 +0000 UTC m=+0.020976632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.72473425 +0000 UTC m=+0.118836548 container init 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.731844063 +0000 UTC m=+0.125946341 container start 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.734714281 +0000 UTC m=+0.128816589 container attach 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:01:01 np0005535469 lucid_neumann[379298]: 167 167
Nov 25 12:01:01 np0005535469 systemd[1]: libpod-507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101.scope: Deactivated successfully.
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.738728471 +0000 UTC m=+0.132830759 container died 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:01:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f02ebf654e595645d8bba24ee9bc559bf3a8d967035c5aa4f3a0d4a09e8ce606-merged.mount: Deactivated successfully.
Nov 25 12:01:01 np0005535469 podman[379281]: 2025-11-25 17:01:01.774393182 +0000 UTC m=+0.168495460 container remove 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:01:01 np0005535469 systemd[1]: libpod-conmon-507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101.scope: Deactivated successfully.
Nov 25 12:01:01 np0005535469 podman[379322]: 2025-11-25 17:01:01.935388537 +0000 UTC m=+0.044290217 container create 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:01:01 np0005535469 systemd[1]: Started libpod-conmon-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope.
Nov 25 12:01:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:02 np0005535469 podman[379322]: 2025-11-25 17:01:01.913737217 +0000 UTC m=+0.022638917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:01:02 np0005535469 podman[379322]: 2025-11-25 17:01:02.038213958 +0000 UTC m=+0.147115708 container init 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:01:02 np0005535469 podman[379322]: 2025-11-25 17:01:02.045725402 +0000 UTC m=+0.154627082 container start 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:01:02 np0005535469 podman[379322]: 2025-11-25 17:01:02.051409707 +0000 UTC m=+0.160311437 container attach 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:01:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 25 12:01:03 np0005535469 fervent_robinson[379338]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:01:03 np0005535469 fervent_robinson[379338]: --> relative data size: 1.0
Nov 25 12:01:03 np0005535469 fervent_robinson[379338]: --> All data devices are unavailable
Nov 25 12:01:03 np0005535469 systemd[1]: libpod-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope: Deactivated successfully.
Nov 25 12:01:03 np0005535469 systemd[1]: libpod-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope: Consumed 1.036s CPU time.
Nov 25 12:01:03 np0005535469 podman[379367]: 2025-11-25 17:01:03.177084096 +0000 UTC m=+0.026775580 container died 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:01:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32-merged.mount: Deactivated successfully.
Nov 25 12:01:03 np0005535469 podman[379367]: 2025-11-25 17:01:03.841113361 +0000 UTC m=+0.690804825 container remove 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:01:03 np0005535469 systemd[1]: libpod-conmon-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope: Deactivated successfully.
Nov 25 12:01:04 np0005535469 auditd[702]: Audit daemon rotating log files
Nov 25 12:01:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 127 op/s
Nov 25 12:01:04 np0005535469 podman[379522]: 2025-11-25 17:01:04.512359484 +0000 UTC m=+0.094092404 container create 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:01:04 np0005535469 podman[379522]: 2025-11-25 17:01:04.443297792 +0000 UTC m=+0.025030722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:01:04 np0005535469 systemd[1]: Started libpod-conmon-486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9.scope.
Nov 25 12:01:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:04 np0005535469 nova_compute[254092]: 2025-11-25 17:01:04.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:01:04 np0005535469 podman[379522]: 2025-11-25 17:01:04.806528456 +0000 UTC m=+0.388261386 container init 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:01:04 np0005535469 podman[379522]: 2025-11-25 17:01:04.817999348 +0000 UTC m=+0.399732268 container start 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:01:04 np0005535469 laughing_joliot[379536]: 167 167
Nov 25 12:01:04 np0005535469 systemd[1]: libpod-486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9.scope: Deactivated successfully.
Nov 25 12:01:04 np0005535469 podman[379522]: 2025-11-25 17:01:04.859116698 +0000 UTC m=+0.440849648 container attach 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:01:04 np0005535469 podman[379522]: 2025-11-25 17:01:04.860186147 +0000 UTC m=+0.441919067 container died 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:01:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-68c2b6cdb0b3ba2fd314383fd1c423cb50362da065ebe4546fb43eae8c3d215b-merged.mount: Deactivated successfully.
Nov 25 12:01:05 np0005535469 nova_compute[254092]: 2025-11-25 17:01:05.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:01:05 np0005535469 podman[379522]: 2025-11-25 17:01:05.374474124 +0000 UTC m=+0.956207034 container remove 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:01:05 np0005535469 nova_compute[254092]: 2025-11-25 17:01:05.389 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090050.3713243, 810cd712-872a-49a6-bf45-04a319ba8d57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:01:05 np0005535469 nova_compute[254092]: 2025-11-25 17:01:05.390 254096 INFO nova.compute.manager [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Stopped (Lifecycle Event)
Nov 25 12:01:05 np0005535469 systemd[1]: libpod-conmon-486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9.scope: Deactivated successfully.
Nov 25 12:01:05 np0005535469 nova_compute[254092]: 2025-11-25 17:01:05.417 254096 DEBUG nova.compute.manager [None req-7c1c081a-afb5-485f-9254-73d67a0d689d - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:01:05 np0005535469 podman[379562]: 2025-11-25 17:01:05.551935167 +0000 UTC m=+0.019586884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:01:05 np0005535469 podman[379562]: 2025-11-25 17:01:05.729188595 +0000 UTC m=+0.196840292 container create e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:01:05 np0005535469 systemd[1]: Started libpod-conmon-e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379.scope.
Nov 25 12:01:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:05 np0005535469 podman[379562]: 2025-11-25 17:01:05.99006835 +0000 UTC m=+0.457720057 container init e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:01:06 np0005535469 podman[379562]: 2025-11-25 17:01:06.00251382 +0000 UTC m=+0.470165517 container start e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:01:06 np0005535469 podman[379562]: 2025-11-25 17:01:06.073426621 +0000 UTC m=+0.541078338 container attach e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:01:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 127 op/s
Nov 25 12:01:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]: {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:    "0": [
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:        {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "devices": [
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "/dev/loop3"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            ],
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_name": "ceph_lv0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_size": "21470642176",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "name": "ceph_lv0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "tags": {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cluster_name": "ceph",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.crush_device_class": "",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.encrypted": "0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osd_id": "0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.type": "block",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.vdo": "0"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            },
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "type": "block",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "vg_name": "ceph_vg0"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:        }
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:    ],
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:    "1": [
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:        {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "devices": [
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "/dev/loop4"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            ],
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_name": "ceph_lv1",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_size": "21470642176",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "name": "ceph_lv1",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "tags": {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cluster_name": "ceph",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.crush_device_class": "",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.encrypted": "0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osd_id": "1",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.type": "block",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.vdo": "0"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            },
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "type": "block",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "vg_name": "ceph_vg1"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:        }
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:    ],
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:    "2": [
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:        {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "devices": [
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "/dev/loop5"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            ],
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_name": "ceph_lv2",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_size": "21470642176",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "name": "ceph_lv2",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "tags": {
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.cluster_name": "ceph",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.crush_device_class": "",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.encrypted": "0",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osd_id": "2",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.type": "block",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:                "ceph.vdo": "0"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            },
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "type": "block",
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:            "vg_name": "ceph_vg2"
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:        }
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]:    ]
Nov 25 12:01:06 np0005535469 nifty_bhaskara[379579]: }
Nov 25 12:01:06 np0005535469 systemd[1]: libpod-e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379.scope: Deactivated successfully.
Nov 25 12:01:06 np0005535469 podman[379562]: 2025-11-25 17:01:06.776443778 +0000 UTC m=+1.244095475 container died e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:01:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503-merged.mount: Deactivated successfully.
Nov 25 12:01:07 np0005535469 podman[379562]: 2025-11-25 17:01:07.230278529 +0000 UTC m=+1.697930226 container remove e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:01:07 np0005535469 systemd[1]: libpod-conmon-e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379.scope: Deactivated successfully.
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:07.927590011 +0000 UTC m=+0.022530645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:08.114711288 +0000 UTC m=+0.209651902 container create 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:01:08 np0005535469 systemd[1]: Started libpod-conmon-2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6.scope.
Nov 25 12:01:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 76 op/s
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:08.27524519 +0000 UTC m=+0.370185874 container init 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:08.284682726 +0000 UTC m=+0.379623340 container start 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 12:01:08 np0005535469 affectionate_lichterman[379757]: 167 167
Nov 25 12:01:08 np0005535469 systemd[1]: libpod-2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6.scope: Deactivated successfully.
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:08.313856351 +0000 UTC m=+0.408796965 container attach 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:08.314577661 +0000 UTC m=+0.409518275 container died 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:01:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a87ba094b85bdecb279eced366cab2f3aa55c0966dc41e17a03ba9ba863832dc-merged.mount: Deactivated successfully.
Nov 25 12:01:08 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:08Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:ec:69 10.100.0.8
Nov 25 12:01:08 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:08Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:ec:69 10.100.0.8
Nov 25 12:01:08 np0005535469 podman[379741]: 2025-11-25 17:01:08.669663202 +0000 UTC m=+0.764603816 container remove 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 12:01:08 np0005535469 systemd[1]: libpod-conmon-2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6.scope: Deactivated successfully.
Nov 25 12:01:08 np0005535469 podman[379786]: 2025-11-25 17:01:08.912604039 +0000 UTC m=+0.112576498 container create 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:01:08 np0005535469 podman[379777]: 2025-11-25 17:01:08.916485004 +0000 UTC m=+0.123088613 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:01:08 np0005535469 podman[379785]: 2025-11-25 17:01:08.917865852 +0000 UTC m=+0.121363227 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:01:08 np0005535469 podman[379786]: 2025-11-25 17:01:08.824893949 +0000 UTC m=+0.024866438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:01:09 np0005535469 systemd[1]: Started libpod-conmon-0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73.scope.
Nov 25 12:01:09 np0005535469 podman[379783]: 2025-11-25 17:01:09.081036756 +0000 UTC m=+0.284287734 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:01:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:09 np0005535469 podman[379786]: 2025-11-25 17:01:09.166779481 +0000 UTC m=+0.366751940 container init 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:01:09 np0005535469 podman[379786]: 2025-11-25 17:01:09.174558563 +0000 UTC m=+0.374531022 container start 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:01:09 np0005535469 nova_compute[254092]: 2025-11-25 17:01:09.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:09 np0005535469 podman[379786]: 2025-11-25 17:01:09.352562812 +0000 UTC m=+0.552535271 container attach 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:01:09 np0005535469 nova_compute[254092]: 2025-11-25 17:01:09.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]: {
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "osd_id": 1,
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "type": "bluestore"
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:    },
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "osd_id": 2,
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "type": "bluestore"
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:    },
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "osd_id": 0,
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:        "type": "bluestore"
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]:    }
Nov 25 12:01:10 np0005535469 dreamy_elbakyan[379864]: }
Nov 25 12:01:10 np0005535469 systemd[1]: libpod-0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73.scope: Deactivated successfully.
Nov 25 12:01:10 np0005535469 podman[379786]: 2025-11-25 17:01:10.120563619 +0000 UTC m=+1.320536078 container died 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:01:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299-merged.mount: Deactivated successfully.
Nov 25 12:01:10 np0005535469 podman[379786]: 2025-11-25 17:01:10.175099253 +0000 UTC m=+1.375071712 container remove 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:01:10 np0005535469 systemd[1]: libpod-conmon-0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73.scope: Deactivated successfully.
Nov 25 12:01:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:01:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:01:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:01:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c21ae369-2853-478b-a50f-d5461ae1e5e6 does not exist
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev eee7bdfd-c403-42b9-b6f1-8aa72ed11f2d does not exist
Nov 25 12:01:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 115 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 120 op/s
Nov 25 12:01:10 np0005535469 nova_compute[254092]: 2025-11-25 17:01:10.288 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090055.286896, d643dc57-9536-4a67-9a17-c20512710ea5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:10 np0005535469 nova_compute[254092]: 2025-11-25 17:01:10.289 254096 INFO nova.compute.manager [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:01:10 np0005535469 nova_compute[254092]: 2025-11-25 17:01:10.307 254096 DEBUG nova.compute.manager [None req-8ba4d26d-1747-41c8-8461-44bb703c0c91 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:10 np0005535469 nova_compute[254092]: 2025-11-25 17:01:10.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:01:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:01:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 121 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 12:01:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:13.637 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:13.639 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 121 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:01:14 np0005535469 nova_compute[254092]: 2025-11-25 17:01:14.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:14 np0005535469 nova_compute[254092]: 2025-11-25 17:01:14.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:15 np0005535469 nova_compute[254092]: 2025-11-25 17:01:15.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:01:16 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:01:16 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:01:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:01:19 np0005535469 nova_compute[254092]: 2025-11-25 17:01:19.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:01:20 np0005535469 nova_compute[254092]: 2025-11-25 17:01:20.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:20 np0005535469 nova_compute[254092]: 2025-11-25 17:01:20.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:20 np0005535469 nova_compute[254092]: 2025-11-25 17:01:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 300 KiB/s wr, 19 op/s
Nov 25 12:01:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 25 12:01:24 np0005535469 nova_compute[254092]: 2025-11-25 17:01:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:24 np0005535469 nova_compute[254092]: 2025-11-25 17:01:24.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:24 np0005535469 nova_compute[254092]: 2025-11-25 17:01:24.925 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:24 np0005535469 nova_compute[254092]: 2025-11-25 17:01:24.925 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:24 np0005535469 nova_compute[254092]: 2025-11-25 17:01:24.959 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.083 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.084 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.093 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.094 254096 INFO nova.compute.claims [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.220 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:01:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448640309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.639 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.646 254096 DEBUG nova.compute.provider_tree [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.667 254096 DEBUG nova.scheduler.client.report [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.698 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.699 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.805 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.805 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.846 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.864 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.965 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.967 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.967 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Creating image(s)#033[00m
Nov 25 12:01:25 np0005535469 nova_compute[254092]: 2025-11-25 17:01:25.987 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.006 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.025 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.029 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.098 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.098 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.099 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.099 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.117 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.120 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.220 254096 DEBUG nova.policy [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:01:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.415 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.479 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.568 254096 DEBUG nova.objects.instance [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.582 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.582 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Ensure instance console log exists: /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.583 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.583 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:26 np0005535469 nova_compute[254092]: 2025-11-25 17:01:26.583 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:27 np0005535469 nova_compute[254092]: 2025-11-25 17:01:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:27 np0005535469 nova_compute[254092]: 2025-11-25 17:01:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Nov 25 12:01:28 np0005535469 nova_compute[254092]: 2025-11-25 17:01:28.560 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully created port: 9a960a19-c599-4217-b99c-ac16fe6384b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.422 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully updated port: 9a960a19-c599-4217-b99c-ac16fe6384b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.442 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.442 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.442 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.502 254096 DEBUG nova.compute.manager [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.503 254096 DEBUG nova.compute.manager [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.503 254096 DEBUG oslo_concurrency.lockutils [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.597 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:01:29 np0005535469 nova_compute[254092]: 2025-11-25 17:01:29.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 150 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.307 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.337 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.338 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance network_info: |[{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.338 254096 DEBUG oslo_concurrency.lockutils [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.339 254096 DEBUG nova.network.neutron [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.344 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start _get_guest_xml network_info=[{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.352 254096 WARNING nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.364 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.365 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.369 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.370 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.371 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.372 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.373 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.373 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.374 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.374 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.375 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.375 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.376 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.376 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.377 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.377 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.382 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:01:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733548415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.844 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.873 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:30 np0005535469 nova_compute[254092]: 2025-11-25 17:01:30.888 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:01:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8111940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.045 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.137 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.137 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.306 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.383 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.384 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.391 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.393 254096 INFO nova.compute.claims [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.411 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.942745208740234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:01:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123534453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.503 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.505 254096 DEBUG nova.virt.libvirt.vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:25Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.505 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.507 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.508 254096 DEBUG nova.objects.instance [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.529 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <name>instance-00000072</name>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:01:30</nova:creationTime>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <entry name="serial">f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <entry name="uuid">f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:a8:c9:7b"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <target dev="tap9a960a19-c5"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log" append="off"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:01:31 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:01:31 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:01:31 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:01:31 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.531 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Preparing to wait for external event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.532 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.533 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.533 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.535 254096 DEBUG nova.virt.libvirt.vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:25Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.536 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.537 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.538 254096 DEBUG os_vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.541 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.541 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.547 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a960a19-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.548 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a960a19-c5, col_values=(('external_ids', {'iface-id': '9a960a19-c599-4217-b99c-ac16fe6384b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:c9:7b', 'vm-uuid': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:31 np0005535469 NetworkManager[48891]: <info>  [1764090091.5519] manager: (tap9a960a19-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.559 254096 INFO os_vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5')#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.564 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.660 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.661 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.661 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:a8:c9:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.661 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Using config drive#033[00m
Nov 25 12:01:31 np0005535469 nova_compute[254092]: 2025-11-25 17:01:31.681 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:01:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171348109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.066 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.073 254096 DEBUG nova.compute.provider_tree [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.092 254096 DEBUG nova.scheduler.client.report [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.122 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.124 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.128 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.236 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.237 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.256 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:01:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.280 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance caf64ca2-5f73-454a-8442-9965c9853cba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.377 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.378 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.378 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Creating image(s)#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.399 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.431 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.467 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.472 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.533 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.580 254096 DEBUG nova.network.neutron [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updated VIF entry in instance network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.581 254096 DEBUG nova.network.neutron [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.585 254096 DEBUG nova.policy [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baa046e735b94aba93374dff061b9e77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa394380a92d48188f2de86f1a100c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.590 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.591 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.592 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.592 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.619 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.625 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.674 254096 DEBUG oslo_concurrency.lockutils [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.759 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Creating config drive at /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.766 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppwo4f3c5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.927 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppwo4f3c5" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.964 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:32 np0005535469 nova_compute[254092]: 2025-11-25 17:01:32.971 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:01:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971799756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.007 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.035 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.069 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] resizing rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.097 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.114 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.137 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.137 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.140 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.141 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deleting local config drive /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config because it was imported into RBD.#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.175 254096 DEBUG nova.objects.instance [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:01:33 np0005535469 NetworkManager[48891]: <info>  [1764090093.1863] manager: (tap9a960a19-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Nov 25 12:01:33 np0005535469 kernel: tap9a960a19-c5: entered promiscuous mode
Nov 25 12:01:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:33Z|01171|binding|INFO|Claiming lport 9a960a19-c599-4217-b99c-ac16fe6384b1 for this chassis.
Nov 25 12:01:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:33Z|01172|binding|INFO|9a960a19-c599-4217-b99c-ac16fe6384b1: Claiming fa:16:3e:a8:c9:7b 10.100.0.3
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.193 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.194 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Ensure instance console log exists: /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.194 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.195 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.195 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.197 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:c9:7b 10.100.0.3'], port_security=['fa:16:3e:a8:c9:7b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '573985ee-22d8-4e8a-b764-ea06c40f2ee7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ece2fc14-3f44-4554-9543-96a461b3adc3, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9a960a19-c599-4217-b99c-ac16fe6384b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.198 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9a960a19-c599-4217-b99c-ac16fe6384b1 in datapath 131ae834-ee81-42ce-b61e-863b3a8d52e1 bound to our chassis#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.199 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 131ae834-ee81-42ce-b61e-863b3a8d52e1#033[00m
Nov 25 12:01:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:33Z|01173|binding|INFO|Setting lport 9a960a19-c599-4217-b99c-ac16fe6384b1 ovn-installed in OVS
Nov 25 12:01:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:33Z|01174|binding|INFO|Setting lport 9a960a19-c599-4217-b99c-ac16fe6384b1 up in Southbound
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.232 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a725a70-5a09-43f6-88d0-b796af324bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.233 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap131ae834-e1 in ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.235 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap131ae834-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[32ffe03f-ceea-4fa1-8f71-606353c7b0c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41f0c8d2-7d9a-4551-9e10-6cfeeb804d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 systemd-udevd[380513]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:01:33 np0005535469 NetworkManager[48891]: <info>  [1764090093.2500] device (tap9a960a19-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:01:33 np0005535469 NetworkManager[48891]: <info>  [1764090093.2510] device (tap9a960a19-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.250 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[14fd8c3f-8789-40c3-9614-c6cdf886da60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 systemd-machined[216343]: New machine qemu-146-instance-00000072.
Nov 25 12:01:33 np0005535469 systemd[1]: Started Virtual Machine qemu-146-instance-00000072.
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.273 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54c5f3b2-cbbf-463c-ae23-ebbfc54f5fae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.302 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[49f4b1dc-9724-4106-8e3b-efbc225dceda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 systemd-udevd[380519]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:01:33 np0005535469 NetworkManager[48891]: <info>  [1764090093.3079] manager: (tap131ae834-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/484)
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7912d7-9001-4939-897c-2371483e50ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.337 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6443277-1bce-4737-842f-8e553464d272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.340 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[508b7a61-1bc1-4653-b231-4abb509d21ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 NetworkManager[48891]: <info>  [1764090093.3595] device (tap131ae834-e0): carrier: link connected
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.365 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[46f60e1d-9fe7-4533-8bb0-d69aea915072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.385 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb770f7-7672-403f-b3de-c7b1a2eb2000]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap131ae834-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:27:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655095, 'reachable_time': 20234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380548, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.399 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f56c93d-d8c4-4eee-b54d-0bc0160e6da8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:278b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655095, 'tstamp': 655095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380549, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a791cb8-9b1f-4cf0-85ed-ce8717d0dd8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap131ae834-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:27:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655095, 'reachable_time': 20234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380550, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.448 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7493bb7a-1264-4750-b754-b1b9bff16fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c543888-9e10-44f1-b649-6fbcbbb7ad32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.523 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap131ae834-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.523 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.524 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap131ae834-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:33 np0005535469 kernel: tap131ae834-e0: entered promiscuous mode
Nov 25 12:01:33 np0005535469 NetworkManager[48891]: <info>  [1764090093.5278] manager: (tap131ae834-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.527 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.533 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap131ae834-e0, col_values=(('external_ids', {'iface-id': '60519f01-35a8-45ac-b477-17b0e31a750f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:33Z|01175|binding|INFO|Releasing lport 60519f01-35a8-45ac-b477-17b0e31a750f from this chassis (sb_readonly=0)
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.561 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/131ae834-ee81-42ce-b61e-863b3a8d52e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/131ae834-ee81-42ce-b61e-863b3a8d52e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.563 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a16b1fa5-1247-4e75-bbd9-e2db6a9280d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.565 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-131ae834-ee81-42ce-b61e-863b3a8d52e1
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/131ae834-ee81-42ce-b61e-863b3a8d52e1.pid.haproxy
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 131ae834-ee81-42ce-b61e-863b3a8d52e1
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:01:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.567 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'env', 'PROCESS_TAG=haproxy-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/131ae834-ee81-42ce-b61e-863b3a8d52e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.617 254096 DEBUG nova.compute.manager [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.618 254096 DEBUG oslo_concurrency.lockutils [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.619 254096 DEBUG oslo_concurrency.lockutils [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.620 254096 DEBUG oslo_concurrency.lockutils [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.620 254096 DEBUG nova.compute.manager [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Processing event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.722 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090093.7218275, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.723 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Started (Lifecycle Event)#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.725 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.733 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.735 254096 INFO nova.virt.libvirt.driver [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance spawned successfully.#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.735 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.738 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.741 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.750 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.751 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.751 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.752 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.752 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.753 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.757 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090093.722729, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.782 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.785 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090093.7321575, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.785 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.806 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.811 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.818 254096 INFO nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 7.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.818 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.839 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.877 254096 INFO nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 8.82 seconds to build instance.#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.885 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Successfully created port: 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:01:33 np0005535469 nova_compute[254092]: 2025-11-25 17:01:33.891 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:33 np0005535469 podman[380624]: 2025-11-25 17:01:33.954408291 +0000 UTC m=+0.041618725 container create 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:01:33 np0005535469 systemd[1]: Started libpod-conmon-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01.scope.
Nov 25 12:01:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30edfa0bf3043d83325a2d2e3ffee6672b0c56070c560af5182afdcb7482f9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:34 np0005535469 podman[380624]: 2025-11-25 17:01:33.933346537 +0000 UTC m=+0.020557001 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:01:34 np0005535469 podman[380624]: 2025-11-25 17:01:34.037940155 +0000 UTC m=+0.125150619 container init 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:01:34 np0005535469 podman[380624]: 2025-11-25 17:01:34.043074225 +0000 UTC m=+0.130284659 container start 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 12:01:34 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : New worker (380645) forked
Nov 25 12:01:34 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : Loading success.
Nov 25 12:01:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.743 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Successfully updated port: 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.764 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.765 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.765 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.889 254096 DEBUG nova.compute.manager [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.889 254096 DEBUG nova.compute.manager [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing instance network info cache due to event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.890 254096 DEBUG oslo_concurrency.lockutils [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:01:34 np0005535469 nova_compute[254092]: 2025-11-25 17:01:34.965 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.789 254096 DEBUG nova.compute.manager [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.790 254096 DEBUG oslo_concurrency.lockutils [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.790 254096 DEBUG oslo_concurrency.lockutils [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.791 254096 DEBUG oslo_concurrency.lockutils [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.791 254096 DEBUG nova.compute.manager [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:01:35 np0005535469 nova_compute[254092]: 2025-11-25 17:01:35.791 254096 WARNING nova.compute.manager [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.043 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.067 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.068 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance network_info: |[{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.069 254096 DEBUG oslo_concurrency.lockutils [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.070 254096 DEBUG nova.network.neutron [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.076 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start _get_guest_xml network_info=[{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.084 254096 WARNING nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.095 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.096 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.100 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.101 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.102 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.102 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.102 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.103 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.103 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.103 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.104 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.104 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.109 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 213 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Nov 25 12:01:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:01:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260906191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.538 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.571 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.576 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:36 np0005535469 nova_compute[254092]: 2025-11-25 17:01:36.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:01:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082642859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.049 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.052 254096 DEBUG nova.virt.libvirt.vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-877248969-acc',id=115,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYG6882dVO70OVi4d9NVrB3PaeuhIrFZ+oR1NKshvHYJDUOm1rbaI60huuXoUEKrmzCPg+QgDBxi0uURLyDj9uJZlfeSkkPsCqzCs3wQ3F9X3LJ3PkXg4AAZvavey5RFw==',key_name='tempest-TestSecurityGroupsBasicOps-143893991',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa394380a92d48188f2de86f1a100c08',ramdisk_id='',reservation_id='r-2k83oce0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-877248969',owner_user_name='tempest-TestSecurityGroupsBasicOps-877248969-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:32Z,user_data=None,user_id='baa046e735b94aba93374dff061b9e77',uuid=1f0c80e2-19cd-43ba-881d-e24e5bcd62fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.052 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converting VIF {"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.054 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.056 254096 DEBUG nova.objects.instance [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.073 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <uuid>1f0c80e2-19cd-43ba-881d-e24e5bcd62fb</uuid>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <name>instance-00000073</name>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942</nova:name>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:01:36</nova:creationTime>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:user uuid="baa046e735b94aba93374dff061b9e77">tempest-TestSecurityGroupsBasicOps-877248969-project-member</nova:user>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:project uuid="aa394380a92d48188f2de86f1a100c08">tempest-TestSecurityGroupsBasicOps-877248969</nova:project>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <nova:port uuid="3751c8d6-0f1e-4902-8016-cf37bf3c1ad3">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <entry name="serial">1f0c80e2-19cd-43ba-881d-e24e5bcd62fb</entry>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <entry name="uuid">1f0c80e2-19cd-43ba-881d-e24e5bcd62fb</entry>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:cd:6a:fb"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <target dev="tap3751c8d6-0f"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/console.log" append="off"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:01:37 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:01:37 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:01:37 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:01:37 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.076 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Preparing to wait for external event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.076 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.076 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.077 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.078 254096 DEBUG nova.virt.libvirt.vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-877248969-acc',id=115,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYG6882dVO70OVi4d9NVrB3PaeuhIrFZ+oR1NKshvHYJDUOm1rbaI60huuXoUEKrmzCPg+QgDBxi0uURLyDj9uJZlfeSkkPsCqzCs3wQ3F9X3LJ3PkXg4AAZvavey5RFw==',key_name='tempest-TestSecurityGroupsBasicOps-143893991',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa394380a92d48188f2de86f1a100c08',ramdisk_id='',reservation_id='r-2k83oce0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-877248969',owner_user_name='tempest-TestSecurityGroupsBasicOps-877248969-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:32Z,user_data=None,user_id='baa046e735b94aba93374dff061b9e77',uuid=1f0c80e2-19cd-43ba-881d-e24e5bcd62fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.078 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converting VIF {"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.079 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.079 254096 DEBUG os_vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.080 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.081 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.081 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3751c8d6-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.086 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3751c8d6-0f, col_values=(('external_ids', {'iface-id': '3751c8d6-0f1e-4902-8016-cf37bf3c1ad3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:6a:fb', 'vm-uuid': '1f0c80e2-19cd-43ba-881d-e24e5bcd62fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:37 np0005535469 NetworkManager[48891]: <info>  [1764090097.1262] manager: (tap3751c8d6-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.134 254096 INFO os_vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f')#033[00m
Nov 25 12:01:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:01:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 10K writes, 48K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1440 writes, 7257 keys, 1440 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s#012Interval WAL: 1440 writes, 1440 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     27.1      2.07              0.18        32    0.065       0      0       0.0       0.0#012  L6      1/0    7.81 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    105.9     88.4      2.76              0.70        31    0.089    181K    17K       0.0       0.0#012 Sum      1/0    7.81 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4     60.5     62.1      4.83              0.88        63    0.077    181K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4     83.6     81.8      0.84              0.19        14    0.060     51K   4061       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    105.9     88.4      2.76              0.70        31    0.089    181K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     27.7      2.03              0.18        31    0.065       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.1 total, 600.0 interval#012Flush(GB): cumulative 0.055, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.07 MB/s write, 0.29 GB read, 0.07 MB/s read, 4.8 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 33.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00023 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2219,32.43 MB,10.6665%) FilterBlock(64,511.73 KB,0.164388%) IndexBlock(64,848.36 KB,0.272525%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.193 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] No VIF found with MAC fa:16:3e:cd:6a:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.193 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Using config drive#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.215 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.220 254096 DEBUG nova.network.neutron [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updated VIF entry in instance network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.221 254096 DEBUG nova.network.neutron [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.247 254096 DEBUG oslo_concurrency.lockutils [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.561 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Creating config drive at /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.566 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4a2v30t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.706 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4a2v30t" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.737 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.741 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.885 254096 DEBUG nova.compute.manager [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.886 254096 DEBUG nova.compute.manager [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.887 254096 DEBUG oslo_concurrency.lockutils [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.887 254096 DEBUG oslo_concurrency.lockutils [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.887 254096 DEBUG nova.network.neutron [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.905 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.906 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deleting local config drive /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config because it was imported into RBD.#033[00m
Nov 25 12:01:37 np0005535469 kernel: tap3751c8d6-0f: entered promiscuous mode
Nov 25 12:01:37 np0005535469 NetworkManager[48891]: <info>  [1764090097.9649] manager: (tap3751c8d6-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:37Z|01176|binding|INFO|Claiming lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for this chassis.
Nov 25 12:01:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:37Z|01177|binding|INFO|3751c8d6-0f1e-4902-8016-cf37bf3c1ad3: Claiming fa:16:3e:cd:6a:fb 10.100.0.7
Nov 25 12:01:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.978 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:6a:fb 10.100.0.7'], port_security=['fa:16:3e:cd:6a:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f0c80e2-19cd-43ba-881d-e24e5bcd62fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b79c328-9376-4e36-9211-72ee228f98d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa394380a92d48188f2de86f1a100c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d20ac94-0311-45d3-bbc9-0b5ca7a32bc8 4b68beb0-85ec-4a9a-a335-1e3ed4aadbc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffca90a1-5def-405b-be68-948ef468bd95, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:01:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.980 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 in datapath 3b79c328-9376-4e36-9211-72ee228f98d6 bound to our chassis#033[00m
Nov 25 12:01:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.982 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b79c328-9376-4e36-9211-72ee228f98d6#033[00m
Nov 25 12:01:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:37Z|01178|binding|INFO|Setting lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 ovn-installed in OVS
Nov 25 12:01:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:37Z|01179|binding|INFO|Setting lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 up in Southbound
Nov 25 12:01:37 np0005535469 nova_compute[254092]: 2025-11-25 17:01:37.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b89df06-6d57-4edb-8929-c579392b1dda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.996 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b79c328-91 in ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.006 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b79c328-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c473830-0007-4093-af36-51fd0ed7c91e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 systemd-machined[216343]: New machine qemu-147-instance-00000073.
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.007 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6577c232-b68d-400b-9396-f5dd7af4d4e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 systemd[1]: Started Virtual Machine qemu-147-instance-00000073.
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.030 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[03fdd1db-3e71-4c1d-b916-240de1a7a461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 systemd-udevd[380794]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.054 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00f76205-f34a-4c47-aa68-1bdfd2874a02]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 NetworkManager[48891]: <info>  [1764090098.0675] device (tap3751c8d6-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:01:38 np0005535469 NetworkManager[48891]: <info>  [1764090098.0689] device (tap3751c8d6-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.109 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d61c2a-802f-454d-8f3b-2305de927054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[414520f5-c6ac-45d1-8de6-98bc2d0e96e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 NetworkManager[48891]: <info>  [1764090098.1209] manager: (tap3b79c328-90): new Veth device (/org/freedesktop/NetworkManager/Devices/488)
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.164 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7de674da-6378-4605-9162-200d00929fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.167 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf8011f-7a6c-4e60-9ccc-538be3083147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 NetworkManager[48891]: <info>  [1764090098.2003] device (tap3b79c328-90): carrier: link connected
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.207 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1127e4-f114-47d9-93d9-d2f4330f969c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.226 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c864161-736f-42ff-bc28-e760d1570448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b79c328-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:5c:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655579, 'reachable_time': 29590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380822, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1eb271-9cea-48fb-8e2a-d18f810640b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:5c7f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655579, 'tstamp': 655579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380823, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a13c822-d9b2-4281-b375-dfafd2b3a6a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b79c328-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:5c:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655579, 'reachable_time': 29590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380824, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 213 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.305 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ca3dc0-27b9-4539-b807-fabdcc7554a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.379 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8289deae-1fd6-4d08-8579-a821df2ff598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b79c328-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.381 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.381 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b79c328-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:38 np0005535469 kernel: tap3b79c328-90: entered promiscuous mode
Nov 25 12:01:38 np0005535469 NetworkManager[48891]: <info>  [1764090098.3845] manager: (tap3b79c328-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.388 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b79c328-90, col_values=(('external_ids', {'iface-id': 'a599bfa4-c512-4c62-b0a4-4d2ab863ab24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:01:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:38Z|01180|binding|INFO|Releasing lport a599bfa4-c512-4c62-b0a4-4d2ab863ab24 from this chassis (sb_readonly=0)
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.392 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b79c328-9376-4e36-9211-72ee228f98d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b79c328-9376-4e36-9211-72ee228f98d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.393 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20bf0343-0e88-47d9-b347-b5d1b20259a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.394 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-3b79c328-9376-4e36-9211-72ee228f98d6
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/3b79c328-9376-4e36-9211-72ee228f98d6.pid.haproxy
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 3b79c328-9376-4e36-9211-72ee228f98d6
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:01:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.395 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'env', 'PROCESS_TAG=haproxy-3b79c328-9376-4e36-9211-72ee228f98d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b79c328-9376-4e36-9211-72ee228f98d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.554 254096 DEBUG nova.compute.manager [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.555 254096 DEBUG oslo_concurrency.lockutils [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.555 254096 DEBUG oslo_concurrency.lockutils [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.555 254096 DEBUG oslo_concurrency.lockutils [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.556 254096 DEBUG nova.compute.manager [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Processing event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:01:38 np0005535469 podman[380881]: 2025-11-25 17:01:38.743728751 +0000 UTC m=+0.045922451 container create 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:01:38 np0005535469 systemd[1]: Started libpod-conmon-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27.scope.
Nov 25 12:01:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.809 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090098.8088908, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.809 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Started (Lifecycle Event)#033[00m
Nov 25 12:01:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb6ae46dd6df8dbd8f0da40a4c941046d2ca1eb5dcd17ed4f8729c326d3da3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.811 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.815 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:01:38 np0005535469 podman[380881]: 2025-11-25 17:01:38.720993322 +0000 UTC m=+0.023187052 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.818 254096 INFO nova.virt.libvirt.driver [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance spawned successfully.#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.818 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.834 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:38 np0005535469 podman[380881]: 2025-11-25 17:01:38.834544755 +0000 UTC m=+0.136738465 container init 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.839 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:01:38 np0005535469 podman[380881]: 2025-11-25 17:01:38.840000824 +0000 UTC m=+0.142194524 container start 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.842 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.842 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.843 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.843 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.844 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.844 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:01:38 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : New worker (380917) forked
Nov 25 12:01:38 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : Loading success.
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.875 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090098.809031, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.876 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.894 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.897 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090098.8145552, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.897 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.916 254096 INFO nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 6.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.917 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.918 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.923 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.947 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.973 254096 INFO nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 7.61 seconds to build instance.#033[00m
Nov 25 12:01:38 np0005535469 nova_compute[254092]: 2025-11-25 17:01:38.986 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:39 np0005535469 nova_compute[254092]: 2025-11-25 17:01:39.285 254096 DEBUG nova.network.neutron [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updated VIF entry in instance network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:01:39 np0005535469 nova_compute[254092]: 2025-11-25 17:01:39.286 254096 DEBUG nova.network.neutron [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:01:39 np0005535469 nova_compute[254092]: 2025-11-25 17:01:39.399 254096 DEBUG oslo_concurrency.lockutils [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:01:39 np0005535469 podman[380927]: 2025-11-25 17:01:39.660050428 +0000 UTC m=+0.078880760 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 12:01:39 np0005535469 podman[380926]: 2025-11-25 17:01:39.69206791 +0000 UTC m=+0.111154778 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 12:01:39 np0005535469 podman[380928]: 2025-11-25 17:01:39.713610746 +0000 UTC m=+0.128984314 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 12:01:39 np0005535469 nova_compute[254092]: 2025-11-25 17:01:39.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:01:40
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 170 op/s
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.634 254096 DEBUG nova.compute.manager [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG oslo_concurrency.lockutils [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG oslo_concurrency.lockutils [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG oslo_concurrency.lockutils [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG nova.compute.manager [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] No waiting events found dispatching network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:01:40 np0005535469 nova_compute[254092]: 2025-11-25 17:01:40.636 254096 WARNING nova.compute.manager [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received unexpected event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:01:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:42 np0005535469 nova_compute[254092]: 2025-11-25 17:01:42.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 164 op/s
Nov 25 12:01:44 np0005535469 nova_compute[254092]: 2025-11-25 17:01:44.134 254096 DEBUG nova.compute.manager [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:01:44 np0005535469 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG nova.compute.manager [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing instance network info cache due to event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:01:44 np0005535469 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG oslo_concurrency.lockutils [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:01:44 np0005535469 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG oslo_concurrency.lockutils [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:01:44 np0005535469 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG nova.network.neutron [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:01:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Nov 25 12:01:44 np0005535469 nova_compute[254092]: 2025-11-25 17:01:44.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:45 np0005535469 nova_compute[254092]: 2025-11-25 17:01:45.718 254096 DEBUG nova.network.neutron [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updated VIF entry in instance network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:01:45 np0005535469 nova_compute[254092]: 2025-11-25 17:01:45.719 254096 DEBUG nova.network.neutron [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:01:45 np0005535469 nova_compute[254092]: 2025-11-25 17:01:45.891 254096 DEBUG oslo_concurrency.lockutils [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:01:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Nov 25 12:01:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:47 np0005535469 nova_compute[254092]: 2025-11-25 17:01:47.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 79 op/s
Nov 25 12:01:49 np0005535469 nova_compute[254092]: 2025-11-25 17:01:49.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 225 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016948831355407957 of space, bias 1.0, pg target 0.5084649406622387 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:01:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:01:52 np0005535469 nova_compute[254092]: 2025-11-25 17:01:52.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 226 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Nov 25 12:01:53 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 7.032744408s
Nov 25 12:01:53 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 7.161718845s
Nov 25 12:01:53 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.162920952s, txc = 0x563f6707ec00
Nov 25 12:01:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:54 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.214835167s
Nov 25 12:01:54 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.214835644s
Nov 25 12:01:54 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.347607136s, txc = 0x5618dceb2000
Nov 25 12:01:54 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.332983017s, txc = 0x5618dc2cc900
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_flush, latency = 7.010043621s
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 7.254073143s
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.254876614s, txc = 0x55750bf89b00
Nov 25 12:01:54 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.675047874s, txc = 0x563f6643f200
Nov 25 12:01:54 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672909737s, txc = 0x563f67005b00
Nov 25 12:01:54 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672739983s, txc = 0x563f68024300
Nov 25 12:01:54 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672325134s, txc = 0x563f66436900
Nov 25 12:01:54 np0005535469 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672142982s, txc = 0x563f6707f500
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.703565598s, txc = 0x55750ca00300
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.703107357s, txc = 0x55750c880900
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.702624798s, txc = 0x55750bff4300
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.701981544s, txc = 0x55750bff5500
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.701769829s, txc = 0x55750bf82600
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.701525211s, txc = 0x55750cc8ec00
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700932980s, txc = 0x55750bff4900
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700676918s, txc = 0x55750bf81b00
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700468540s, txc = 0x55750bf5a300
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700205803s, txc = 0x55750c01ec00
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.699466705s, txc = 0x55750bec2f00
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.697698116s, txc = 0x55750c01e600
Nov 25 12:01:54 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.696033478s, txc = 0x55750c01f800
Nov 25 12:01:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 226 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 25 12:01:54 np0005535469 nova_compute[254092]: 2025-11-25 17:01:54.857 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:01:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3354207974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:01:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:01:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3354207974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:01:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 239 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 2.6 MiB/s wr, 53 op/s
Nov 25 12:01:57 np0005535469 nova_compute[254092]: 2025-11-25 17:01:57.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:01:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 239 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 37 op/s
Nov 25 12:01:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:01:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:59Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:6a:fb 10.100.0.7
Nov 25 12:01:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:01:59Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:6a:fb 10.100.0.7
Nov 25 12:01:59 np0005535469 nova_compute[254092]: 2025-11-25 17:01:59.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 244 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 3.0 MiB/s wr, 49 op/s
Nov 25 12:02:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:00Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:c9:7b 10.100.0.3
Nov 25 12:02:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:00Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:c9:7b 10.100.0.3
Nov 25 12:02:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 266 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 3.2 MiB/s wr, 85 op/s
Nov 25 12:02:02 np0005535469 nova_compute[254092]: 2025-11-25 17:02:02.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 266 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 2.8 MiB/s wr, 80 op/s
Nov 25 12:02:04 np0005535469 nova_compute[254092]: 2025-11-25 17:02:04.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 2.9 MiB/s wr, 100 op/s
Nov 25 12:02:06 np0005535469 nova_compute[254092]: 2025-11-25 17:02:06.755 254096 INFO nova.compute.manager [None req-18bccb64-ee3f-4a67-a2be-697e2f46b053 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Get console output#033[00m
Nov 25 12:02:06 np0005535469 nova_compute[254092]: 2025-11-25 17:02:06.760 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:02:07 np0005535469 nova_compute[254092]: 2025-11-25 17:02:07.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:07 np0005535469 nova_compute[254092]: 2025-11-25 17:02:07.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:07.321 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:02:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:07.322 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:02:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:07.323 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Nov 25 12:02:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:09 np0005535469 nova_compute[254092]: 2025-11-25 17:02:09.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:02:10 np0005535469 nova_compute[254092]: 2025-11-25 17:02:10.208 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:10 np0005535469 nova_compute[254092]: 2025-11-25 17:02:10.209 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:10 np0005535469 nova_compute[254092]: 2025-11-25 17:02:10.209 254096 DEBUG nova.objects.instance [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Nov 25 12:02:10 np0005535469 nova_compute[254092]: 2025-11-25 17:02:10.606 254096 DEBUG nova.objects.instance [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_requests' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:10 np0005535469 podman[381016]: 2025-11-25 17:02:10.609957164 +0000 UTC m=+0.086040520 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 12:02:10 np0005535469 podman[381017]: 2025-11-25 17:02:10.615900568 +0000 UTC m=+0.089318771 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:02:10 np0005535469 nova_compute[254092]: 2025-11-25 17:02:10.625 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:02:10 np0005535469 podman[381018]: 2025-11-25 17:02:10.630330984 +0000 UTC m=+0.098973926 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:02:10 np0005535469 nova_compute[254092]: 2025-11-25 17:02:10.799 254096 DEBUG nova.policy [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:02:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 94f15886-27c9-46aa-9936-f2558df14349 does not exist
Nov 25 12:02:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev be5e1915-7b8d-494e-bd82-7f765d41102a does not exist
Nov 25 12:02:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 26088abf-ce88-43b4-9f04-c9d46545b748 does not exist
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:02:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:02:11 np0005535469 nova_compute[254092]: 2025-11-25 17:02:11.571 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully created port: 1e4a418b-b459-456b-ae99-26f8b034a7bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:02:12 np0005535469 podman[381326]: 2025-11-25 17:02:12.152582126 +0000 UTC m=+0.029130890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:02:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 25 12:02:12 np0005535469 nova_compute[254092]: 2025-11-25 17:02:12.285 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:12 np0005535469 podman[381326]: 2025-11-25 17:02:12.403337004 +0000 UTC m=+0.279885748 container create aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:02:12 np0005535469 systemd[1]: Started libpod-conmon-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope.
Nov 25 12:02:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:12 np0005535469 podman[381326]: 2025-11-25 17:02:12.624982733 +0000 UTC m=+0.501531477 container init aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:02:12 np0005535469 podman[381326]: 2025-11-25 17:02:12.639366048 +0000 UTC m=+0.515914782 container start aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:02:12 np0005535469 busy_wilbur[381342]: 167 167
Nov 25 12:02:12 np0005535469 systemd[1]: libpod-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope: Deactivated successfully.
Nov 25 12:02:12 np0005535469 conmon[381342]: conmon aaa9fed22137f30cce6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope/container/memory.events
Nov 25 12:02:12 np0005535469 podman[381326]: 2025-11-25 17:02:12.756775708 +0000 UTC m=+0.633324482 container attach aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:02:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:02:12 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:02:12 np0005535469 podman[381326]: 2025-11-25 17:02:12.759496773 +0000 UTC m=+0.636045527 container died aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:02:12 np0005535469 nova_compute[254092]: 2025-11-25 17:02:12.998 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully updated port: 1e4a418b-b459-456b-ae99-26f8b034a7bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:02:13 np0005535469 nova_compute[254092]: 2025-11-25 17:02:13.026 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:13 np0005535469 nova_compute[254092]: 2025-11-25 17:02:13.027 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:13 np0005535469 nova_compute[254092]: 2025-11-25 17:02:13.027 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:02:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-87b495a205acfe8172566596bb6ac34c8979a7284083b3987f6a1ef51a91c734-merged.mount: Deactivated successfully.
Nov 25 12:02:13 np0005535469 podman[381326]: 2025-11-25 17:02:13.083897711 +0000 UTC m=+0.960446475 container remove aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:02:13 np0005535469 systemd[1]: libpod-conmon-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope: Deactivated successfully.
Nov 25 12:02:13 np0005535469 nova_compute[254092]: 2025-11-25 17:02:13.120 254096 DEBUG nova.compute.manager [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:13 np0005535469 nova_compute[254092]: 2025-11-25 17:02:13.120 254096 DEBUG nova.compute.manager [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-1e4a418b-b459-456b-ae99-26f8b034a7bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:02:13 np0005535469 nova_compute[254092]: 2025-11-25 17:02:13.120 254096 DEBUG oslo_concurrency.lockutils [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:13 np0005535469 podman[381367]: 2025-11-25 17:02:13.334779902 +0000 UTC m=+0.060319755 container create d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:02:13 np0005535469 systemd[1]: Started libpod-conmon-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope.
Nov 25 12:02:13 np0005535469 podman[381367]: 2025-11-25 17:02:13.301006485 +0000 UTC m=+0.026546378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:02:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:13 np0005535469 podman[381367]: 2025-11-25 17:02:13.419110594 +0000 UTC m=+0.144650467 container init d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:02:13 np0005535469 podman[381367]: 2025-11-25 17:02:13.427079403 +0000 UTC m=+0.152619256 container start d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:02:13 np0005535469 podman[381367]: 2025-11-25 17:02:13.430148327 +0000 UTC m=+0.155688200 container attach d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:02:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:13.639 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:13.643 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 99 KiB/s wr, 20 op/s
Nov 25 12:02:14 np0005535469 admiring_galileo[381384]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:02:14 np0005535469 admiring_galileo[381384]: --> relative data size: 1.0
Nov 25 12:02:14 np0005535469 admiring_galileo[381384]: --> All data devices are unavailable
Nov 25 12:02:14 np0005535469 systemd[1]: libpod-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope: Deactivated successfully.
Nov 25 12:02:14 np0005535469 podman[381367]: 2025-11-25 17:02:14.538906159 +0000 UTC m=+1.264446042 container died d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 12:02:14 np0005535469 systemd[1]: libpod-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope: Consumed 1.039s CPU time.
Nov 25 12:02:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6-merged.mount: Deactivated successfully.
Nov 25 12:02:14 np0005535469 podman[381367]: 2025-11-25 17:02:14.601516616 +0000 UTC m=+1.327056469 container remove d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:02:14 np0005535469 systemd[1]: libpod-conmon-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope: Deactivated successfully.
Nov 25 12:02:14 np0005535469 nova_compute[254092]: 2025-11-25 17:02:14.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.394136497 +0000 UTC m=+0.050919358 container create 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:02:15 np0005535469 systemd[1]: Started libpod-conmon-78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d.scope.
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.36980828 +0000 UTC m=+0.026591141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:02:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.484957648 +0000 UTC m=+0.141740559 container init 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.493650566 +0000 UTC m=+0.150433397 container start 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.497398119 +0000 UTC m=+0.154181030 container attach 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:02:15 np0005535469 condescending_diffie[381584]: 167 167
Nov 25 12:02:15 np0005535469 systemd[1]: libpod-78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d.scope: Deactivated successfully.
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.500411752 +0000 UTC m=+0.157194613 container died 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:02:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a912da37af3aabc05611fb3c9ac66aa2fe296cb0c02d771307b5ab2ed122611c-merged.mount: Deactivated successfully.
Nov 25 12:02:15 np0005535469 podman[381567]: 2025-11-25 17:02:15.554527075 +0000 UTC m=+0.211309906 container remove 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:02:15 np0005535469 systemd[1]: libpod-conmon-78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d.scope: Deactivated successfully.
Nov 25 12:02:15 np0005535469 podman[381609]: 2025-11-25 17:02:15.771076136 +0000 UTC m=+0.051398762 container create 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:02:15 np0005535469 systemd[1]: Started libpod-conmon-700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8.scope.
Nov 25 12:02:15 np0005535469 podman[381609]: 2025-11-25 17:02:15.750281635 +0000 UTC m=+0.030604281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:02:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:15 np0005535469 podman[381609]: 2025-11-25 17:02:15.872928369 +0000 UTC m=+0.153251025 container init 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:02:15 np0005535469 podman[381609]: 2025-11-25 17:02:15.883657444 +0000 UTC m=+0.163980070 container start 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:02:15 np0005535469 podman[381609]: 2025-11-25 17:02:15.887472288 +0000 UTC m=+0.167794914 container attach 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:02:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 101 KiB/s wr, 20 op/s
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.589 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]: {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:    "0": [
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:        {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "devices": [
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "/dev/loop3"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            ],
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_name": "ceph_lv0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_size": "21470642176",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "name": "ceph_lv0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "tags": {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cluster_name": "ceph",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.crush_device_class": "",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.encrypted": "0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osd_id": "0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.type": "block",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.vdo": "0"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            },
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "type": "block",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "vg_name": "ceph_vg0"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:        }
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:    ],
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:    "1": [
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:        {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "devices": [
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "/dev/loop4"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            ],
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_name": "ceph_lv1",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_size": "21470642176",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "name": "ceph_lv1",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "tags": {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cluster_name": "ceph",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.crush_device_class": "",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.encrypted": "0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osd_id": "1",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.type": "block",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.vdo": "0"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            },
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "type": "block",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "vg_name": "ceph_vg1"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:        }
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:    ],
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:    "2": [
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:        {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "devices": [
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "/dev/loop5"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            ],
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_name": "ceph_lv2",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_size": "21470642176",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "name": "ceph_lv2",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "tags": {
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.cluster_name": "ceph",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.crush_device_class": "",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.encrypted": "0",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osd_id": "2",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.type": "block",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:                "ceph.vdo": "0"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            },
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "type": "block",
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:            "vg_name": "ceph_vg2"
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:        }
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]:    ]
Nov 25 12:02:16 np0005535469 musing_blackwell[381625]: }
Nov 25 12:02:16 np0005535469 systemd[1]: libpod-700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8.scope: Deactivated successfully.
Nov 25 12:02:16 np0005535469 podman[381609]: 2025-11-25 17:02:16.731951061 +0000 UTC m=+1.012273687 container died 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:02:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b-merged.mount: Deactivated successfully.
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.762 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.763 254096 DEBUG oslo_concurrency.lockutils [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.763 254096 DEBUG nova.network.neutron [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 1e4a418b-b459-456b-ae99-26f8b034a7bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.767 254096 DEBUG nova.virt.libvirt.vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.767 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.768 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.768 254096 DEBUG os_vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.769 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.770 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.774 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e4a418b-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.774 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e4a418b-b4, col_values=(('external_ids', {'iface-id': '1e4a418b-b459-456b-ae99-26f8b034a7bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:fe:32', 'vm-uuid': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:16 np0005535469 NetworkManager[48891]: <info>  [1764090136.7771] manager: (tap1e4a418b-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:16 np0005535469 podman[381609]: 2025-11-25 17:02:16.794455375 +0000 UTC m=+1.074778001 container remove 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.795 254096 INFO os_vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4')#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.795 254096 DEBUG nova.virt.libvirt.vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.796 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.796 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.800 254096 DEBUG nova.virt.libvirt.guest [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] attach device xml: <interface type="ethernet">
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:00:fe:32"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <target dev="tap1e4a418b-b4"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]: </interface>
Nov 25 12:02:16 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 12:02:16 np0005535469 systemd[1]: libpod-conmon-700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8.scope: Deactivated successfully.
Nov 25 12:02:16 np0005535469 kernel: tap1e4a418b-b4: entered promiscuous mode
Nov 25 12:02:16 np0005535469 NetworkManager[48891]: <info>  [1764090136.8149] manager: (tap1e4a418b-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Nov 25 12:02:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:16Z|01181|binding|INFO|Claiming lport 1e4a418b-b459-456b-ae99-26f8b034a7bc for this chassis.
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:16Z|01182|binding|INFO|1e4a418b-b459-456b-ae99-26f8b034a7bc: Claiming fa:16:3e:00:fe:32 10.100.0.24
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.825 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:fe:32 10.100.0.24'], port_security=['fa:16:3e:00:fe:32 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9bf74ed-ebec-4028-924f-11064256236f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eadf10b2-41b2-4301-a0d6-9c1d0e6514cb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4a418b-b459-456b-ae99-26f8b034a7bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.826 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4a418b-b459-456b-ae99-26f8b034a7bc in datapath c9bf74ed-ebec-4028-924f-11064256236f bound to our chassis#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.828 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9bf74ed-ebec-4028-924f-11064256236f#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[523f7f5f-de1d-4a0b-8e5b-f9066bdc7375]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.840 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9bf74ed-e1 in ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.842 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9bf74ed-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc302b9-8437-4817-bcdf-b13eb7dbfae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.843 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5627bfa-f956-4627-8503-0f8ede271e03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 systemd-udevd[381656]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:02:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:16Z|01183|binding|INFO|Setting lport 1e4a418b-b459-456b-ae99-26f8b034a7bc ovn-installed in OVS
Nov 25 12:02:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:16Z|01184|binding|INFO|Setting lport 1e4a418b-b459-456b-ae99-26f8b034a7bc up in Southbound
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.856 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f000bf-b13e-4adc-9680-9c7b13ed0dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.857 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.860 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:16 np0005535469 NetworkManager[48891]: <info>  [1764090136.8698] device (tap1e4a418b-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:02:16 np0005535469 NetworkManager[48891]: <info>  [1764090136.8707] device (tap1e4a418b-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.884 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a39e6e48-7e74-4039-a369-b962c413615a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.901 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.901 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.902 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:a8:c9:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.902 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:00:fe:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.910 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7315fd4e-719b-45ee-9c02-36e6760b72eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 NetworkManager[48891]: <info>  [1764090136.9167] manager: (tapc9bf74ed-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/492)
Nov 25 12:02:16 np0005535469 systemd-udevd[381673]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af2ad0c3-0be7-46c8-acd3-0a847773c8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.920 254096 DEBUG nova.virt.libvirt.guest [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:16</nova:creationTime>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:16 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    <nova:port uuid="1e4a418b-b459-456b-ae99-26f8b034a7bc">
Nov 25 12:02:16 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:16 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:16 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:16 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 12:02:16 np0005535469 nova_compute[254092]: 2025-11-25 17:02:16.937 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.947 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29225fe3-4b89-45fc-8bf0-12766cd0012f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.950 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a51c55ba-30cb-4fc6-b208-45df09b98771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 NetworkManager[48891]: <info>  [1764090136.9745] device (tapc9bf74ed-e0): carrier: link connected
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.981 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb32078-e0b6-4082-a0ed-b5158567e4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0ecfe6-e46f-4ff4-b8bc-a4a9a979d011]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9bf74ed-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:51:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659456, 'reachable_time': 36679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381737, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.012 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed76bcac-2583-415d-83dc-8955ed8e53c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:5156'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659456, 'tstamp': 659456}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381755, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.032 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5dfa1c-17c0-4563-86a1-2f94366d87cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9bf74ed-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:51:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659456, 'reachable_time': 36679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381757, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c59545e-4524-4477-b5a7-9f0340e59931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.125 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a71e66f0-c0ce-4ea8-a26e-e8240450c658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.127 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9bf74ed-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.127 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.128 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9bf74ed-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:17 np0005535469 NetworkManager[48891]: <info>  [1764090137.1306] manager: (tapc9bf74ed-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:17 np0005535469 kernel: tapc9bf74ed-e0: entered promiscuous mode
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.135 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9bf74ed-e0, col_values=(('external_ids', {'iface-id': 'bdd28a66-21f2-47f4-b133-a95bb7f7eb47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:17Z|01185|binding|INFO|Releasing lport bdd28a66-21f2-47f4-b133-a95bb7f7eb47 from this chassis (sb_readonly=0)
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.153 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9bf74ed-ebec-4028-924f-11064256236f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9bf74ed-ebec-4028-924f-11064256236f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.154 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55519cb6-8846-45b0-98ee-bebeaf6db519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.155 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-c9bf74ed-ebec-4028-924f-11064256236f
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/c9bf74ed-ebec-4028-924f-11064256236f.pid.haproxy
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID c9bf74ed-ebec-4028-924f-11064256236f
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:02:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.157 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'env', 'PROCESS_TAG=haproxy-c9bf74ed-ebec-4028-924f-11064256236f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9bf74ed-ebec-4028-924f-11064256236f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.384886499 +0000 UTC m=+0.022620201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.515172952 +0000 UTC m=+0.152906634 container create 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:02:17 np0005535469 systemd[1]: Started libpod-conmon-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope.
Nov 25 12:02:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.600766311 +0000 UTC m=+0.238500023 container init 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.610501317 +0000 UTC m=+0.248234999 container start 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.61495121 +0000 UTC m=+0.252684922 container attach 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.615 254096 DEBUG nova.compute.manager [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG oslo_concurrency.lockutils [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG oslo_concurrency.lockutils [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG oslo_concurrency.lockutils [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG nova.compute.manager [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:17 np0005535469 nova_compute[254092]: 2025-11-25 17:02:17.617 254096 WARNING nova.compute.manager [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.#033[00m
Nov 25 12:02:17 np0005535469 systemd[1]: libpod-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope: Deactivated successfully.
Nov 25 12:02:17 np0005535469 amazing_banach[381862]: 167 167
Nov 25 12:02:17 np0005535469 conmon[381862]: conmon 86d5aa6ce8e9763a08ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope/container/memory.events
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.620259375 +0000 UTC m=+0.257993067 container died 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 12:02:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-547a6d4ab3a9f9769bbbe89fa9770da782f92dc761584148ea42b3105590374e-merged.mount: Deactivated successfully.
Nov 25 12:02:17 np0005535469 podman[381870]: 2025-11-25 17:02:17.672820037 +0000 UTC m=+0.083216024 container create ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:02:17 np0005535469 podman[381830]: 2025-11-25 17:02:17.698527052 +0000 UTC m=+0.336260734 container remove 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:02:17 np0005535469 podman[381870]: 2025-11-25 17:02:17.612550684 +0000 UTC m=+0.022946701 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:02:17 np0005535469 systemd[1]: Started libpod-conmon-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope.
Nov 25 12:02:17 np0005535469 systemd[1]: libpod-conmon-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope: Deactivated successfully.
Nov 25 12:02:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50b459a7e868aa11f7b7f6e6a9b7180c5b5afde03ead9ed833c5fc59e6efb1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:17 np0005535469 podman[381870]: 2025-11-25 17:02:17.766453235 +0000 UTC m=+0.176849252 container init ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:02:17 np0005535469 podman[381870]: 2025-11-25 17:02:17.772407738 +0000 UTC m=+0.182803725 container start ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 12:02:17 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : New worker (381910) forked
Nov 25 12:02:17 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : Loading success.
Nov 25 12:02:17 np0005535469 podman[381924]: 2025-11-25 17:02:17.888193665 +0000 UTC m=+0.040328368 container create 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:02:17 np0005535469 systemd[1]: Started libpod-conmon-99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199.scope.
Nov 25 12:02:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:17 np0005535469 podman[381924]: 2025-11-25 17:02:17.869902022 +0000 UTC m=+0.022036725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:02:17 np0005535469 podman[381924]: 2025-11-25 17:02:17.971097158 +0000 UTC m=+0.123231871 container init 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:02:17 np0005535469 podman[381924]: 2025-11-25 17:02:17.982896921 +0000 UTC m=+0.135031654 container start 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:02:17 np0005535469 podman[381924]: 2025-11-25 17:02:17.987618111 +0000 UTC m=+0.139752824 container attach 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:02:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 25 12:02:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.823 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-1e4a418b-b459-456b-ae99-26f8b034a7bc" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.824 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-1e4a418b-b459-456b-ae99-26f8b034a7bc" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.838 254096 DEBUG nova.objects.instance [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.874 254096 DEBUG nova.virt.libvirt.vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.875 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.876 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.881 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.884 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.888 254096 DEBUG nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Attempting to detach device tap1e4a418b-b4 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.888 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:00:fe:32"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <target dev="tap1e4a418b-b4"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: </interface>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.901 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.905 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <name>instance-00000072</name>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:16</nova:creationTime>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <nova:port uuid="1e4a418b-b459-456b-ae99-26f8b034a7bc">
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]: {
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "osd_id": 1,
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "type": "bluestore"
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:    },
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "osd_id": 2,
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "type": "bluestore"
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:    },
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "osd_id": 0,
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:        "type": "bluestore"
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]:    }
Nov 25 12:02:18 np0005535469 blissful_fermat[381940]: }
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target dev='tap9a960a19-c5'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:00:fe:32'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target dev='tap1e4a418b-b4'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='net1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.907 254096 INFO nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tap1e4a418b-b4 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the persistent domain config.
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.907 254096 DEBUG nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] (1/8): Attempting to detach device tap1e4a418b-b4 with device alias net1 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 12:02:18 np0005535469 nova_compute[254092]: 2025-11-25 17:02:18.907 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:00:fe:32"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]:  <target dev="tap1e4a418b-b4"/>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: </interface>
Nov 25 12:02:18 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 12:02:18 np0005535469 systemd[1]: libpod-99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199.scope: Deactivated successfully.
Nov 25 12:02:18 np0005535469 podman[381924]: 2025-11-25 17:02:18.933828454 +0000 UTC m=+1.085963147 container died 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:02:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf-merged.mount: Deactivated successfully.
Nov 25 12:02:18 np0005535469 podman[381924]: 2025-11-25 17:02:18.998289662 +0000 UTC m=+1.150424345 container remove 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:02:19 np0005535469 kernel: tap1e4a418b-b4 (unregistering): left promiscuous mode
Nov 25 12:02:19 np0005535469 systemd[1]: libpod-conmon-99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199.scope: Deactivated successfully.
Nov 25 12:02:19 np0005535469 NetworkManager[48891]: <info>  [1764090139.0063] device (tap1e4a418b-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:02:19 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:19Z|01186|binding|INFO|Releasing lport 1e4a418b-b459-456b-ae99-26f8b034a7bc from this chassis (sb_readonly=0)
Nov 25 12:02:19 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:19Z|01187|binding|INFO|Setting lport 1e4a418b-b459-456b-ae99-26f8b034a7bc down in Southbound
Nov 25 12:02:19 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:19Z|01188|binding|INFO|Removing iface tap1e4a418b-b4 ovn-installed in OVS
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.020 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:fe:32 10.100.0.24'], port_security=['fa:16:3e:00:fe:32 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9bf74ed-ebec-4028-924f-11064256236f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eadf10b2-41b2-4301-a0d6-9c1d0e6514cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4a418b-b459-456b-ae99-26f8b034a7bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.022 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764090139.0218506, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.022 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4a418b-b459-456b-ae99-26f8b034a7bc in datapath c9bf74ed-ebec-4028-924f-11064256236f unbound from our chassis
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.023 254096 DEBUG nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Start waiting for the detach event from libvirt for device tap1e4a418b-b4 with device alias net1 for instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.023 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.024 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9bf74ed-ebec-4028-924f-11064256236f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[626d8409-1afe-419b-b42f-f1a39c43b373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.026 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <name>instance-00000072</name>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:16</nova:creationTime>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:port uuid="1e4a418b-b459-456b-ae99-26f8b034a7bc">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:19 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target dev='tap9a960a19-c5'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:02:19 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.026 254096 INFO nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tap1e4a418b-b4 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the live domain config.#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.027 254096 DEBUG nova.virt.libvirt.vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.027 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.028 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.026 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f namespace which is not needed anymore#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.028 254096 DEBUG os_vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.030 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4a418b-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.037 254096 INFO os_vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4')#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.038 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:19</nova:creationTime>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:19 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:19 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:19 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:19 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 12:02:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:02:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:02:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:02:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:02:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c9c5aa85-6fef-446f-ac04-64e6f8fc6759 does not exist
Nov 25 12:02:19 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a954c476-8917-471a-8798-15741ae50634 does not exist
Nov 25 12:02:19 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : haproxy version is 2.8.14-c23fe91
Nov 25 12:02:19 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : path to executable is /usr/sbin/haproxy
Nov 25 12:02:19 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [WARNING]  (381908) : Exiting Master process...
Nov 25 12:02:19 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [ALERT]    (381908) : Current worker (381910) exited with code 143 (Terminated)
Nov 25 12:02:19 np0005535469 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [WARNING]  (381908) : All workers exited. Exiting... (0)
Nov 25 12:02:19 np0005535469 systemd[1]: libpod-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope: Deactivated successfully.
Nov 25 12:02:19 np0005535469 conmon[381901]: conmon ff2760e56b93ab990cdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope/container/memory.events
Nov 25 12:02:19 np0005535469 podman[382031]: 2025-11-25 17:02:19.166767663 +0000 UTC m=+0.049047937 container died ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 12:02:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c-userdata-shm.mount: Deactivated successfully.
Nov 25 12:02:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a50b459a7e868aa11f7b7f6e6a9b7180c5b5afde03ead9ed833c5fc59e6efb1b-merged.mount: Deactivated successfully.
Nov 25 12:02:19 np0005535469 podman[382031]: 2025-11-25 17:02:19.222665576 +0000 UTC m=+0.104945850 container cleanup ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:02:19 np0005535469 systemd[1]: libpod-conmon-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope: Deactivated successfully.
Nov 25 12:02:19 np0005535469 podman[382088]: 2025-11-25 17:02:19.297395176 +0000 UTC m=+0.051853724 container remove ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.303 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[836e79ef-df12-4906-943f-01e45a4bcec8]: (4, ('Tue Nov 25 05:02:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f (ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c)\nff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c\nTue Nov 25 05:02:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f (ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c)\nff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.305 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a39944d-43de-4f60-93bf-2a7ece518358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.306 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9bf74ed-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:19 np0005535469 kernel: tapc9bf74ed-e0: left promiscuous mode
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57273f22-43c0-4cd4-b64c-53af824b4e3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f30f7563-cd95-481c-be99-6875a5a0895e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.338 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c045344e-0b37-47b5-a62b-5877f897459d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.355 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03bc83ad-f96d-4f0a-96c8-ca046a0dbd10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659449, 'reachable_time': 42360, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382103, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.357 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:02:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.358 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f42fb5-3c6d-413d-a0ba-3f8f777bdea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:19 np0005535469 systemd[1]: run-netns-ovnmeta\x2dc9bf74ed\x2debec\x2d4028\x2d924f\x2d11064256236f.mount: Deactivated successfully.
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.571 254096 DEBUG nova.network.neutron [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updated VIF entry in instance network info cache for port 1e4a418b-b459-456b-ae99-26f8b034a7bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.572 254096 DEBUG nova.network.neutron [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.585 254096 DEBUG oslo_concurrency.lockutils [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.720 254096 DEBUG nova.compute.manager [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG oslo_concurrency.lockutils [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG oslo_concurrency.lockutils [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG oslo_concurrency.lockutils [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG nova.compute.manager [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.722 254096 WARNING nova.compute.manager [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.#033[00m
Nov 25 12:02:19 np0005535469 nova_compute[254092]: 2025-11-25 17:02:19.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:02:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:02:20 np0005535469 nova_compute[254092]: 2025-11-25 17:02:20.104 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:20 np0005535469 nova_compute[254092]: 2025-11-25 17:02:20.104 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:20 np0005535469 nova_compute[254092]: 2025-11-25 17:02:20.105 254096 DEBUG nova.network.neutron [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:02:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 15 KiB/s wr, 0 op/s
Nov 25 12:02:21 np0005535469 nova_compute[254092]: 2025-11-25 17:02:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.065 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-unplugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.066 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.066 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.067 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.067 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-unplugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.067 254096 WARNING nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-unplugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.069 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.069 254096 WARNING nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.070 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-deleted-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.070 254096 INFO nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Neutron deleted interface 1e4a418b-b459-456b-ae99-26f8b034a7bc; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.070 254096 DEBUG nova.network.neutron [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.086 254096 DEBUG nova.objects.instance [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.106 254096 DEBUG nova.objects.instance [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.124 254096 DEBUG nova.virt.libvirt.vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.125 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.126 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.132 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.138 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <name>instance-00000072</name>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:19</nova:creationTime>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target dev='tap9a960a19-c5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.139 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.145 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <name>instance-00000072</name>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:19</nova:creationTime>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target dev='tap9a960a19-c5'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.146 254096 WARNING nova.virt.libvirt.driver [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Detaching interface fa:16:3e:00:fe:32 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1e4a418b-b4' not found.#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.147 254096 DEBUG nova.virt.libvirt.vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.147 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.148 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.149 254096 DEBUG os_vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.151 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4a418b-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.152 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.154 254096 INFO os_vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4')#033[00m
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.155 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:02:22</nova:creationTime>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 12:02:22 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:02:22 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:02:22 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 12:02:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.477 254096 INFO nova.network.neutron [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Port 1e4a418b-b459-456b-ae99-26f8b034a7bc from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.478 254096 DEBUG nova.network.neutron [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.499 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.522 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-1e4a418b-b459-456b-ae99-26f8b034a7bc" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.603 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.604 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.604 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.604 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.605 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.606 254096 INFO nova.compute.manager [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Terminating instance
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.606 254096 DEBUG nova.compute.manager [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 12:02:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:22Z|01189|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:02:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:22Z|01190|binding|INFO|Releasing lport 60519f01-35a8-45ac-b477-17b0e31a750f from this chassis (sb_readonly=0)
Nov 25 12:02:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:22Z|01191|binding|INFO|Releasing lport a599bfa4-c512-4c62-b0a4-4d2ab863ab24 from this chassis (sb_readonly=0)
Nov 25 12:02:22 np0005535469 kernel: tap3751c8d6-0f (unregistering): left promiscuous mode
Nov 25 12:02:22 np0005535469 NetworkManager[48891]: <info>  [1764090142.6693] device (tap3751c8d6-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:02:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:22Z|01192|binding|INFO|Releasing lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 from this chassis (sb_readonly=0)
Nov 25 12:02:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:22Z|01193|binding|INFO|Setting lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 down in Southbound
Nov 25 12:02:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:22Z|01194|binding|INFO|Removing iface tap3751c8d6-0f ovn-installed in OVS
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.716 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:6a:fb 10.100.0.7'], port_security=['fa:16:3e:cd:6a:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f0c80e2-19cd-43ba-881d-e24e5bcd62fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b79c328-9376-4e36-9211-72ee228f98d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa394380a92d48188f2de86f1a100c08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d20ac94-0311-45d3-bbc9-0b5ca7a32bc8 4b68beb0-85ec-4a9a-a335-1e3ed4aadbc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffca90a1-5def-405b-be68-948ef468bd95, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:02:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.718 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 in datapath 3b79c328-9376-4e36-9211-72ee228f98d6 unbound from our chassis
Nov 25 12:02:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.720 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b79c328-9376-4e36-9211-72ee228f98d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.721 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bb8d31-c2cd-4e5b-8eca-4b1cc7cb58ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.722 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 namespace which is not needed anymore
Nov 25 12:02:22 np0005535469 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 25 12:02:22 np0005535469 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Consumed 15.922s CPU time.
Nov 25 12:02:22 np0005535469 systemd-machined[216343]: Machine qemu-147-instance-00000073 terminated.
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.851 254096 INFO nova.virt.libvirt.driver [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance destroyed successfully.
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.851 254096 DEBUG nova.objects.instance [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lazy-loading 'resources' on Instance uuid 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.863 254096 DEBUG nova.virt.libvirt.vif [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-877248969-acc',id=115,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYG6882dVO70OVi4d9NVrB3PaeuhIrFZ+oR1NKshvHYJDUOm1rbaI60huuXoUEKrmzCPg+QgDBxi0uURLyDj9uJZlfeSkkPsCqzCs3wQ3F9X3LJ3PkXg4AAZvavey5RFw==',key_name='tempest-TestSecurityGroupsBasicOps-143893991',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa394380a92d48188f2de86f1a100c08',ramdisk_id='',reservation_id='r-2k83oce0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-877248969',owner_user_name='tempest-TestSecurityGroupsBasicOps-877248969-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:38Z,user_data=None,user_id='baa046e735b94aba93374dff061b9e77',uuid=1f0c80e2-19cd-43ba-881d-e24e5bcd62fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.863 254096 DEBUG nova.network.os_vif_util [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converting VIF {"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.863 254096 DEBUG nova.network.os_vif_util [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.864 254096 DEBUG os_vif [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.866 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3751c8d6-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.867 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:22 np0005535469 nova_compute[254092]: 2025-11-25 17:02:22.873 254096 INFO os_vif [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f')
Nov 25 12:02:22 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : haproxy version is 2.8.14-c23fe91
Nov 25 12:02:22 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : path to executable is /usr/sbin/haproxy
Nov 25 12:02:22 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [WARNING]  (380915) : Exiting Master process...
Nov 25 12:02:22 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [WARNING]  (380915) : Exiting Master process...
Nov 25 12:02:22 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [ALERT]    (380915) : Current worker (380917) exited with code 143 (Terminated)
Nov 25 12:02:22 np0005535469 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [WARNING]  (380915) : All workers exited. Exiting... (0)
Nov 25 12:02:22 np0005535469 systemd[1]: libpod-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27.scope: Deactivated successfully.
Nov 25 12:02:22 np0005535469 podman[382132]: 2025-11-25 17:02:22.926210768 +0000 UTC m=+0.060866521 container died 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 12:02:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27-userdata-shm.mount: Deactivated successfully.
Nov 25 12:02:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-60bb6ae46dd6df8dbd8f0da40a4c941046d2ca1eb5dcd17ed4f8729c326d3da3-merged.mount: Deactivated successfully.
Nov 25 12:02:22 np0005535469 podman[382132]: 2025-11-25 17:02:22.970134503 +0000 UTC m=+0.104790266 container cleanup 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 12:02:22 np0005535469 systemd[1]: libpod-conmon-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27.scope: Deactivated successfully.
Nov 25 12:02:23 np0005535469 podman[382188]: 2025-11-25 17:02:23.037543972 +0000 UTC m=+0.045028696 container remove 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.042 254096 DEBUG nova.compute.manager [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-unplugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.043 254096 DEBUG oslo_concurrency.lockutils [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.043 254096 DEBUG oslo_concurrency.lockutils [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.043 254096 DEBUG oslo_concurrency.lockutils [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.044 254096 DEBUG nova.compute.manager [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] No waiting events found dispatching network-vif-unplugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.044 254096 DEBUG nova.compute.manager [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-unplugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5fefb5-8966-43df-9f5d-5b76bd35e4af]: (4, ('Tue Nov 25 05:02:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 (00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27)\n00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27\nTue Nov 25 05:02:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 (00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27)\n00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feb87f30-9d6a-4902-9676-2991b4e9997c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.047 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b79c328-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:02:23 np0005535469 kernel: tap3b79c328-90: left promiscuous mode
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.064 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e6961-0806-4942-bd3e-5579198b75ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.077 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64d2c186-33d6-43ee-84cf-a8e0fa1a7d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.078 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93cd08da-cc64-4628-a185-9aab6333b0dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.096 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b961be-8920-43e8-b44c-8f6d4b42e994]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655569, 'reachable_time': 29369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382205, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:02:23 np0005535469 systemd[1]: run-netns-ovnmeta\x2d3b79c328\x2d9376\x2d4e36\x2d9211\x2d72ee228f98d6.mount: Deactivated successfully.
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.099 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.100 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[64fe4c1b-284d-4c31-92f3-8ea0498fc133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.249 254096 INFO nova.virt.libvirt.driver [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deleting instance files /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_del#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.250 254096 INFO nova.virt.libvirt.driver [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deletion of /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_del complete#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.305 254096 INFO nova.compute.manager [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.306 254096 DEBUG oslo.service.loopingcall [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.306 254096 DEBUG nova.compute.manager [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.306 254096 DEBUG nova.network.neutron [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.772 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.772 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.773 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.773 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.773 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.774 254096 INFO nova.compute.manager [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Terminating instance#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.775 254096 DEBUG nova.compute.manager [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:02:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:23 np0005535469 kernel: tap9a960a19-c5 (unregistering): left promiscuous mode
Nov 25 12:02:23 np0005535469 NetworkManager[48891]: <info>  [1764090143.8139] device (tap9a960a19-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:02:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:23Z|01195|binding|INFO|Releasing lport 9a960a19-c599-4217-b99c-ac16fe6384b1 from this chassis (sb_readonly=0)
Nov 25 12:02:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:23Z|01196|binding|INFO|Setting lport 9a960a19-c599-4217-b99c-ac16fe6384b1 down in Southbound
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:23Z|01197|binding|INFO|Removing iface tap9a960a19-c5 ovn-installed in OVS
Nov 25 12:02:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:23Z|01198|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:02:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:23Z|01199|binding|INFO|Releasing lport 60519f01-35a8-45ac-b477-17b0e31a750f from this chassis (sb_readonly=0)
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.832 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:c9:7b 10.100.0.3'], port_security=['fa:16:3e:a8:c9:7b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '573985ee-22d8-4e8a-b764-ea06c40f2ee7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ece2fc14-3f44-4554-9543-96a461b3adc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9a960a19-c599-4217-b99c-ac16fe6384b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.833 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9a960a19-c599-4217-b99c-ac16fe6384b1 in datapath 131ae834-ee81-42ce-b61e-863b3a8d52e1 unbound from our chassis#033[00m
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.835 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 131ae834-ee81-42ce-b61e-863b3a8d52e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.836 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb22bc5-9ce7-49d9-8bc3-37adecf8caae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.837 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 namespace which is not needed anymore#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.861 254096 DEBUG nova.network.neutron [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:23 np0005535469 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 25 12:02:23 np0005535469 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000072.scope: Consumed 15.360s CPU time.
Nov 25 12:02:23 np0005535469 systemd-machined[216343]: Machine qemu-146-instance-00000072 terminated.
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.888 254096 INFO nova.compute.manager [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 0.58 seconds to deallocate network for instance.#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.936 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:23 np0005535469 nova_compute[254092]: 2025-11-25 17:02:23.938 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:23 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : haproxy version is 2.8.14-c23fe91
Nov 25 12:02:23 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : path to executable is /usr/sbin/haproxy
Nov 25 12:02:23 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [WARNING]  (380643) : Exiting Master process...
Nov 25 12:02:23 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [WARNING]  (380643) : Exiting Master process...
Nov 25 12:02:23 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [ALERT]    (380643) : Current worker (380645) exited with code 143 (Terminated)
Nov 25 12:02:23 np0005535469 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [WARNING]  (380643) : All workers exited. Exiting... (0)
Nov 25 12:02:23 np0005535469 systemd[1]: libpod-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01.scope: Deactivated successfully.
Nov 25 12:02:23 np0005535469 podman[382227]: 2025-11-25 17:02:23.992547856 +0000 UTC m=+0.046422754 container died 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.025 254096 INFO nova.virt.libvirt.driver [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance destroyed successfully.#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.026 254096 DEBUG nova.objects.instance [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.035 254096 DEBUG oslo_concurrency.processutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a30edfa0bf3043d83325a2d2e3ffee6672b0c56070c560af5182afdcb7482f9a-merged.mount: Deactivated successfully.
Nov 25 12:02:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01-userdata-shm.mount: Deactivated successfully.
Nov 25 12:02:24 np0005535469 podman[382227]: 2025-11-25 17:02:24.044541392 +0000 UTC m=+0.098416300 container cleanup 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:02:24 np0005535469 systemd[1]: libpod-conmon-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01.scope: Deactivated successfully.
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.082 254096 DEBUG nova.virt.libvirt.vif [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.083 254096 DEBUG nova.network.os_vif_util [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.083 254096 DEBUG nova.network.os_vif_util [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.084 254096 DEBUG os_vif [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.086 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a960a19-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.092 254096 INFO os_vif [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5')#033[00m
Nov 25 12:02:24 np0005535469 podman[382265]: 2025-11-25 17:02:24.117691698 +0000 UTC m=+0.042219239 container remove 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.125 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8ea83c-0ba7-4bde-a515-17d7fe5b62a8]: (4, ('Tue Nov 25 05:02:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 (352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01)\n352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01\nTue Nov 25 05:02:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 (352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01)\n352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.127 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64ab9f0d-d3af-4d68-9bea-f2be4586659d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.129 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap131ae834-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:24 np0005535469 kernel: tap131ae834-e0: left promiscuous mode
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.146 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.146 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing instance network info cache due to event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.147 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.147 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.147 254096 DEBUG nova.network.neutron [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[346c879f-912c-4c56-87dd-b8ae16d4e4bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d01cd98-567e-4f84-b7c7-86772e74b3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.168 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de7da1e8-862e-4fd1-85a3-35b471694b39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.190 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f6ba04-ea0f-4544-bff7-dbd695a53d9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655088, 'reachable_time': 33864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382299, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 systemd[1]: run-netns-ovnmeta\x2d131ae834\x2dee81\x2d42ce\x2db61e\x2d863b3a8d52e1.mount: Deactivated successfully.
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.197 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:02:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.198 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[547553f3-6254-4cae-9ad3-648a730ef7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.0 KiB/s wr, 0 op/s
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.292 254096 DEBUG nova.network.neutron [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.523 254096 INFO nova.virt.libvirt.driver [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deleting instance files /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_del#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.524 254096 INFO nova.virt.libvirt.driver [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deletion of /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_del complete#033[00m
Nov 25 12:02:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092346102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.558 254096 DEBUG oslo_concurrency.processutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.565 254096 DEBUG nova.compute.provider_tree [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.572 254096 INFO nova.compute.manager [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.572 254096 DEBUG oslo.service.loopingcall [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.573 254096 DEBUG nova.compute.manager [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.573 254096 DEBUG nova.network.neutron [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.579 254096 DEBUG nova.scheduler.client.report [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.600 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.618 254096 DEBUG nova.network.neutron [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.628 254096 INFO nova.scheduler.client.report [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Deleted allocations for instance 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.631 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.631 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-deleted-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.631 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-unplugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-unplugged-9a960a19-c599-4217-b99c-ac16fe6384b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-unplugged-9a960a19-c599-4217-b99c-ac16fe6384b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.776 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:24 np0005535469 nova_compute[254092]: 2025-11-25 17:02:24.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.380 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.380 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.381 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.381 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.381 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] No waiting events found dispatching network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.382 254096 WARNING nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received unexpected event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.382 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.383 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.383 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.383 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.384 254096 DEBUG nova.network.neutron [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.481 254096 DEBUG nova.network.neutron [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.501 254096 INFO nova.compute.manager [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.552 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.552 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.561 254096 INFO nova.network.neutron [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Port 9a960a19-c599-4217-b99c-ac16fe6384b1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.562 254096 DEBUG nova.network.neutron [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.574 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:25 np0005535469 nova_compute[254092]: 2025-11-25 17:02:25.631 254096 DEBUG oslo_concurrency.processutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138400497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.078 254096 DEBUG oslo_concurrency.processutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.085 254096 DEBUG nova.compute.provider_tree [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.111 254096 DEBUG nova.scheduler.client.report [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.145 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.172 254096 INFO nova.scheduler.client.report [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.218 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.220 254096 DEBUG nova.compute.manager [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG oslo_concurrency.lockutils [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG oslo_concurrency.lockutils [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG oslo_concurrency.lockutils [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG nova.compute.manager [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:26 np0005535469 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 WARNING nova.compute.manager [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:02:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 121 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 13 KiB/s wr, 56 op/s
Nov 25 12:02:27 np0005535469 nova_compute[254092]: 2025-11-25 17:02:27.503 254096 DEBUG nova.compute.manager [req-d80494db-578d-4920-a83d-a4bf4a1ba759 req-3785548d-1941-4bec-9ec4-6b0ee87c8b08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-deleted-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:28 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:28Z|01200|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:02:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 121 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 11 KiB/s wr, 56 op/s
Nov 25 12:02:28 np0005535469 nova_compute[254092]: 2025-11-25 17:02:28.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:28 np0005535469 nova_compute[254092]: 2025-11-25 17:02:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:29Z|01201|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:29Z|01202|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.384 254096 DEBUG nova.compute.manager [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG nova.compute.manager [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing instance network info cache due to event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG oslo_concurrency.lockutils [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG oslo_concurrency.lockutils [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG nova.network.neutron [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.457 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.457 254096 INFO nova.compute.manager [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Terminating instance#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.458 254096 DEBUG nova.compute.manager [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:29 np0005535469 kernel: tap1caaa3da-b3 (unregistering): left promiscuous mode
Nov 25 12:02:29 np0005535469 NetworkManager[48891]: <info>  [1764090149.5098] device (tap1caaa3da-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:29Z|01203|binding|INFO|Releasing lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 from this chassis (sb_readonly=0)
Nov 25 12:02:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:29Z|01204|binding|INFO|Setting lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 down in Southbound
Nov 25 12:02:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:29Z|01205|binding|INFO|Removing iface tap1caaa3da-b3 ovn-installed in OVS
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.518 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.522 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:ec:69 10.100.0.8'], port_security=['fa:16:3e:97:ec:69 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'caf64ca2-5f73-454a-8442-9965c9853cba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b034178-39ad-4db7-adab-aaf6bc34bd4a e7198f6b-79d7-48d7-845d-93c396c87f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e96b6e1-6935-4458-bc78-50ea3ed2412d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1caaa3da-b3eb-4441-b6b2-8eaa71146e77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.523 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 in datapath 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 unbound from our chassis#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.524 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.525 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c458bbc-b7eb-45ca-af0a-d3e6b4862483]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.526 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 namespace which is not needed anymore#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 25 12:02:29 np0005535469 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Consumed 16.470s CPU time.
Nov 25 12:02:29 np0005535469 systemd-machined[216343]: Machine qemu-145-instance-00000071 terminated.
Nov 25 12:02:29 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : haproxy version is 2.8.14-c23fe91
Nov 25 12:02:29 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : path to executable is /usr/sbin/haproxy
Nov 25 12:02:29 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [WARNING]  (378870) : Exiting Master process...
Nov 25 12:02:29 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [WARNING]  (378870) : Exiting Master process...
Nov 25 12:02:29 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [ALERT]    (378870) : Current worker (378872) exited with code 143 (Terminated)
Nov 25 12:02:29 np0005535469 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [WARNING]  (378870) : All workers exited. Exiting... (0)
Nov 25 12:02:29 np0005535469 systemd[1]: libpod-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72.scope: Deactivated successfully.
Nov 25 12:02:29 np0005535469 podman[382367]: 2025-11-25 17:02:29.661354541 +0000 UTC m=+0.042770974 container died 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:02:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72-userdata-shm.mount: Deactivated successfully.
Nov 25 12:02:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a186dab47ed7f953257a28b5bb609e6cd78bd265db3c45fccf8e31014045c990-merged.mount: Deactivated successfully.
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.692 254096 INFO nova.virt.libvirt.driver [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance destroyed successfully.#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.693 254096 DEBUG nova.objects.instance [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid caf64ca2-5f73-454a-8442-9965c9853cba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:29 np0005535469 podman[382367]: 2025-11-25 17:02:29.696709361 +0000 UTC m=+0.078125794 container cleanup 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.703 254096 DEBUG nova.virt.libvirt.vif [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=113,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC5IGkA0EZcfpUrnJAPAyNPE3aP4ux+1YVZrN6xmNxNmyBv5luv7uNh5XsGgePIRhuMTv5vEwnWkpC0iguYDb2SFlQPUW7qNQRGe9ic9lTfmn148JQBqNQ9VGr6RxpqguQ==',key_name='tempest-TestSecurityGroupsBasicOps-1631979734',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:00:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-o6t5fl29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:00:55Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=caf64ca2-5f73-454a-8442-9965c9853cba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.704 254096 DEBUG nova.network.os_vif_util [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.705 254096 DEBUG nova.network.os_vif_util [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.705 254096 DEBUG os_vif [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.707 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1caaa3da-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.712 254096 INFO os_vif [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3')#033[00m
Nov 25 12:02:29 np0005535469 systemd[1]: libpod-conmon-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72.scope: Deactivated successfully.
Nov 25 12:02:29 np0005535469 podman[382404]: 2025-11-25 17:02:29.759264136 +0000 UTC m=+0.040524972 container remove 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a204f1a-c83b-4eed-ad01-fe4e2f33b4c1]: (4, ('Tue Nov 25 05:02:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 (35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72)\n35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72\nTue Nov 25 05:02:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 (35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72)\n35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.767 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aef5e4-90e5-4190-b72d-6ff43e8e14b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.768 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f953cb4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 kernel: tap0f953cb4-a0: left promiscuous mode
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.783 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4efc4827-b6a5-4dbd-80f0-156c6658995f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72dfe14d-bf32-41b2-9524-a00d1c649105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.794 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5232ef4a-a92f-4a4c-872e-3b521ccc407f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[124e3724-3997-4dbd-966f-20e4fe33fdde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651180, 'reachable_time': 40219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382438, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 systemd[1]: run-netns-ovnmeta\x2d0f953cb4\x2dad2a\x2d4f81\x2d8b2f\x2da1d71f5b8cf2.mount: Deactivated successfully.
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.813 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:02:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.813 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2ab195-7cb5-40cf-a42b-657f1c87443a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:29 np0005535469 nova_compute[254092]: 2025-11-25 17:02:29.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:30 np0005535469 nova_compute[254092]: 2025-11-25 17:02:30.135 254096 INFO nova.virt.libvirt.driver [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deleting instance files /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba_del#033[00m
Nov 25 12:02:30 np0005535469 nova_compute[254092]: 2025-11-25 17:02:30.136 254096 INFO nova.virt.libvirt.driver [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deletion of /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba_del complete#033[00m
Nov 25 12:02:30 np0005535469 nova_compute[254092]: 2025-11-25 17:02:30.184 254096 INFO nova.compute.manager [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:02:30 np0005535469 nova_compute[254092]: 2025-11-25 17:02:30.185 254096 DEBUG oslo.service.loopingcall [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:02:30 np0005535469 nova_compute[254092]: 2025-11-25 17:02:30.185 254096 DEBUG nova.compute.manager [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:02:30 np0005535469 nova_compute[254092]: 2025-11-25 17:02:30.185 254096 DEBUG nova.network.neutron [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:02:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 76 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 KiB/s wr, 60 op/s
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.270 254096 DEBUG nova.network.neutron [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.284 254096 INFO nova.compute.manager [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 1.10 seconds to deallocate network for instance.#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.325 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.326 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.329 254096 DEBUG nova.network.neutron [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updated VIF entry in instance network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.330 254096 DEBUG nova.network.neutron [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.333 254096 DEBUG nova.compute.manager [req-9e467b53-fa53-4e7f-a8eb-c0fff2fcbd6f req-5dd574ec-553f-47a5-a201-7f6fe0bcb5b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-deleted-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.349 254096 DEBUG oslo_concurrency.lockutils [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.378 254096 DEBUG oslo_concurrency.processutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.477 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-unplugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.478 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.478 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.479 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.479 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] No waiting events found dispatching network-vif-unplugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.480 254096 WARNING nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received unexpected event network-vif-unplugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.480 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.481 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.481 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.483 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.483 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] No waiting events found dispatching network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.484 254096 WARNING nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received unexpected event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:02:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100472373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.840 254096 DEBUG oslo_concurrency.processutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.847 254096 DEBUG nova.compute.provider_tree [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.864 254096 DEBUG nova.scheduler.client.report [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.885 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.926 254096 INFO nova.scheduler.client.report [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance caf64ca2-5f73-454a-8442-9965c9853cba#033[00m
Nov 25 12:02:31 np0005535469 nova_compute[254092]: 2025-11-25 17:02:31.986 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 41 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 11 KiB/s wr, 82 op/s
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351245696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:32 np0005535469 nova_compute[254092]: 2025-11-25 17:02:32.956 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.161 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.162 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3765MB free_disk=59.97190475463867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.162 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.163 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.227 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.227 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.242 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372710800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.719 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.728 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.750 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.780 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:02:33 np0005535469 nova_compute[254092]: 2025-11-25 17:02:33.780 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 41 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 9.5 KiB/s wr, 81 op/s
Nov 25 12:02:34 np0005535469 nova_compute[254092]: 2025-11-25 17:02:34.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:34 np0005535469 nova_compute[254092]: 2025-11-25 17:02:34.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:35 np0005535469 nova_compute[254092]: 2025-11-25 17:02:35.782 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 9.8 KiB/s wr, 83 op/s
Nov 25 12:02:37 np0005535469 nova_compute[254092]: 2025-11-25 17:02:37.849 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090142.8476992, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:02:37 np0005535469 nova_compute[254092]: 2025-11-25 17:02:37.850 254096 INFO nova.compute.manager [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:02:37 np0005535469 nova_compute[254092]: 2025-11-25 17:02:37.874 254096 DEBUG nova.compute.manager [None req-edc7846e-6919-4b1b-b55f-4c07b52f5def - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:02:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:02:38 np0005535469 nova_compute[254092]: 2025-11-25 17:02:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:02:38 np0005535469 nova_compute[254092]: 2025-11-25 17:02:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:02:38 np0005535469 nova_compute[254092]: 2025-11-25 17:02:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:02:38 np0005535469 nova_compute[254092]: 2025-11-25 17:02:38.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:02:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:39 np0005535469 nova_compute[254092]: 2025-11-25 17:02:39.023 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090144.0231204, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:02:39 np0005535469 nova_compute[254092]: 2025-11-25 17:02:39.024 254096 INFO nova.compute.manager [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Stopped (Lifecycle Event)
Nov 25 12:02:39 np0005535469 nova_compute[254092]: 2025-11-25 17:02:39.049 254096 DEBUG nova.compute.manager [None req-71500f56-a537-41fa-95f6-a7c715edce1f - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:02:39 np0005535469 nova_compute[254092]: 2025-11-25 17:02:39.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:39 np0005535469 nova_compute[254092]: 2025-11-25 17:02:39.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:39 np0005535469 nova_compute[254092]: 2025-11-25 17:02:39.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:02:40
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes', 'vms']
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:02:41 np0005535469 podman[382508]: 2025-11-25 17:02:41.654961772 +0000 UTC m=+0.062295060 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 12:02:41 np0005535469 podman[382507]: 2025-11-25 17:02:41.692583564 +0000 UTC m=+0.103253253 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:02:41 np0005535469 podman[382509]: 2025-11-25 17:02:41.722266878 +0000 UTC m=+0.124331861 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:02:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 25 12:02:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 12:02:44 np0005535469 nova_compute[254092]: 2025-11-25 17:02:44.691 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090149.687964, caf64ca2-5f73-454a-8442-9965c9853cba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:02:44 np0005535469 nova_compute[254092]: 2025-11-25 17:02:44.691 254096 INFO nova.compute.manager [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Stopped (Lifecycle Event)
Nov 25 12:02:44 np0005535469 nova_compute[254092]: 2025-11-25 17:02:44.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:44 np0005535469 nova_compute[254092]: 2025-11-25 17:02:44.726 254096 DEBUG nova.compute.manager [None req-b91c3135-c46e-454a-b5a9-4d5afb28690f - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:02:44 np0005535469 nova_compute[254092]: 2025-11-25 17:02:44.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 12:02:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:02:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:49 np0005535469 nova_compute[254092]: 2025-11-25 17:02:49.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:49 np0005535469 nova_compute[254092]: 2025-11-25 17:02:49.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:02:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.103 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.104 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.124 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.228 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.229 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.235 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.235 254096 INFO nova.compute.claims [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.407 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:02:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:02:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742630985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.861 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.867 254096 DEBUG nova.compute.provider_tree [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.887 254096 DEBUG nova.scheduler.client.report [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.921 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.922 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.960 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.960 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.975 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:02:51 np0005535469 nova_compute[254092]: 2025-11-25 17:02:51.988 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.071 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.072 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.072 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Creating image(s)
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.092 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.111 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.131 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.134 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.237 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.238 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.239 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.239 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.258 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.261 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73a187fa-5479-4191-bd44-757c3840137a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:02:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.512 254096 DEBUG nova.policy [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.538 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73a187fa-5479-4191-bd44-757c3840137a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.626 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.727 254096 DEBUG nova.objects.instance [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.739 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.740 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Ensure instance console log exists: /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.740 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.741 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:02:52 np0005535469 nova_compute[254092]: 2025-11-25 17:02:52.741 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:02:53 np0005535469 nova_compute[254092]: 2025-11-25 17:02:53.571 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Successfully created port: b0512c6a-fbc4-4639-8508-e6493d18bd3a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:02:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.483 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Successfully updated port: b0512c6a-fbc4-4639-8508-e6493d18bd3a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.538 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.538 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.539 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.605 254096 DEBUG nova.compute.manager [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.605 254096 DEBUG nova.compute.manager [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing instance network info cache due to event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.605 254096 DEBUG oslo_concurrency.lockutils [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.661 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:54 np0005535469 nova_compute[254092]: 2025-11-25 17:02:54.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.232 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.232 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.275 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:02:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:02:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949095382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:02:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:02:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949095382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.365 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.366 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.381 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.382 254096 INFO nova.compute.claims [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.488 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.510 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.511 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance network_info: |[{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.512 254096 DEBUG oslo_concurrency.lockutils [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.513 254096 DEBUG nova.network.neutron [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.518 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start _get_guest_xml network_info=[{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.530 254096 WARNING nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.535 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.536 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.543 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.591 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.593 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.594 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.594 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.595 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.597 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.597 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.597 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.598 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.598 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.603 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:02:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154018297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:02:55 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.994 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:55.999 254096 DEBUG nova.compute.provider_tree [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.013 254096 DEBUG nova.scheduler.client.report [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.032 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.033 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:02:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:02:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002318026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.072 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.072 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.079 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.100 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.104 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.141 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.170 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.287 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.288 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.289 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Creating image(s)#033[00m
Nov 25 12:02:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 88 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.308 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.327 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.344 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.347 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.379 254096 DEBUG nova.policy [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.415 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.416 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.416 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.416 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.434 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.438 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:02:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595674024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.564 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.567 254096 DEBUG nova.virt.libvirt.vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-365721910',display_name='tempest-TestNetworkBasicOps-server-365721910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-365721910',id=116,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8fF/LQWIeXhdQwSKSLAErz4jvUPWKjw4L1GZb+ASjyhqtlkSjIxhUDwHlkKBv0qHWvbUkdKHkhYl9JuEVV2LarQZvIoe1QUEsDx05YVfl0dpyKfWcSmCOAyR6fZFjEqA==',key_name='tempest-TestNetworkBasicOps-1895434798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5bv1kvw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:52Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=73a187fa-5479-4191-bd44-757c3840137a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.568 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.570 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.572 254096 DEBUG nova.objects.instance [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.588 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <uuid>73a187fa-5479-4191-bd44-757c3840137a</uuid>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <name>instance-00000074</name>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-365721910</nova:name>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:02:55</nova:creationTime>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <nova:port uuid="b0512c6a-fbc4-4639-8508-e6493d18bd3a">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <entry name="serial">73a187fa-5479-4191-bd44-757c3840137a</entry>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <entry name="uuid">73a187fa-5479-4191-bd44-757c3840137a</entry>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/73a187fa-5479-4191-bd44-757c3840137a_disk">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/73a187fa-5479-4191-bd44-757c3840137a_disk.config">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d7:b5:b4"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <target dev="tapb0512c6a-fb"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/console.log" append="off"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:02:56 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:02:56 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:02:56 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:02:56 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.590 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Preparing to wait for external event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.591 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.592 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.593 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.595 254096 DEBUG nova.virt.libvirt.vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-365721910',display_name='tempest-TestNetworkBasicOps-server-365721910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-365721910',id=116,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8fF/LQWIeXhdQwSKSLAErz4jvUPWKjw4L1GZb+ASjyhqtlkSjIxhUDwHlkKBv0qHWvbUkdKHkhYl9JuEVV2LarQZvIoe1QUEsDx05YVfl0dpyKfWcSmCOAyR6fZFjEqA==',key_name='tempest-TestNetworkBasicOps-1895434798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5bv1kvw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:52Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=73a187fa-5479-4191-bd44-757c3840137a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.596 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.598 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.599 254096 DEBUG os_vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.602 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.603 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.609 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0512c6a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.611 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0512c6a-fb, col_values=(('external_ids', {'iface-id': 'b0512c6a-fbc4-4639-8508-e6493d18bd3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:b5:b4', 'vm-uuid': '73a187fa-5479-4191-bd44-757c3840137a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:56 np0005535469 NetworkManager[48891]: <info>  [1764090176.6141] manager: (tapb0512c6a-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.622 254096 INFO os_vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb')#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.688 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.689 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.689 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:d7:b5:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.689 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Using config drive#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.709 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.795 254096 DEBUG nova.network.neutron [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated VIF entry in instance network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.795 254096 DEBUG nova.network.neutron [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.818 254096 DEBUG oslo_concurrency.lockutils [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:56 np0005535469 nova_compute[254092]: 2025-11-25 17:02:56.959 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.012 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.072 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Creating config drive at /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.077 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8h_emcr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.154 254096 DEBUG nova.objects.instance [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid 12deb2c6-31fb-4186-940b-8131e43ea3f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.175 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.175 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Ensure instance console log exists: /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.176 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.177 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.177 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.236 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8h_emcr" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.274 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.280 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config 73a187fa-5479-4191-bd44-757c3840137a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.379 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Successfully created port: 142675a5-3c37-4e43-9f80-e8fedd63f3cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.490 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config 73a187fa-5479-4191-bd44-757c3840137a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.492 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deleting local config drive /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config because it was imported into RBD.#033[00m
Nov 25 12:02:57 np0005535469 kernel: tapb0512c6a-fb: entered promiscuous mode
Nov 25 12:02:57 np0005535469 NetworkManager[48891]: <info>  [1764090177.5926] manager: (tapb0512c6a-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Nov 25 12:02:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:57Z|01206|binding|INFO|Claiming lport b0512c6a-fbc4-4639-8508-e6493d18bd3a for this chassis.
Nov 25 12:02:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:57Z|01207|binding|INFO|b0512c6a-fbc4-4639-8508-e6493d18bd3a: Claiming fa:16:3e:d7:b5:b4 10.100.0.9
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.605 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b5:b4 10.100.0.9'], port_security=['fa:16:3e:d7:b5:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73a187fa-5479-4191-bd44-757c3840137a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05b9ee3b-4cbe-486c-b386-9a71c1c7373a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0512c6a-fbc4-4639-8508-e6493d18bd3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.607 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0512c6a-fbc4-4639-8508-e6493d18bd3a in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce bound to our chassis#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.608 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.618 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48fe6405-c08b-499a-aaa3-822055e92850]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.620 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd54b59f-81 in ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.622 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd54b59f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.622 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3e2029-62cc-44fa-bf0f-1f911490e6ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7da812b8-9eb1-4c83-a448-95819239aa58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 systemd-udevd[383076]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:02:57 np0005535469 systemd-machined[216343]: New machine qemu-148-instance-00000074.
Nov 25 12:02:57 np0005535469 NetworkManager[48891]: <info>  [1764090177.6381] device (tapb0512c6a-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.636 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3ad6fe-7dab-488d-9186-b5f4e5ae0304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 NetworkManager[48891]: <info>  [1764090177.6390] device (tapb0512c6a-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Nov 25 12:02:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:57Z|01208|binding|INFO|Setting lport b0512c6a-fbc4-4639-8508-e6493d18bd3a ovn-installed in OVS
Nov 25 12:02:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:57Z|01209|binding|INFO|Setting lport b0512c6a-fbc4-4639-8508-e6493d18bd3a up in Southbound
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.663 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6efaf004-f326-4549-9d90-acb89ff7f403]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.691 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da8a5e08-c51d-45ad-89a8-c6bc1afc4456]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[588150be-28e1-475a-8900-e3b7dbcc77bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 NetworkManager[48891]: <info>  [1764090177.6976] manager: (tapcd54b59f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Nov 25 12:02:57 np0005535469 systemd-udevd[383080]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.727 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2eb403-d49f-4562-9a75-10d56837dfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.731 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[049b5b74-7132-424b-8f38-0a7931c66378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 NetworkManager[48891]: <info>  [1764090177.7569] device (tapcd54b59f-80): carrier: link connected
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.761 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9835f549-ccbb-4a63-b64d-3f50f9bc5329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[519dd5ff-3d2e-42aa-bd24-4e07a5b6c926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383109, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[739a8534-962f-4f14-bd10-0a89bf64beab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:f118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663534, 'tstamp': 663534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383110, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f96714f2-74c2-49e0-8e44-279b89bd9931]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383111, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.838 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b39cfbf8-7d36-4335-a401-6687822306bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.893 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92a605ed-ede1-491f-a8a0-6232b90e72d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.894 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.895 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.896 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd54b59f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 kernel: tapcd54b59f-80: entered promiscuous mode
Nov 25 12:02:57 np0005535469 NetworkManager[48891]: <info>  [1764090177.9016] manager: (tapcd54b59f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.902 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.906 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd54b59f-80, col_values=(('external_ids', {'iface-id': 'd6826306-6b17-44f3-a972-7d5089250616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:02:57Z|01210|binding|INFO|Releasing lport d6826306-6b17-44f3-a972-7d5089250616 from this chassis (sb_readonly=0)
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.908 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.909 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[36bec1dd-2fac-4b1f-b60c-e2630555ee51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.910 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.pid.haproxy
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:02:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.910 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'env', 'PROCESS_TAG=haproxy-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.925 254096 DEBUG nova.compute.manager [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.926 254096 DEBUG oslo_concurrency.lockutils [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.926 254096 DEBUG oslo_concurrency.lockutils [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.926 254096 DEBUG oslo_concurrency.lockutils [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:57 np0005535469 nova_compute[254092]: 2025-11-25 17:02:57.927 254096 DEBUG nova.compute.manager [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Processing event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.093 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090178.0933955, 73a187fa-5479-4191-bd44-757c3840137a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.095 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Started (Lifecycle Event)#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.097 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.101 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.104 254096 INFO nova.virt.libvirt.driver [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance spawned successfully.#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.105 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.118 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.124 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.131 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.131 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.132 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.132 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.133 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.133 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.158 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.158 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090178.0935426, 73a187fa-5479-4191-bd44-757c3840137a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.159 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.185 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.190 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090178.0998085, 73a187fa-5479-4191-bd44-757c3840137a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.191 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.208 254096 INFO nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 6.14 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.208 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.210 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.215 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.255 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.274 254096 INFO nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 7.07 seconds to build instance.#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.291 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:02:58 np0005535469 podman[383185]: 2025-11-25 17:02:58.303522491 +0000 UTC m=+0.053235111 container create 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:02:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 88 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:02:58 np0005535469 systemd[1]: Started libpod-conmon-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434.scope.
Nov 25 12:02:58 np0005535469 podman[383185]: 2025-11-25 17:02:58.271206475 +0000 UTC m=+0.020919115 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:02:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:02:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e893ac8ad73aa3d08cc59807e1b727426905ed7800c4aa1c5d54d074cb489584/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:02:58 np0005535469 podman[383185]: 2025-11-25 17:02:58.406151137 +0000 UTC m=+0.155863757 container init 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:02:58 np0005535469 podman[383185]: 2025-11-25 17:02:58.411263127 +0000 UTC m=+0.160975747 container start 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 12:02:58 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : New worker (383207) forked
Nov 25 12:02:58 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : Loading success.
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.604 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Successfully updated port: 142675a5-3c37-4e43-9f80-e8fedd63f3cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.617 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.617 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.617 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:02:58 np0005535469 nova_compute[254092]: 2025-11-25 17:02:58.776 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:02:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.663 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.678 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.679 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance network_info: |[{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.683 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start _get_guest_xml network_info=[{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.688 254096 WARNING nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.694 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.695 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.698 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.698 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.699 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.699 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.700 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.700 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.701 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.701 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.701 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.702 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.702 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.702 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.704 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.704 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.708 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:02:59 np0005535469 nova_compute[254092]: 2025-11-25 17:02:59.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.020 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.022 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.024 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.025 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.026 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] No waiting events found dispatching network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.027 254096 WARNING nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received unexpected event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a for instance with vm_state active and task_state None.#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.028 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.029 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing instance network info cache due to event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.031 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.032 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.033 254096 DEBUG nova.network.neutron [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005194116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.142 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.169 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.175 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 103 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 25 12:03:00 np0005535469 NetworkManager[48891]: <info>  [1764090180.5802] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Nov 25 12:03:00 np0005535469 NetworkManager[48891]: <info>  [1764090180.5819] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.589 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680489669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.660 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.662 254096 DEBUG nova.virt.libvirt.vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=117,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-ez50uxgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=12deb2c6-31fb-4186-940b-8131e43ea3f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.662 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.663 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.664 254096 DEBUG nova.objects.instance [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12deb2c6-31fb-4186-940b-8131e43ea3f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:00Z|01211|binding|INFO|Releasing lport d6826306-6b17-44f3-a972-7d5089250616 from this chassis (sb_readonly=0)
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.684 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <uuid>12deb2c6-31fb-4186-940b-8131e43ea3f8</uuid>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <name>instance-00000075</name>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716</nova:name>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:02:59</nova:creationTime>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <nova:port uuid="142675a5-3c37-4e43-9f80-e8fedd63f3cf">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <entry name="serial">12deb2c6-31fb-4186-940b-8131e43ea3f8</entry>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <entry name="uuid">12deb2c6-31fb-4186-940b-8131e43ea3f8</entry>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/12deb2c6-31fb-4186-940b-8131e43ea3f8_disk">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:4d:64:18"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <target dev="tap142675a5-3c"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/console.log" append="off"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:03:00 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:03:00 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:03:00 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:03:00 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.685 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Preparing to wait for external event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.685 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.685 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.686 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.686 254096 DEBUG nova.virt.libvirt.vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=117,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-ez50uxgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=12deb2c6-31fb-4186-940b-8131e43ea3f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.687 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.687 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.688 254096 DEBUG os_vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.688 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.689 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap142675a5-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap142675a5-3c, col_values=(('external_ids', {'iface-id': '142675a5-3c37-4e43-9f80-e8fedd63f3cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:64:18', 'vm-uuid': '12deb2c6-31fb-4186-940b-8131e43ea3f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.697 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 NetworkManager[48891]: <info>  [1764090180.6984] manager: (tap142675a5-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.706 254096 INFO os_vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c')#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.752 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.753 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.753 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:4d:64:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.753 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Using config drive#033[00m
Nov 25 12:03:00 np0005535469 nova_compute[254092]: 2025-11-25 17:03:00.770 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.954614) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090180954659, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 251, "total_data_size": 2344495, "memory_usage": 2377952, "flush_reason": "Manual Compaction"}
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090180967210, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2310245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48401, "largest_seqno": 49921, "table_properties": {"data_size": 2303162, "index_size": 4090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14991, "raw_average_key_size": 20, "raw_value_size": 2288989, "raw_average_value_size": 3068, "num_data_blocks": 182, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090022, "oldest_key_time": 1764090022, "file_creation_time": 1764090180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 12642 microseconds, and 6135 cpu microseconds.
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.967252) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2310245 bytes OK
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.967271) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.969491) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.969521) EVENT_LOG_v1 {"time_micros": 1764090180969511, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.969548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2337797, prev total WAL file size 2337797, number of live WAL files 2.
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.970788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2256KB)], [110(8002KB)]
Nov 25 12:03:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090180970855, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 10504823, "oldest_snapshot_seqno": -1}
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7035 keys, 8754317 bytes, temperature: kUnknown
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090181040183, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8754317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8708835, "index_size": 26810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 183920, "raw_average_key_size": 26, "raw_value_size": 8584273, "raw_average_value_size": 1220, "num_data_blocks": 1044, "num_entries": 7035, "num_filter_entries": 7035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.040408) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8754317 bytes
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.042176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.4 rd, 126.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.8 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 7549, records dropped: 514 output_compression: NoCompression
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.042202) EVENT_LOG_v1 {"time_micros": 1764090181042191, "job": 66, "event": "compaction_finished", "compaction_time_micros": 69400, "compaction_time_cpu_micros": 20362, "output_level": 6, "num_output_files": 1, "total_output_size": 8754317, "num_input_records": 7549, "num_output_records": 7035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090181042838, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090181044632, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.970689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:03:01 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.123 254096 DEBUG nova.compute.manager [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG nova.compute.manager [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing instance network info cache due to event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG oslo_concurrency.lockutils [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG oslo_concurrency.lockutils [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG nova.network.neutron [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.233 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Creating config drive at /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.238 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d5flnyv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.282 254096 DEBUG nova.network.neutron [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updated VIF entry in instance network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.282 254096 DEBUG nova.network.neutron [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.298 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.378 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d5flnyv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.404 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.410 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.615 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.616 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deleting local config drive /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config because it was imported into RBD.#033[00m
Nov 25 12:03:01 np0005535469 kernel: tap142675a5-3c: entered promiscuous mode
Nov 25 12:03:01 np0005535469 NetworkManager[48891]: <info>  [1764090181.6708] manager: (tap142675a5-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:01Z|01212|binding|INFO|Claiming lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf for this chassis.
Nov 25 12:03:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:01Z|01213|binding|INFO|142675a5-3c37-4e43-9f80-e8fedd63f3cf: Claiming fa:16:3e:4d:64:18 10.100.0.3
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.686 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:64:18 10.100.0.3'], port_security=['fa:16:3e:4d:64:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '12deb2c6-31fb-4186-940b-8131e43ea3f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d070883-7c27-4b8a-ba4e-1b0864814a05 3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=142675a5-3c37-4e43-9f80-e8fedd63f3cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.687 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 142675a5-3c37-4e43-9f80-e8fedd63f3cf in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 bound to our chassis#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.689 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d8d234-0a41-496f-82c7-0c98aa4761b8#033[00m
Nov 25 12:03:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:01Z|01214|binding|INFO|Setting lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf ovn-installed in OVS
Nov 25 12:03:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:01Z|01215|binding|INFO|Setting lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf up in Southbound
Nov 25 12:03:01 np0005535469 nova_compute[254092]: 2025-11-25 17:03:01.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c246db76-40ed-4655-bcf2-d396ad1513b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.704 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51d8d234-01 in ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.705 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51d8d234-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.705 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a024c44d-d90b-4a30-947d-ea74619ee278]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47972a2e-086a-4447-aae3-624ee98f266e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 systemd-machined[216343]: New machine qemu-149-instance-00000075.
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.718 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[78aee2a6-b40b-4904-a836-8a666d96c21b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 systemd[1]: Started Virtual Machine qemu-149-instance-00000075.
Nov 25 12:03:01 np0005535469 systemd-udevd[383355]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:03:01 np0005535469 NetworkManager[48891]: <info>  [1764090181.7607] device (tap142675a5-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:03:01 np0005535469 NetworkManager[48891]: <info>  [1764090181.7617] device (tap142675a5-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5009f8-7700-4473-8c4e-d8a9799a2db1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.792 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d47719a3-660f-43ef-b531-623f75bbb876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 systemd-udevd[383357]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79427c90-1fd0-41c6-9d4e-ffedb5b20dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 NetworkManager[48891]: <info>  [1764090181.7998] manager: (tap51d8d234-00): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.845 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8e07f7-8c36-4a18-b009-50f22903b97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.847 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3a254952-1022-499f-92fb-e180191ba284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 NetworkManager[48891]: <info>  [1764090181.8730] device (tap51d8d234-00): carrier: link connected
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.880 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28199e01-e8ce-4005-a460-9b8a0771eb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.899 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c151136-6d58-4eb9-9c3a-28fd4d1a86c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383385, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31594a13-362d-4d32-b744-d1f7f3417949]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:9b5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663946, 'tstamp': 663946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383386, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72de177a-2dde-47d8-8ba3-2c94cbb813d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383387, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.959 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3493c3-4115-4a67-b0b8-0cbddd884f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c2967-301b-45c9-81f1-5229d1fedf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.054 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.055 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.055 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d8d234-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:02 np0005535469 NetworkManager[48891]: <info>  [1764090182.0578] manager: (tap51d8d234-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Nov 25 12:03:02 np0005535469 kernel: tap51d8d234-00: entered promiscuous mode
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.063 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d8d234-00, col_values=(('external_ids', {'iface-id': '81151907-1dea-47e9-9a64-99f3b4cd12fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:02 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:02Z|01216|binding|INFO|Releasing lport 81151907-1dea-47e9-9a64-99f3b4cd12fa from this chassis (sb_readonly=0)
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.131 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51d8d234-0a41-496f-82c7-0c98aa4761b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51d8d234-0a41-496f-82c7-0c98aa4761b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85fe2416-cfdd-463c-99bb-5d66e32a184c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.133 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/51d8d234-0a41-496f-82c7-0c98aa4761b8.pid.haproxy
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:03:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.135 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'env', 'PROCESS_TAG=haproxy-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51d8d234-0a41-496f-82c7-0c98aa4761b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:03:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.465 254096 DEBUG nova.network.neutron [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated VIF entry in instance network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.466 254096 DEBUG nova.network.neutron [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.485 254096 DEBUG oslo_concurrency.lockutils [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:02 np0005535469 podman[383459]: 2025-11-25 17:03:02.501306759 +0000 UTC m=+0.063392210 container create f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.505 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090182.5052378, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.506 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Started (Lifecycle Event)#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.520 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.524 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090182.5061996, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.524 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.542 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.547 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:03:02 np0005535469 systemd[1]: Started libpod-conmon-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087.scope.
Nov 25 12:03:02 np0005535469 podman[383459]: 2025-11-25 17:03:02.465398824 +0000 UTC m=+0.027484335 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:03:02 np0005535469 nova_compute[254092]: 2025-11-25 17:03:02.563 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:03:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1fa6406befdaaca4f1d07638e3ce32c9871c9601e12c9f35106bd16f6c1c0d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:02 np0005535469 podman[383459]: 2025-11-25 17:03:02.598249267 +0000 UTC m=+0.160334718 container init f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:03:02 np0005535469 podman[383459]: 2025-11-25 17:03:02.609209078 +0000 UTC m=+0.171294549 container start f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:03:02 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : New worker (383482) forked
Nov 25 12:03:02 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : Loading success.
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.235 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.235 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.236 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.237 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.237 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Processing event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.237 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.238 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.238 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.238 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.239 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] No waiting events found dispatching network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.239 254096 WARNING nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received unexpected event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.240 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.246 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.247 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090183.246282, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.248 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.255 254096 INFO nova.virt.libvirt.driver [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance spawned successfully.#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.255 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.281 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.290 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.291 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.292 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.293 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.294 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.294 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.301 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.326 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.369 254096 INFO nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 7.08 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.370 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.436 254096 INFO nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 8.10 seconds to build instance.#033[00m
Nov 25 12:03:03 np0005535469 nova_compute[254092]: 2025-11-25 17:03:03.450 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 25 12:03:04 np0005535469 nova_compute[254092]: 2025-11-25 17:03:04.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:05 np0005535469 nova_compute[254092]: 2025-11-25 17:03:05.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 12:03:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 25 12:03:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:09.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:09.138 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG nova.compute.manager [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG nova.compute.manager [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing instance network info cache due to event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG oslo_concurrency.lockutils [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG oslo_concurrency.lockutils [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.652 254096 DEBUG nova.network.neutron [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:03:09 np0005535469 nova_compute[254092]: 2025-11-25 17:03:09.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:03:10 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 12:03:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 137 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 178 op/s
Nov 25 12:03:10 np0005535469 nova_compute[254092]: 2025-11-25 17:03:10.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:11 np0005535469 nova_compute[254092]: 2025-11-25 17:03:11.753 254096 DEBUG nova.network.neutron [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updated VIF entry in instance network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:03:11 np0005535469 nova_compute[254092]: 2025-11-25 17:03:11.753 254096 DEBUG nova.network.neutron [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:11 np0005535469 nova_compute[254092]: 2025-11-25 17:03:11.902 254096 DEBUG oslo_concurrency.lockutils [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 147 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Nov 25 12:03:12 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:12Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:b5:b4 10.100.0.9
Nov 25 12:03:12 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:12Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:b5:b4 10.100.0.9
Nov 25 12:03:12 np0005535469 podman[383494]: 2025-11-25 17:03:12.648402794 +0000 UTC m=+0.054328650 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:03:12 np0005535469 podman[383493]: 2025-11-25 17:03:12.676102464 +0000 UTC m=+0.084098698 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:03:12 np0005535469 podman[383495]: 2025-11-25 17:03:12.686836879 +0000 UTC m=+0.084822609 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.140 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:03:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 147 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Nov 25 12:03:14 np0005535469 nova_compute[254092]: 2025-11-25 17:03:14.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:03:15 np0005535469 nova_compute[254092]: 2025-11-25 17:03:15.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:03:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 165 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Nov 25 12:03:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:17Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:64:18 10.100.0.3
Nov 25 12:03:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:17Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:64:18 10.100.0.3
Nov 25 12:03:18 np0005535469 nova_compute[254092]: 2025-11-25 17:03:18.141 254096 INFO nova.compute.manager [None req-ca50335a-bedf-4755-a1a5-28ade0816061 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Get console output
Nov 25 12:03:18 np0005535469 nova_compute[254092]: 2025-11-25 17:03:18.147 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 12:03:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 165 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Nov 25 12:03:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:19 np0005535469 nova_compute[254092]: 2025-11-25 17:03:19.904 254096 DEBUG nova.compute.manager [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:03:19 np0005535469 nova_compute[254092]: 2025-11-25 17:03:19.906 254096 DEBUG nova.compute.manager [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing instance network info cache due to event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:03:19 np0005535469 nova_compute[254092]: 2025-11-25 17:03:19.906 254096 DEBUG oslo_concurrency.lockutils [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:03:19 np0005535469 nova_compute[254092]: 2025-11-25 17:03:19.906 254096 DEBUG oslo_concurrency.lockutils [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:03:19 np0005535469 nova_compute[254092]: 2025-11-25 17:03:19.907 254096 DEBUG nova.network.neutron [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:03:19 np0005535469 nova_compute[254092]: 2025-11-25 17:03:19.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:03:20 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9904e545-a491-41a1-a29e-be89a6e15be3 does not exist
Nov 25 12:03:20 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7ee7dbb0-eaea-4376-9d93-df1aef7b8595 does not exist
Nov 25 12:03:20 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f3fd32c5-181a-4489-abd3-88bdae72dc8c does not exist
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:03:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 178 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:03:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:03:20 np0005535469 nova_compute[254092]: 2025-11-25 17:03:20.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:03:20 np0005535469 podman[383828]: 2025-11-25 17:03:20.745092871 +0000 UTC m=+0.073014443 container create bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:03:20 np0005535469 podman[383828]: 2025-11-25 17:03:20.701371023 +0000 UTC m=+0.029292585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:03:20 np0005535469 systemd[1]: Started libpod-conmon-bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29.scope.
Nov 25 12:03:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:20 np0005535469 podman[383828]: 2025-11-25 17:03:20.974397131 +0000 UTC m=+0.302318683 container init bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:03:20 np0005535469 podman[383828]: 2025-11-25 17:03:20.987959173 +0000 UTC m=+0.315880705 container start bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:03:20 np0005535469 cool_pascal[383844]: 167 167
Nov 25 12:03:20 np0005535469 systemd[1]: libpod-bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29.scope: Deactivated successfully.
Nov 25 12:03:21 np0005535469 podman[383828]: 2025-11-25 17:03:21.067596447 +0000 UTC m=+0.395517999 container attach bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 12:03:21 np0005535469 podman[383828]: 2025-11-25 17:03:21.069414797 +0000 UTC m=+0.397336359 container died bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:03:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fa33f7921770edc5a1bfef3073060f2a652ae9260e283cdc0d79ffb6ae10944d-merged.mount: Deactivated successfully.
Nov 25 12:03:21 np0005535469 podman[383828]: 2025-11-25 17:03:21.257854605 +0000 UTC m=+0.585776147 container remove bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:03:21 np0005535469 systemd[1]: libpod-conmon-bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29.scope: Deactivated successfully.
Nov 25 12:03:21 np0005535469 podman[383869]: 2025-11-25 17:03:21.447720033 +0000 UTC m=+0.037894840 container create 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:03:21 np0005535469 nova_compute[254092]: 2025-11-25 17:03:21.482 254096 DEBUG nova.network.neutron [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated VIF entry in instance network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:03:21 np0005535469 nova_compute[254092]: 2025-11-25 17:03:21.484 254096 DEBUG nova.network.neutron [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:03:21 np0005535469 nova_compute[254092]: 2025-11-25 17:03:21.501 254096 DEBUG oslo_concurrency.lockutils [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:03:21 np0005535469 systemd[1]: Started libpod-conmon-172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e.scope.
Nov 25 12:03:21 np0005535469 podman[383869]: 2025-11-25 17:03:21.430866811 +0000 UTC m=+0.021041648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:03:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:21 np0005535469 podman[383869]: 2025-11-25 17:03:21.556123867 +0000 UTC m=+0.146298714 container init 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:03:21 np0005535469 podman[383869]: 2025-11-25 17:03:21.565439052 +0000 UTC m=+0.155613859 container start 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:03:21 np0005535469 podman[383869]: 2025-11-25 17:03:21.569437372 +0000 UTC m=+0.159612189 container attach 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:03:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 200 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 4.1 MiB/s wr, 123 op/s
Nov 25 12:03:22 np0005535469 nova_compute[254092]: 2025-11-25 17:03:22.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:03:22 np0005535469 competent_sanderson[383885]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:03:22 np0005535469 competent_sanderson[383885]: --> relative data size: 1.0
Nov 25 12:03:22 np0005535469 competent_sanderson[383885]: --> All data devices are unavailable
Nov 25 12:03:22 np0005535469 systemd[1]: libpod-172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e.scope: Deactivated successfully.
Nov 25 12:03:22 np0005535469 podman[383869]: 2025-11-25 17:03:22.610001533 +0000 UTC m=+1.200176350 container died 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:03:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759-merged.mount: Deactivated successfully.
Nov 25 12:03:22 np0005535469 podman[383869]: 2025-11-25 17:03:22.660916919 +0000 UTC m=+1.251091736 container remove 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:03:22 np0005535469 systemd[1]: libpod-conmon-172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e.scope: Deactivated successfully.
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.363177831 +0000 UTC m=+0.088175440 container create f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.299202656 +0000 UTC m=+0.024200295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:03:23 np0005535469 systemd[1]: Started libpod-conmon-f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227.scope.
Nov 25 12:03:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.483930173 +0000 UTC m=+0.208927852 container init f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.493577477 +0000 UTC m=+0.218575096 container start f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:03:23 np0005535469 condescending_bhabha[384083]: 167 167
Nov 25 12:03:23 np0005535469 systemd[1]: libpod-f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227.scope: Deactivated successfully.
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.503092809 +0000 UTC m=+0.228090458 container attach f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.503777047 +0000 UTC m=+0.228774716 container died f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:03:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4d51c7aa2a31891c73042b3eeadb06f1ec7fa91d7004bbad0ddc9fd85f39454d-merged.mount: Deactivated successfully.
Nov 25 12:03:23 np0005535469 podman[384067]: 2025-11-25 17:03:23.565311955 +0000 UTC m=+0.290309564 container remove f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:03:23 np0005535469 systemd[1]: libpod-conmon-f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227.scope: Deactivated successfully.
Nov 25 12:03:23 np0005535469 podman[384109]: 2025-11-25 17:03:23.776141597 +0000 UTC m=+0.045752815 container create f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:03:23 np0005535469 systemd[1]: Started libpod-conmon-f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5.scope.
Nov 25 12:03:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:23 np0005535469 podman[384109]: 2025-11-25 17:03:23.754895525 +0000 UTC m=+0.024506763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:03:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:23 np0005535469 podman[384109]: 2025-11-25 17:03:23.882506695 +0000 UTC m=+0.152117913 container init f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:03:23 np0005535469 podman[384109]: 2025-11-25 17:03:23.890289838 +0000 UTC m=+0.159901056 container start f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:03:23 np0005535469 podman[384109]: 2025-11-25 17:03:23.907394708 +0000 UTC m=+0.177005926 container attach f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:03:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 200 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.9 MiB/s wr, 107 op/s
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]: {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:    "0": [
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:        {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "devices": [
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "/dev/loop3"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            ],
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_name": "ceph_lv0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_size": "21470642176",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "name": "ceph_lv0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "tags": {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cluster_name": "ceph",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.crush_device_class": "",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.encrypted": "0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osd_id": "0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.type": "block",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.vdo": "0"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            },
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "type": "block",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "vg_name": "ceph_vg0"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:        }
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:    ],
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:    "1": [
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:        {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "devices": [
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "/dev/loop4"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            ],
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_name": "ceph_lv1",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_size": "21470642176",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "name": "ceph_lv1",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "tags": {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cluster_name": "ceph",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.crush_device_class": "",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.encrypted": "0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osd_id": "1",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.type": "block",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.vdo": "0"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            },
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "type": "block",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "vg_name": "ceph_vg1"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:        }
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:    ],
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:    "2": [
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:        {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "devices": [
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "/dev/loop5"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            ],
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_name": "ceph_lv2",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_size": "21470642176",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "name": "ceph_lv2",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "tags": {
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.cluster_name": "ceph",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.crush_device_class": "",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.encrypted": "0",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osd_id": "2",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.type": "block",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:                "ceph.vdo": "0"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            },
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "type": "block",
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:            "vg_name": "ceph_vg2"
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:        }
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]:    ]
Nov 25 12:03:24 np0005535469 nifty_goodall[384126]: }
Nov 25 12:03:24 np0005535469 systemd[1]: libpod-f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5.scope: Deactivated successfully.
Nov 25 12:03:24 np0005535469 podman[384109]: 2025-11-25 17:03:24.646089849 +0000 UTC m=+0.915701077 container died f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:03:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8-merged.mount: Deactivated successfully.
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:25 np0005535469 podman[384109]: 2025-11-25 17:03:25.025370251 +0000 UTC m=+1.294981479 container remove f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:03:25 np0005535469 systemd[1]: libpod-conmon-f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5.scope: Deactivated successfully.
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.613897723 +0000 UTC m=+0.042362803 container create e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:03:25 np0005535469 systemd[1]: Started libpod-conmon-e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db.scope.
Nov 25 12:03:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.591688204 +0000 UTC m=+0.020153304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.691836541 +0000 UTC m=+0.120301671 container init e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.699306786 +0000 UTC m=+0.127771856 container start e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.702034631 +0000 UTC m=+0.130499761 container attach e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:03:25 np0005535469 strange_mclean[384308]: 167 167
Nov 25 12:03:25 np0005535469 systemd[1]: libpod-e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db.scope: Deactivated successfully.
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.704395976 +0000 UTC m=+0.132861066 container died e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:03:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-67d079414dd73fc1facc06b94ddc08e778364628bf97c05b23e268a8641def9c-merged.mount: Deactivated successfully.
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:25 np0005535469 podman[384291]: 2025-11-25 17:03:25.746202743 +0000 UTC m=+0.174667823 container remove e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:03:25 np0005535469 systemd[1]: libpod-conmon-e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db.scope: Deactivated successfully.
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.819 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.821 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.836 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.904 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.905 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.914 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:03:25 np0005535469 nova_compute[254092]: 2025-11-25 17:03:25.915 254096 INFO nova.compute.claims [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:03:25 np0005535469 podman[384331]: 2025-11-25 17:03:25.915573878 +0000 UTC m=+0.044647716 container create afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:03:25 np0005535469 systemd[1]: Started libpod-conmon-afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f.scope.
Nov 25 12:03:25 np0005535469 podman[384331]: 2025-11-25 17:03:25.897254245 +0000 UTC m=+0.026328103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:03:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:03:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:03:26 np0005535469 podman[384331]: 2025-11-25 17:03:26.016050374 +0000 UTC m=+0.145124212 container init afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:03:26 np0005535469 podman[384331]: 2025-11-25 17:03:26.028772152 +0000 UTC m=+0.157846020 container start afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:03:26 np0005535469 podman[384331]: 2025-11-25 17:03:26.032146736 +0000 UTC m=+0.161220574 container attach afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.042 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.9 MiB/s wr, 108 op/s
Nov 25 12:03:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:03:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922475283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.483 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.490 254096 DEBUG nova.compute.provider_tree [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.639 254096 DEBUG nova.scheduler.client.report [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.673 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.674 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.739 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.739 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.769 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.803 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.922 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.923 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.923 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Creating image(s)#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.949 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.973 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:26 np0005535469 fervent_pare[384348]: {
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "osd_id": 1,
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "type": "bluestore"
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:    },
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "osd_id": 2,
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "type": "bluestore"
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:    },
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "osd_id": 0,
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:        "type": "bluestore"
Nov 25 12:03:26 np0005535469 fervent_pare[384348]:    }
Nov 25 12:03:26 np0005535469 fervent_pare[384348]: }
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.994 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:26 np0005535469 nova_compute[254092]: 2025-11-25 17:03:26.998 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:27 np0005535469 systemd[1]: libpod-afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f.scope: Deactivated successfully.
Nov 25 12:03:27 np0005535469 podman[384331]: 2025-11-25 17:03:27.022412346 +0000 UTC m=+1.151486184 container died afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.072 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.126 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.127 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.127 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.184 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.189 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc02b95b-290f-441d-9b04-957187d0f885_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6-merged.mount: Deactivated successfully.
Nov 25 12:03:27 np0005535469 nova_compute[254092]: 2025-11-25 17:03:27.231 254096 DEBUG nova.policy [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:03:27 np0005535469 podman[384331]: 2025-11-25 17:03:27.252354683 +0000 UTC m=+1.381428521 container remove afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:03:27 np0005535469 systemd[1]: libpod-conmon-afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f.scope: Deactivated successfully.
Nov 25 12:03:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:03:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:03:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:03:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:03:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b10c8dc0-b501-452f-a6f6-23e959a2a05a does not exist
Nov 25 12:03:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d4570110-d28d-407a-be2a-fc2346339b91 does not exist
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.163 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc02b95b-290f-441d-9b04-957187d0f885_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:03:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.235 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:03:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.342 254096 DEBUG nova.objects.instance [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid dc02b95b-290f-441d-9b04-957187d0f885 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.356 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.357 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Ensure instance console log exists: /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.358 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.359 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.359 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:28 np0005535469 nova_compute[254092]: 2025-11-25 17:03:28.660 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Successfully created port: 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:03:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.568 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Successfully updated port: 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.586 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.586 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.586 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.668 254096 DEBUG nova.compute.manager [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.668 254096 DEBUG nova.compute.manager [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing instance network info cache due to event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.669 254096 DEBUG oslo_concurrency.lockutils [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:29 np0005535469 nova_compute[254092]: 2025-11-25 17:03:29.759 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 208 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.3 MiB/s wr, 61 op/s
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.896 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.897 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:30 np0005535469 nova_compute[254092]: 2025-11-25 17:03:30.914 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.010 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.011 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.021 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.022 254096 INFO nova.compute.claims [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.240 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:03:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190977656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.719 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.724 254096 DEBUG nova.compute.provider_tree [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.734 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.739 254096 DEBUG nova.scheduler.client.report [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.752 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.752 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance network_info: |[{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.752 254096 DEBUG oslo_concurrency.lockutils [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.753 254096 DEBUG nova.network.neutron [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.755 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start _get_guest_xml network_info=[{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.757 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.758 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.765 254096 WARNING nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.773 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.773 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.777 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.777 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.777 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.780 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.780 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.780 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.783 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.818 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.819 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.841 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.859 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.953 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.954 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.955 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Creating image(s)#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.975 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:31 np0005535469 nova_compute[254092]: 2025-11-25 17:03:31.997 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.016 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.020 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.077 254096 DEBUG nova.policy [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.101 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.102 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.102 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.103 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.123 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.126 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:03:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048479177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.257 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.289 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.293 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 246 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 3.2 MiB/s wr, 76 op/s
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.683 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.762 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:03:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:03:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143354871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.857 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.860 254096 DEBUG nova.virt.libvirt.vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1163677477',display_name='tempest-TestNetworkBasicOps-server-1163677477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1163677477',id=118,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCxvG1kVT8rj7rz3z1G56tHhBp7SuNAkkLNuv/fKw8COC5VTTwy6Y98cgqFvLD/dMqbJMho2xMxaZGbc9qY1ZWoyzmuMb54S1JTVXIQHKR2yNSi3mjSBeFFRS3qe8724pw==',key_name='tempest-TestNetworkBasicOps-1314414902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-akatof1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=dc02b95b-290f-441d-9b04-957187d0f885,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.860 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.862 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.864 254096 DEBUG nova.objects.instance [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc02b95b-290f-441d-9b04-957187d0f885 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.877 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <uuid>dc02b95b-290f-441d-9b04-957187d0f885</uuid>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <name>instance-00000076</name>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1163677477</nova:name>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:03:31</nova:creationTime>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <nova:port uuid="9fb8b6ba-4534-4320-a88f-3da6cdc1eb28">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <entry name="serial">dc02b95b-290f-441d-9b04-957187d0f885</entry>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <entry name="uuid">dc02b95b-290f-441d-9b04-957187d0f885</entry>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/dc02b95b-290f-441d-9b04-957187d0f885_disk">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/dc02b95b-290f-441d-9b04-957187d0f885_disk.config">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:c6:a2:cd"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <target dev="tap9fb8b6ba-45"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/console.log" append="off"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:03:32 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:03:32 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:03:32 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:03:32 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.877 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Preparing to wait for external event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.878 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.878 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.879 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.880 254096 DEBUG nova.virt.libvirt.vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1163677477',display_name='tempest-TestNetworkBasicOps-server-1163677477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1163677477',id=118,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCxvG1kVT8rj7rz3z1G56tHhBp7SuNAkkLNuv/fKw8COC5VTTwy6Y98cgqFvLD/dMqbJMho2xMxaZGbc9qY1ZWoyzmuMb54S1JTVXIQHKR2yNSi3mjSBeFFRS3qe8724pw==',key_name='tempest-TestNetworkBasicOps-1314414902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-akatof1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=dc02b95b-290f-441d-9b04-957187d0f885,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.880 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.881 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.882 254096 DEBUG os_vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.884 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.884 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.890 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fb8b6ba-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.891 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fb8b6ba-45, col_values=(('external_ids', {'iface-id': '9fb8b6ba-4534-4320-a88f-3da6cdc1eb28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:a2:cd', 'vm-uuid': 'dc02b95b-290f-441d-9b04-957187d0f885'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:32 np0005535469 NetworkManager[48891]: <info>  [1764090212.8942] manager: (tap9fb8b6ba-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.900 254096 INFO os_vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45')#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.991 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.991 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.992 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:c6:a2:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:03:32 np0005535469 nova_compute[254092]: 2025-11-25 17:03:32.992 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Using config drive#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.010 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.083 254096 DEBUG nova.objects.instance [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.096 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.096 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Ensure instance console log exists: /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.097 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.097 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.097 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.791 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Successfully created port: 29c53aaa-054f-442e-8673-22d0d7fc5f72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.821 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Creating config drive at /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config#033[00m
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.826 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyn3pllm1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:33 np0005535469 nova_compute[254092]: 2025-11-25 17:03:33.989 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyn3pllm1" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.034 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.041 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config dc02b95b-290f-441d-9b04-957187d0f885_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.106 254096 DEBUG nova.network.neutron [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updated VIF entry in instance network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.108 254096 DEBUG nova.network.neutron [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.148 254096 DEBUG oslo_concurrency.lockutils [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 246 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.362 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config dc02b95b-290f-441d-9b04-957187d0f885_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.362 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deleting local config drive /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config because it was imported into RBD.#033[00m
Nov 25 12:03:34 np0005535469 kernel: tap9fb8b6ba-45: entered promiscuous mode
Nov 25 12:03:34 np0005535469 NetworkManager[48891]: <info>  [1764090214.4299] manager: (tap9fb8b6ba-45): new Tun device (/org/freedesktop/NetworkManager/Devices/505)
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:34Z|01217|binding|INFO|Claiming lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for this chassis.
Nov 25 12:03:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:34Z|01218|binding|INFO|9fb8b6ba-4534-4320-a88f-3da6cdc1eb28: Claiming fa:16:3e:c6:a2:cd 10.100.0.8
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.480 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:a2:cd 10.100.0.8'], port_security=['fa:16:3e:c6:a2:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dc02b95b-290f-441d-9b04-957187d0f885', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1a0257b-ea79-4747-8da2-0da26a4a2e35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.481 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce bound to our chassis#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.483 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:34Z|01219|binding|INFO|Setting lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 ovn-installed in OVS
Nov 25 12:03:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:34Z|01220|binding|INFO|Setting lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 up in Southbound
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1991711a-2726-4f35-bec0-4483a2a7fff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:34 np0005535469 systemd-machined[216343]: New machine qemu-150-instance-00000076.
Nov 25 12:03:34 np0005535469 systemd-udevd[384958]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.528 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.528 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:34 np0005535469 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Nov 25 12:03:34 np0005535469 NetworkManager[48891]: <info>  [1764090214.5361] device (tap9fb8b6ba-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:03:34 np0005535469 NetworkManager[48891]: <info>  [1764090214.5370] device (tap9fb8b6ba-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.547 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed3c6c6-cccc-43cc-9279-efaa2a2656ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.549 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecc8477-b54c-4970-a107-df215b701c57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.593 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a993a-ed57-4b24-aec9-69cb446c6c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.616 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[43a6fff5-4159-48c9-a238-80ec900e4854]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384971, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.640 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c77decd-40df-4383-b3f4-fb436aabbfc4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663545, 'tstamp': 663545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384972, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663547, 'tstamp': 663547}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384972, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.643 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.646 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd54b59f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.646 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd54b59f-80, col_values=(('external_ids', {'iface-id': 'd6826306-6b17-44f3-a972-7d5089250616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:34 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.782 254096 DEBUG nova.compute.manager [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.783 254096 DEBUG oslo_concurrency.lockutils [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.784 254096 DEBUG oslo_concurrency.lockutils [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.784 254096 DEBUG oslo_concurrency.lockutils [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.784 254096 DEBUG nova.compute.manager [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Processing event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.789 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Successfully updated port: 29c53aaa-054f-442e-8673-22d0d7fc5f72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.814 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.815 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.815 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.895 254096 DEBUG nova.compute.manager [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-changed-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.896 254096 DEBUG nova.compute.manager [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Refreshing instance network info cache due to event network-changed-29c53aaa-054f-442e-8673-22d0d7fc5f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.900 254096 DEBUG oslo_concurrency.lockutils [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:03:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544051314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.970 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.977 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090214.9771938, dc02b95b-290f-441d-9b04-957187d0f885 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.978 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Started (Lifecycle Event)#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.981 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.985 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.990 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.993 254096 INFO nova.virt.libvirt.driver [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance spawned successfully.#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.993 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:03:34 np0005535469 nova_compute[254092]: 2025-11-25 17:03:34.997 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.001 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.012 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.012 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.013 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.013 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.013 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.014 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.017 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.017 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090214.9807823, dc02b95b-290f-441d-9b04-957187d0f885 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.017 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.074 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.077 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090214.990103, dc02b95b-290f-441d-9b04-957187d0f885 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.077 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.107 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.109 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.109 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.110 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.113 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.114 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.117 254096 INFO nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 8.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.118 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.119 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.119 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.146 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.195 254096 INFO nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 9.32 seconds to build instance.#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.228 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.330 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.331 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3296MB free_disk=59.876468658447266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.331 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.331 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73a187fa-5479-4191-bd44-757c3840137a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 12deb2c6-31fb-4186-940b-8131e43ea3f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance dc02b95b-290f-441d-9b04-957187d0f885 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.396 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.482 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:03:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121133011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.934 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.938 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.952 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.979 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.979 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:35 np0005535469 nova_compute[254092]: 2025-11-25 17:03:35.987 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updating instance_info_cache with network_info: [{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.003 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.003 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance network_info: |[{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.004 254096 DEBUG oslo_concurrency.lockutils [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.004 254096 DEBUG nova.network.neutron [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Refreshing network info cache for port 29c53aaa-054f-442e-8673-22d0d7fc5f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.006 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start _get_guest_xml network_info=[{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.011 254096 WARNING nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.017 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.017 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.023 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.023 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
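The topology search traced above (flavor/image limits of 0:0:0, a ceiling of 65536 per dimension, and a single possible topology for 1 vCPU) can be approximated with a short sketch. This is a simplified illustration of the factorization `nova.virt.hardware._get_possible_cpu_topologies` performs, not the actual implementation; the real code also honours flavor/image preferences and NUMA constraints.

```python
def possible_cpu_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product is vcpus.

    Simplified sketch of the search visible in the log; limits default to
    the 65536 ceilings reported by get_cpu_topology_constraints.
    """
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                if sockets * cores * threads == vcpus:
                    topologies.append((sockets, cores, threads))
    return topologies

# A 1-vCPU guest (m1.nano) yields exactly one topology, as in the log.
print(possible_cpu_topologies(1))  # [(1, 1, 1)]
```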
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.029 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Nov 25 12:03:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:03:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109160244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.550 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
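The `ceph mon dump --format=json` subprocess above is how Nova's RBD backend discovers monitor endpoints before wiring RBD disks into the guest. A hedged sketch of parsing that JSON, assuming the output carries a top-level `mons` list whose entries expose an `addr` of the form `ip:port/nonce` (field names vary slightly across Ceph releases, so treat this as an illustration, not a schema):

```python
import json

def mon_hosts_and_ports(mon_dump_json):
    """Extract (host, port) pairs from `ceph mon dump --format=json` output.

    Assumption: each entry in the "mons" list has an "addr" field like
    "192.168.122.100:6789/0"; the trailing "/nonce" is dropped.
    """
    mons = json.loads(mon_dump_json).get("mons", [])
    pairs = []
    for mon in mons:
        addr = mon["addr"].split("/")[0]      # drop the "/nonce" suffix
        host, _, port = addr.rpartition(":")  # split off the trailing port
        pairs.append((host, port))
    return pairs

# Synthetic sample mirroring the single-mon cluster seen in this log.
sample = ('{"epoch": 1, "mons": '
          '[{"name": "compute-0", "addr": "192.168.122.100:6789/0"}]}')
print(mon_hosts_and_ports(sample))  # [('192.168.122.100', '6789')]
```

The extracted pair matches the `<host name="192.168.122.100" port="6789"/>` element that later appears in the generated domain XML.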
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.582 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.586 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.861 254096 DEBUG nova.compute.manager [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.861 254096 DEBUG oslo_concurrency.lockutils [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.862 254096 DEBUG oslo_concurrency.lockutils [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.862 254096 DEBUG oslo_concurrency.lockutils [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.862 254096 DEBUG nova.compute.manager [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] No waiting events found dispatching network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:03:36 np0005535469 nova_compute[254092]: 2025-11-25 17:03:36.863 254096 WARNING nova.compute.manager [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received unexpected event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for instance with vm_state active and task_state None.#033[00m
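The lock/pop/warning sequence above (for instance dc02b95b...) fires when a `network-vif-plugged` event arrives but no waiter was registered for it. A minimal, hypothetical stand-in for the pattern in `nova.compute.manager.InstanceEvents` (illustration only, not the real class):

```python
import threading

class InstanceEventsSketch:
    """Illustrative stand-in for nova.compute.manager.InstanceEvents."""

    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}  # (instance_uuid, event_name) -> waiter token

    def prepare(self, instance_uuid, event_name):
        # Corresponds to "Preparing to wait for external event ...".
        with self._lock:
            self._waiters[(instance_uuid, event_name)] = object()

    def pop(self, instance_uuid, event_name):
        # Under the events lock, pop the waiter if one exists. Returning
        # None is the "No waiting events found ... Received unexpected
        # event" path seen in the log.
        with self._lock:
            return self._waiters.pop((instance_uuid, event_name), None)

ev = InstanceEventsSketch()
print(ev.pop("dc02b95b", "network-vif-plugged"))  # None -> warning path
ev.prepare("dc02b95b", "network-vif-plugged")
print(ev.pop("dc02b95b", "network-vif-plugged") is not None)  # True
```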
Nov 25 12:03:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:03:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315091351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.021 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.022 254096 DEBUG nova.virt.libvirt.vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=119,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-4a5cegzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:31Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.023 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.024 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
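The conversion traced above maps Nova's VIF dict onto an os-vif `VIFOpenVSwitch` object (`details.bridge_name` → `bridge_name`, `devname` → `vif_name`, `details.port_filter` → `has_traffic_filtering`). A sketch of that mapping using a hypothetical plain-Python stand-in rather than the real os-vif versioned objects:

```python
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitchSketch:
    """Illustrative stand-in for os-vif's VIFOpenVSwitch object."""
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool

def nova_vif_to_osvif(vif):
    # Field mapping inferred from the Converting/Converted pair in the log.
    return VIFOpenVSwitchSketch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=vif["details"]["bridge_name"],
        vif_name=vif["devname"],
        has_traffic_filtering=vif["details"]["port_filter"],
        active=vif["active"],
    )

vif = {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72",
       "address": "fa:16:3e:da:b2:95",
       "devname": "tap29c53aaa-05",
       "active": False,
       "details": {"bridge_name": "br-int", "port_filter": True}}
print(nova_vif_to_osvif(vif).vif_name)  # tap29c53aaa-05
```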
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.025 254096 DEBUG nova.objects.instance [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.046 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <uuid>4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</uuid>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <name>instance-00000077</name>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422</nova:name>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:03:36</nova:creationTime>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <nova:port uuid="29c53aaa-054f-442e-8673-22d0d7fc5f72">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <entry name="serial">4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</entry>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <entry name="uuid">4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</entry>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:da:b2:95"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <target dev="tap29c53aaa-05"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/console.log" append="off"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:03:37 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:03:37 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:03:37 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:03:37 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
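The domain XML emitted by `_get_guest_xml` above can be inspected programmatically; a sketch using only the standard library to pull out the fields libvirt will act on (the RBD disk source and the VIF tap device), run against a trimmed copy of the XML from this log:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the domain XML shown in the log above.
DOMAIN_XML = """
<domain type="kvm">
  <uuid>4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</uuid>
  <memory>131072</memory>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <target dev="tap29c53aaa-05"/>
    </interface>
  </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
rbd_image = root.find("./devices/disk/source").get("name")
tap_dev = root.find("./devices/interface/target").get("dev")
print(rbd_image)  # vms/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk
print(tap_dev)    # tap29c53aaa-05
```

The RBD image name ties the disk back to the `rbd_utils` calls earlier in the trace, and the tap device name matches the `devname` in the VIF dict.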
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.052 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Preparing to wait for external event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.052 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.052 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.053 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.053 254096 DEBUG nova.virt.libvirt.vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=119,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-4a5cegzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:31Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.053 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.054 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.054 254096 DEBUG os_vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.056 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.056 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.060 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29c53aaa-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.060 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29c53aaa-05, col_values=(('external_ids', {'iface-id': '29c53aaa-054f-442e-8673-22d0d7fc5f72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:b2:95', 'vm-uuid': '4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:37 np0005535469 NetworkManager[48891]: <info>  [1764090217.0628] manager: (tap29c53aaa-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.068 254096 INFO os_vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05')#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.116 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.116 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.116 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:da:b2:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.117 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Using config drive#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.137 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.695 254096 DEBUG nova.compute.manager [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.696 254096 DEBUG nova.compute.manager [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing instance network info cache due to event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.696 254096 DEBUG oslo_concurrency.lockutils [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.697 254096 DEBUG oslo_concurrency.lockutils [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.697 254096 DEBUG nova.network.neutron [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.698 254096 DEBUG nova.network.neutron [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updated VIF entry in instance network info cache for port 29c53aaa-054f-442e-8673-22d0d7fc5f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.699 254096 DEBUG nova.network.neutron [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updating instance_info_cache with network_info: [{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.726 254096 DEBUG oslo_concurrency.lockutils [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.729 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Creating config drive at /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.735 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7utnzcdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.878 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7utnzcdy" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.902 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.905 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:37 np0005535469 nova_compute[254092]: 2025-11-25 17:03:37.979 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.832 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.927s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.833 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deleting local config drive /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config because it was imported into RBD.#033[00m
Nov 25 12:03:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:38 np0005535469 kernel: tap29c53aaa-05: entered promiscuous mode
Nov 25 12:03:38 np0005535469 NetworkManager[48891]: <info>  [1764090218.8759] manager: (tap29c53aaa-05): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Nov 25 12:03:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:38Z|01221|binding|INFO|Claiming lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 for this chassis.
Nov 25 12:03:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:38Z|01222|binding|INFO|29c53aaa-054f-442e-8673-22d0d7fc5f72: Claiming fa:16:3e:da:b2:95 10.100.0.10
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.896 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:b2:95 10.100.0.10'], port_security=['fa:16:3e:da:b2:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=29c53aaa-054f-442e-8673-22d0d7fc5f72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.898 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 29c53aaa-054f-442e-8673-22d0d7fc5f72 in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 bound to our chassis#033[00m
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.899 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d8d234-0a41-496f-82c7-0c98aa4761b8#033[00m
Nov 25 12:03:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:38Z|01223|binding|INFO|Setting lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 ovn-installed in OVS
Nov 25 12:03:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:38Z|01224|binding|INFO|Setting lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 up in Southbound
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:38 np0005535469 systemd-machined[216343]: New machine qemu-151-instance-00000077.
Nov 25 12:03:38 np0005535469 nova_compute[254092]: 2025-11-25 17:03:38.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.920 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abf38955-7d17-4526-8b53-903f2b358026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:38 np0005535469 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Nov 25 12:03:38 np0005535469 systemd-udevd[385197]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:03:38 np0005535469 NetworkManager[48891]: <info>  [1764090218.9469] device (tap29c53aaa-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:03:38 np0005535469 NetworkManager[48891]: <info>  [1764090218.9482] device (tap29c53aaa-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.956 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9368d79c-51fb-4349-9cae-9ff762169d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.959 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[94afebe8-1349-4705-b15b-ea6c014acc6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.994 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5d03e426-1720-463c-af21-a23809ab832e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.012 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[674b036b-e5e4-4913-a549-c2dc6f437649]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385208, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.024 254096 DEBUG nova.network.neutron [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updated VIF entry in instance network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.024 254096 DEBUG nova.network.neutron [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24feb474-7a3b-481d-b296-c39a43fd415e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663959, 'tstamp': 663959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385210, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663963, 'tstamp': 663963}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385210, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.026 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d8d234-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d8d234-00, col_values=(('external_ids', {'iface-id': '81151907-1dea-47e9-9a64-99f3b4cd12fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.031 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.116 254096 DEBUG oslo_concurrency.lockutils [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.211 254096 DEBUG nova.compute.manager [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.211 254096 DEBUG oslo_concurrency.lockutils [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.212 254096 DEBUG oslo_concurrency.lockutils [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.212 254096 DEBUG oslo_concurrency.lockutils [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:39 np0005535469 nova_compute[254092]: 2025-11-25 17:03:39.212 254096 DEBUG nova.compute.manager [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Processing event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:03:40
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.273 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.274 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090220.272911, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Started (Lifecycle Event)#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.278 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.281 254096 INFO nova.virt.libvirt.driver [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance spawned successfully.#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.282 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.299 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.305 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.306 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.306 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.307 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.308 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.308 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.313 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.353 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.354 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090220.275255, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.355 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.380 254096 INFO nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 8.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.381 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.382 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.393 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090220.278339, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.394 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.419 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.423 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.456 254096 INFO nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 9.48 seconds to build instance.#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.471 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.665 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.666 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.666 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:03:40 np0005535469 nova_compute[254092]: 2025-11-25 17:03:40.666 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:03:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.88 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2402 writes, 10K keys, 2402 commit groups, 1.0 writes per commit group, ingest: 12.11 MB, 0.02 MB/s#012Interval WAL: 2402 writes, 888 syncs, 2.70 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:03:41 np0005535469 nova_compute[254092]: 2025-11-25 17:03:41.319 254096 DEBUG nova.compute.manager [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:41 np0005535469 nova_compute[254092]: 2025-11-25 17:03:41.319 254096 DEBUG oslo_concurrency.lockutils [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:41 np0005535469 nova_compute[254092]: 2025-11-25 17:03:41.319 254096 DEBUG oslo_concurrency.lockutils [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:41 np0005535469 nova_compute[254092]: 2025-11-25 17:03:41.320 254096 DEBUG oslo_concurrency.lockutils [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:41 np0005535469 nova_compute[254092]: 2025-11-25 17:03:41.320 254096 DEBUG nova.compute.manager [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] No waiting events found dispatching network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:03:41 np0005535469 nova_compute[254092]: 2025-11-25 17:03:41.320 254096 WARNING nova.compute.manager [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received unexpected event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:03:42 np0005535469 nova_compute[254092]: 2025-11-25 17:03:42.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 135 op/s
Nov 25 12:03:43 np0005535469 podman[385254]: 2025-11-25 17:03:43.657549217 +0000 UTC m=+0.070143195 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:03:43 np0005535469 podman[385253]: 2025-11-25 17:03:43.677505134 +0000 UTC m=+0.089517527 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:03:43 np0005535469 podman[385255]: 2025-11-25 17:03:43.707272801 +0000 UTC m=+0.105582278 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 12:03:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:43 np0005535469 nova_compute[254092]: 2025-11-25 17:03:43.957 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:43 np0005535469 nova_compute[254092]: 2025-11-25 17:03:43.968 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:03:43 np0005535469 nova_compute[254092]: 2025-11-25 17:03:43.968 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:03:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 12:03:44 np0005535469 nova_compute[254092]: 2025-11-25 17:03:44.947 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:45 np0005535469 nova_compute[254092]: 2025-11-25 17:03:45.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:45 np0005535469 nova_compute[254092]: 2025-11-25 17:03:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 25 12:03:47 np0005535469 nova_compute[254092]: 2025-11-25 17:03:47.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 129 op/s
Nov 25 12:03:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:50Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:a2:cd 10.100.0.8
Nov 25 12:03:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:50Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:a2:cd 10.100.0.8
Nov 25 12:03:50 np0005535469 nova_compute[254092]: 2025-11-25 17:03:50.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:03:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.2 total, 600.0 interval#012Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2826 writes, 11K keys, 2826 commit groups, 1.0 writes per commit group, ingest: 15.40 MB, 0.03 MB/s#012Interval WAL: 2826 writes, 994 syncs, 2.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:03:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 313 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 946 KiB/s wr, 137 op/s
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002392791153871199 of space, bias 1.0, pg target 0.7178373461613596 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:03:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:03:52 np0005535469 nova_compute[254092]: 2025-11-25 17:03:52.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 326 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Nov 25 12:03:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 326 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 25 12:03:55 np0005535469 nova_compute[254092]: 2025-11-25 17:03:55.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:03:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2759510647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:03:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:03:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2759510647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:03:55 np0005535469 nova_compute[254092]: 2025-11-25 17:03:55.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:03:55 np0005535469 nova_compute[254092]: 2025-11-25 17:03:55.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:03:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 347 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Nov 25 12:03:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:56Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:b2:95 10.100.0.10
Nov 25 12:03:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:56Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:b2:95 10.100.0.10
Nov 25 12:03:56 np0005535469 nova_compute[254092]: 2025-11-25 17:03:56.713 254096 INFO nova.compute.manager [None req-12fe7c42-fa57-4a08-ae91-b950af7f9d4c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Get console output#033[00m
Nov 25 12:03:56 np0005535469 nova_compute[254092]: 2025-11-25 17:03:56.724 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.019 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.020 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.022 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.022 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.022 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.023 254096 INFO nova.compute.manager [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Terminating instance#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.024 254096 DEBUG nova.compute.manager [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 kernel: tap9fb8b6ba-45 (unregistering): left promiscuous mode
Nov 25 12:03:57 np0005535469 NetworkManager[48891]: <info>  [1764090237.0957] device (tap9fb8b6ba-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.115 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:57Z|01225|binding|INFO|Releasing lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 from this chassis (sb_readonly=0)
Nov 25 12:03:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:57Z|01226|binding|INFO|Setting lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 down in Southbound
Nov 25 12:03:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:03:57Z|01227|binding|INFO|Removing iface tap9fb8b6ba-45 ovn-installed in OVS
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.128 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:a2:cd 10.100.0.8'], port_security=['fa:16:3e:c6:a2:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dc02b95b-290f-441d-9b04-957187d0f885', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1a0257b-ea79-4747-8da2-0da26a4a2e35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.130 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce unbound from our chassis#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.131 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.155 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d9ea93-c158-42d4-ba63-23e3e9aefd92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:57 np0005535469 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 25 12:03:57 np0005535469 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 13.991s CPU time.
Nov 25 12:03:57 np0005535469 systemd-machined[216343]: Machine qemu-150-instance-00000076 terminated.
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.189 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[84bea833-03fc-468b-8fa9-69c76049a9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.192 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[01e63c89-be24-4e31-80df-c066770258a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.222 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bb36f38f-9ce9-46d0-9a14-7f2d4fc37a4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b940767-8bdc-4ed3-82a5-42035816dbe4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385326, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.262 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9da0db21-7f35-4e99-a0b5-f40148be7939]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663545, 'tstamp': 663545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385331, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663547, 'tstamp': 663547}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385331, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.263 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.266 254096 INFO nova.virt.libvirt.driver [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance destroyed successfully.#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.267 254096 DEBUG nova.objects.instance [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid dc02b95b-290f-441d-9b04-957187d0f885 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.271 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd54b59f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.277 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.278 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd54b59f-80, col_values=(('external_ids', {'iface-id': 'd6826306-6b17-44f3-a972-7d5089250616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.278 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.279 254096 DEBUG nova.virt.libvirt.vif [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:03:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1163677477',display_name='tempest-TestNetworkBasicOps-server-1163677477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1163677477',id=118,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCxvG1kVT8rj7rz3z1G56tHhBp7SuNAkkLNuv/fKw8COC5VTTwy6Y98cgqFvLD/dMqbJMho2xMxaZGbc9qY1ZWoyzmuMb54S1JTVXIQHKR2yNSi3mjSBeFFRS3qe8724pw==',key_name='tempest-TestNetworkBasicOps-1314414902',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:03:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-akatof1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:03:35Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=dc02b95b-290f-441d-9b04-957187d0f885,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.280 254096 DEBUG nova.network.os_vif_util [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.281 254096 DEBUG nova.network.os_vif_util [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.282 254096 DEBUG os_vif [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.286 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fb8b6ba-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.293 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.295 254096 INFO os_vif [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45')#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG nova.compute.manager [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-unplugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG oslo_concurrency.lockutils [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG oslo_concurrency.lockutils [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG oslo_concurrency.lockutils [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.655 254096 DEBUG nova.compute.manager [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] No waiting events found dispatching network-vif-unplugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.655 254096 DEBUG nova.compute.manager [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-unplugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.716 254096 INFO nova.virt.libvirt.driver [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deleting instance files /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885_del#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.718 254096 INFO nova.virt.libvirt.driver [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deletion of /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885_del complete#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.761 254096 INFO nova.compute.manager [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.762 254096 DEBUG oslo.service.loopingcall [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.762 254096 DEBUG nova.compute.manager [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:03:57 np0005535469 nova_compute[254092]: 2025-11-25 17:03:57.762 254096 DEBUG nova.network.neutron [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:03:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 347 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 4.2 MiB/s wr, 105 op/s
Nov 25 12:03:58 np0005535469 nova_compute[254092]: 2025-11-25 17:03:58.676 254096 DEBUG nova.network.neutron [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:03:58 np0005535469 nova_compute[254092]: 2025-11-25 17:03:58.693 254096 INFO nova.compute.manager [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 25 12:03:58 np0005535469 nova_compute[254092]: 2025-11-25 17:03:58.744 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:58 np0005535469 nova_compute[254092]: 2025-11-25 17:03:58.744 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:58 np0005535469 nova_compute[254092]: 2025-11-25 17:03:58.785 254096 DEBUG nova.compute.manager [req-c7d3b3f7-c52b-4be3-a8c0-3303ecb25179 req-11907621-a425-4ce0-8ba1-ef51b149efc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-deleted-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:03:58 np0005535469 nova_compute[254092]: 2025-11-25 17:03:58.867 254096 DEBUG oslo_concurrency.processutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:03:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:03:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856563403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.336 254096 DEBUG oslo_concurrency.processutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.346 254096 DEBUG nova.compute.provider_tree [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.364 254096 DEBUG nova.scheduler.client.report [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.389 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.422 254096 INFO nova.scheduler.client.report [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance dc02b95b-290f-441d-9b04-957187d0f885#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.507 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.761 254096 DEBUG nova.compute.manager [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.762 254096 DEBUG oslo_concurrency.lockutils [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.762 254096 DEBUG oslo_concurrency.lockutils [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.763 254096 DEBUG oslo_concurrency.lockutils [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.763 254096 DEBUG nova.compute.manager [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] No waiting events found dispatching network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:03:59 np0005535469 nova_compute[254092]: 2025-11-25 17:03:59.764 254096 WARNING nova.compute.manager [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received unexpected event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:04:00 np0005535469 nova_compute[254092]: 2025-11-25 17:04:00.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 320 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 4.2 MiB/s wr, 130 op/s
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.304 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.305 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.305 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.306 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.307 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.310 254096 INFO nova.compute.manager [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Terminating instance#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.313 254096 DEBUG nova.compute.manager [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:04:01 np0005535469 kernel: tapb0512c6a-fb (unregistering): left promiscuous mode
Nov 25 12:04:01 np0005535469 NetworkManager[48891]: <info>  [1764090241.3839] device (tapb0512c6a-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:04:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:01Z|01228|binding|INFO|Releasing lport b0512c6a-fbc4-4639-8508-e6493d18bd3a from this chassis (sb_readonly=0)
Nov 25 12:04:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:01Z|01229|binding|INFO|Setting lport b0512c6a-fbc4-4639-8508-e6493d18bd3a down in Southbound
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:01Z|01230|binding|INFO|Removing iface tapb0512c6a-fb ovn-installed in OVS
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.402 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b5:b4 10.100.0.9'], port_security=['fa:16:3e:d7:b5:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73a187fa-5479-4191-bd44-757c3840137a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05b9ee3b-4cbe-486c-b386-9a71c1c7373a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0512c6a-fbc4-4639-8508-e6493d18bd3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.403 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0512c6a-fbc4-4639-8508-e6493d18bd3a in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce unbound from our chassis#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.405 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.406 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73088e11-a1ee-40c6-9349-7c45ff21d1af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.407 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce namespace which is not needed anymore#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.416 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 25 12:04:01 np0005535469 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Consumed 15.256s CPU time.
Nov 25 12:04:01 np0005535469 systemd-machined[216343]: Machine qemu-148-instance-00000074 terminated.
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.564 254096 INFO nova.virt.libvirt.driver [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance destroyed successfully.#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.564 254096 DEBUG nova.objects.instance [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:01 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : haproxy version is 2.8.14-c23fe91
Nov 25 12:04:01 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : path to executable is /usr/sbin/haproxy
Nov 25 12:04:01 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [WARNING]  (383205) : Exiting Master process...
Nov 25 12:04:01 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [WARNING]  (383205) : Exiting Master process...
Nov 25 12:04:01 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [ALERT]    (383205) : Current worker (383207) exited with code 143 (Terminated)
Nov 25 12:04:01 np0005535469 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [WARNING]  (383205) : All workers exited. Exiting... (0)
Nov 25 12:04:01 np0005535469 systemd[1]: libpod-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434.scope: Deactivated successfully.
Nov 25 12:04:01 np0005535469 podman[385400]: 2025-11-25 17:04:01.579016709 +0000 UTC m=+0.055697339 container died 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.579 254096 DEBUG nova.virt.libvirt.vif [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:02:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-365721910',display_name='tempest-TestNetworkBasicOps-server-365721910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-365721910',id=116,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8fF/LQWIeXhdQwSKSLAErz4jvUPWKjw4L1GZb+ASjyhqtlkSjIxhUDwHlkKBv0qHWvbUkdKHkhYl9JuEVV2LarQZvIoe1QUEsDx05YVfl0dpyKfWcSmCOAyR6fZFjEqA==',key_name='tempest-TestNetworkBasicOps-1895434798',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:02:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5bv1kvw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:02:58Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=73a187fa-5479-4191-bd44-757c3840137a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.580 254096 DEBUG nova.network.os_vif_util [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.580 254096 DEBUG nova.network.os_vif_util [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.581 254096 DEBUG os_vif [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.583 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0512c6a-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.589 254096 INFO os_vif [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb')#033[00m
Nov 25 12:04:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434-userdata-shm.mount: Deactivated successfully.
Nov 25 12:04:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e893ac8ad73aa3d08cc59807e1b727426905ed7800c4aa1c5d54d074cb489584-merged.mount: Deactivated successfully.
Nov 25 12:04:01 np0005535469 podman[385400]: 2025-11-25 17:04:01.633444421 +0000 UTC m=+0.110125031 container cleanup 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:04:01 np0005535469 systemd[1]: libpod-conmon-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434.scope: Deactivated successfully.
Nov 25 12:04:01 np0005535469 podman[385456]: 2025-11-25 17:04:01.719690497 +0000 UTC m=+0.059101262 container remove 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5af08c-4c65-4ee0-adaa-a2c1cc47e4c0]: (4, ('Tue Nov 25 05:04:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce (1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434)\n1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434\nTue Nov 25 05:04:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce (1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434)\n1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.731 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3f0b04-280a-444a-b3ea-dbf2cf782dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.734 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 kernel: tapcd54b59f-80: left promiscuous mode
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.755 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffcf4c2-3fc3-4980-98cc-8f09cc22cec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce149889-9a03-41b8-9567-73dbd591466f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.772 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cae6b5f9-ba2b-4b7e-826c-1ff13d35c5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05feeacc-913d-4bda-9208-4bfddd077957]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663527, 'reachable_time': 28426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385475, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 systemd[1]: run-netns-ovnmeta\x2dcd54b59f\x2d8568\x2d4ad8\x2da75d\x2d4fcdad6f8dce.mount: Deactivated successfully.
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.797 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:04:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.797 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a4571518-4fc3-4986-90eb-0c552d28493e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.911 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-unplugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.913 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] No waiting events found dispatching network-vif-unplugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-unplugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] No waiting events found dispatching network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 WARNING nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received unexpected event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.960 254096 INFO nova.virt.libvirt.driver [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deleting instance files /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a_del#033[00m
Nov 25 12:04:01 np0005535469 nova_compute[254092]: 2025-11-25 17:04:01.962 254096 INFO nova.virt.libvirt.driver [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deletion of /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a_del complete#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.021 254096 INFO nova.compute.manager [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.022 254096 DEBUG oslo.service.loopingcall [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.022 254096 DEBUG nova.compute.manager [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.022 254096 DEBUG nova.network.neutron [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:04:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 279 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.4 MiB/s wr, 147 op/s
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.801 254096 DEBUG nova.network.neutron [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.821 254096 INFO nova.compute.manager [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 0.80 seconds to deallocate network for instance.#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.871 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.872 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.886 254096 DEBUG nova.compute.manager [req-0dafa6df-9821-41c6-a14a-3bff348c6a28 req-aa840026-3e9b-4eac-800c-7f2212c7762a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-deleted-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.912 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.912 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.912 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.913 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.913 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.914 254096 INFO nova.compute.manager [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Terminating instance#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.915 254096 DEBUG nova.compute.manager [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:04:02 np0005535469 nova_compute[254092]: 2025-11-25 17:04:02.947 254096 DEBUG oslo_concurrency.processutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:02 np0005535469 kernel: tap29c53aaa-05 (unregistering): left promiscuous mode
Nov 25 12:04:02 np0005535469 NetworkManager[48891]: <info>  [1764090242.9914] device (tap29c53aaa-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:04:02 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:02Z|01231|binding|INFO|Releasing lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 from this chassis (sb_readonly=0)
Nov 25 12:04:03 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:02Z|01232|binding|INFO|Setting lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 down in Southbound
Nov 25 12:04:03 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:02Z|01233|binding|INFO|Removing iface tap29c53aaa-05 ovn-installed in OVS
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.006 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:b2:95 10.100.0.10'], port_security=['fa:16:3e:da:b2:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=29c53aaa-054f-442e-8673-22d0d7fc5f72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.008 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 29c53aaa-054f-442e-8673-22d0d7fc5f72 in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 unbound from our chassis#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.010 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d8d234-0a41-496f-82c7-0c98aa4761b8#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.042 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ef6ff1-0f3d-4a74-9b55-cf408418da6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.082 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab2c2d-9180-4857-8110-c95cadddb7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:03 np0005535469 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.085 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[02b44783-b0d7-404e-864e-42f0172a9651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:03 np0005535469 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 17.727s CPU time.
Nov 25 12:04:03 np0005535469 systemd-machined[216343]: Machine qemu-151-instance-00000077 terminated.
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.121 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1cab360f-0d09-47e7-954f-a75a4993c727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.140 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5ec2cb-4fac-45c7-8af1-bc0571da2eac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385489, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.154 254096 INFO nova.virt.libvirt.driver [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance destroyed successfully.#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.155 254096 DEBUG nova.objects.instance [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.161 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b365abbd-dd9a-43f0-8ac5-e6b79d889dfb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663959, 'tstamp': 663959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385515, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663963, 'tstamp': 663963}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385515, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.163 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.166 254096 DEBUG nova.virt.libvirt.vif [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=119,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:03:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-4a5cegzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:03:40Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.167 254096 DEBUG nova.network.os_vif_util [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.168 254096 DEBUG nova.network.os_vif_util [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.168 254096 DEBUG os_vif [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29c53aaa-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.170 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d8d234-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.171 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.171 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d8d234-00, col_values=(('external_ids', {'iface-id': '81151907-1dea-47e9-9a64-99f3b4cd12fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.172 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.175 254096 INFO os_vif [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05')#033[00m
Nov 25 12:04:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752532862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.460 254096 DEBUG oslo_concurrency.processutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.467 254096 DEBUG nova.compute.provider_tree [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.488 254096 DEBUG nova.scheduler.client.report [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.496 254096 INFO nova.virt.libvirt.driver [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deleting instance files /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_del#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.496 254096 INFO nova.virt.libvirt.driver [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deletion of /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_del complete#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.526 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.567 254096 INFO nova.compute.manager [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.567 254096 DEBUG oslo.service.loopingcall [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.568 254096 DEBUG nova.compute.manager [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.568 254096 DEBUG nova.network.neutron [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.572 254096 INFO nova.scheduler.client.report [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 73a187fa-5479-4191-bd44-757c3840137a#033[00m
Nov 25 12:04:03 np0005535469 nova_compute[254092]: 2025-11-25 17:04:03.648 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-unplugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] No waiting events found dispatching network-vif-unplugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-unplugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] No waiting events found dispatching network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:04 np0005535469 nova_compute[254092]: 2025-11-25 17:04:04.012 254096 WARNING nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received unexpected event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:04:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 279 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.561 254096 DEBUG nova.network.neutron [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.578 254096 INFO nova.compute.manager [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 2.01 seconds to deallocate network for instance.#033[00m
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.656 254096 DEBUG nova.compute.manager [req-0f43215a-db42-496b-87e7-0ac2a0034c23 req-c26d33b8-3599-4fed-9944-abb3651dac19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-deleted-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.660 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.660 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:05 np0005535469 nova_compute[254092]: 2025-11-25 17:04:05.726 254096 DEBUG oslo_concurrency.processutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596410790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:06 np0005535469 nova_compute[254092]: 2025-11-25 17:04:06.192 254096 DEBUG oslo_concurrency.processutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:06 np0005535469 nova_compute[254092]: 2025-11-25 17:04:06.199 254096 DEBUG nova.compute.provider_tree [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:04:06 np0005535469 nova_compute[254092]: 2025-11-25 17:04:06.216 254096 DEBUG nova.scheduler.client.report [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:04:06 np0005535469 nova_compute[254092]: 2025-11-25 17:04:06.244 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:06 np0005535469 nova_compute[254092]: 2025-11-25 17:04:06.273 254096 INFO nova.scheduler.client.report [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3#033[00m
Nov 25 12:04:06 np0005535469 nova_compute[254092]: 2025-11-25 17:04:06.344 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 121 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 151 op/s
Nov 25 12:04:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:04:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.5 total, 600.0 interval#012Cumulative writes: 32K writes, 127K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.85 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2559 writes, 10K keys, 2559 commit groups, 1.0 writes per commit group, ingest: 12.29 MB, 0.02 MB/s#012Interval WAL: 2559 writes, 986 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.755 254096 DEBUG nova.compute.manager [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.756 254096 DEBUG nova.compute.manager [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing instance network info cache due to event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.756 254096 DEBUG oslo_concurrency.lockutils [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.757 254096 DEBUG oslo_concurrency.lockutils [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.758 254096 DEBUG nova.network.neutron [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.831 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.832 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.833 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.834 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.834 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.836 254096 INFO nova.compute.manager [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Terminating instance#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.838 254096 DEBUG nova.compute.manager [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:04:07 np0005535469 kernel: tap142675a5-3c (unregistering): left promiscuous mode
Nov 25 12:04:07 np0005535469 NetworkManager[48891]: <info>  [1764090247.8904] device (tap142675a5-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:04:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:07Z|01234|binding|INFO|Releasing lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf from this chassis (sb_readonly=0)
Nov 25 12:04:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:07Z|01235|binding|INFO|Setting lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf down in Southbound
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:07Z|01236|binding|INFO|Removing iface tap142675a5-3c ovn-installed in OVS
Nov 25 12:04:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.909 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:64:18 10.100.0.3'], port_security=['fa:16:3e:4d:64:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '12deb2c6-31fb-4186-940b-8131e43ea3f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d070883-7c27-4b8a-ba4e-1b0864814a05 3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=142675a5-3c37-4e43-9f80-e8fedd63f3cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:04:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.910 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 142675a5-3c37-4e43-9f80-e8fedd63f3cf in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 unbound from our chassis#033[00m
Nov 25 12:04:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.912 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51d8d234-0a41-496f-82c7-0c98aa4761b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:04:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.912 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93cd8707-c374-41fb-8440-1c09bde72b37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.913 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 namespace which is not needed anymore#033[00m
Nov 25 12:04:07 np0005535469 nova_compute[254092]: 2025-11-25 17:04:07.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:07 np0005535469 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 25 12:04:07 np0005535469 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Consumed 15.976s CPU time.
Nov 25 12:04:07 np0005535469 systemd-machined[216343]: Machine qemu-149-instance-00000075 terminated.
Nov 25 12:04:08 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : haproxy version is 2.8.14-c23fe91
Nov 25 12:04:08 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : path to executable is /usr/sbin/haproxy
Nov 25 12:04:08 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [WARNING]  (383480) : Exiting Master process...
Nov 25 12:04:08 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [WARNING]  (383480) : Exiting Master process...
Nov 25 12:04:08 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [ALERT]    (383480) : Current worker (383482) exited with code 143 (Terminated)
Nov 25 12:04:08 np0005535469 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [WARNING]  (383480) : All workers exited. Exiting... (0)
Nov 25 12:04:08 np0005535469 systemd[1]: libpod-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087.scope: Deactivated successfully.
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 podman[385589]: 2025-11-25 17:04:08.06836162 +0000 UTC m=+0.046432974 container died f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.085 254096 INFO nova.virt.libvirt.driver [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance destroyed successfully.#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.086 254096 DEBUG nova.objects.instance [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid 12deb2c6-31fb-4186-940b-8131e43ea3f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.098 254096 DEBUG nova.virt.libvirt.vif [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:02:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=117,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:03:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-ez50uxgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:03:03Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=12deb2c6-31fb-4186-940b-8131e43ea3f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.098 254096 DEBUG nova.network.os_vif_util [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.099 254096 DEBUG nova.network.os_vif_util [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.099 254096 DEBUG os_vif [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087-userdata-shm.mount: Deactivated successfully.
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.101 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap142675a5-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ee1fa6406befdaaca4f1d07638e3ce32c9871c9601e12c9f35106bd16f6c1c0d-merged.mount: Deactivated successfully.
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.108 254096 INFO os_vif [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c')#033[00m
Nov 25 12:04:08 np0005535469 podman[385589]: 2025-11-25 17:04:08.110245359 +0000 UTC m=+0.088316713 container cleanup f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:08 np0005535469 systemd[1]: libpod-conmon-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087.scope: Deactivated successfully.
Nov 25 12:04:08 np0005535469 podman[385637]: 2025-11-25 17:04:08.175730444 +0000 UTC m=+0.042667531 container remove f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f83ad36e-c97f-49cd-995f-da44fc30ec87]: (4, ('Tue Nov 25 05:04:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 (f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087)\nf95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087\nTue Nov 25 05:04:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 (f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087)\nf95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[318d8709-191b-4d35-8f4e-a64debb87cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.184 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 kernel: tap51d8d234-00: left promiscuous mode
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2665aae5-24a0-482d-a555-9d1204adbfd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1fce61-0e05-4685-a069-d318656a31e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.210 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abafb875-2bef-4a9d-8fa1-7efbc64a08ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.227 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0647207-75da-43e5-ab67-46b2777b51dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663937, 'reachable_time': 27179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385663, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 systemd[1]: run-netns-ovnmeta\x2d51d8d234\x2d0a41\x2d496f\x2d82c7\x2d0c98aa4761b8.mount: Deactivated successfully.
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.231 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:04:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.231 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2356c90e-68b6-4323-a1a7-6e7593a72008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 121 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 129 KiB/s wr, 109 op/s
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.516 254096 INFO nova.virt.libvirt.driver [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deleting instance files /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8_del#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.517 254096 INFO nova.virt.libvirt.driver [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deletion of /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8_del complete#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.581 254096 INFO nova.compute.manager [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.582 254096 DEBUG oslo.service.loopingcall [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.583 254096 DEBUG nova.compute.manager [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:04:08 np0005535469 nova_compute[254092]: 2025-11-25 17:04:08.583 254096 DEBUG nova.network.neutron [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:04:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:09.510 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.511 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:09.511 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.866 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-unplugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.866 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.867 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.867 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.867 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] No waiting events found dispatching network-vif-unplugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.868 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-unplugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.868 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.868 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.869 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.869 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.869 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] No waiting events found dispatching network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.870 254096 WARNING nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received unexpected event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.890 254096 DEBUG nova.network.neutron [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.901 254096 INFO nova.compute.manager [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 1.32 seconds to deallocate network for instance.#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.941 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.942 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.955 254096 DEBUG nova.compute.manager [req-8d5b3268-1fcb-45fb-8bf5-4521a53ab1d5 req-5c58149b-e29f-4fb8-a344-2d2ddc157894 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-deleted-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:09 np0005535469 nova_compute[254092]: 2025-11-25 17:04:09.996 254096 DEBUG oslo_concurrency.processutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.114 254096 DEBUG nova.network.neutron [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updated VIF entry in instance network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.115 254096 DEBUG nova.network.neutron [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.127 254096 DEBUG oslo_concurrency.lockutils [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 98 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 129 KiB/s wr, 114 op/s
Nov 25 12:04:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199238265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.473 254096 DEBUG oslo_concurrency.processutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.478 254096 DEBUG nova.compute.provider_tree [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.503 254096 DEBUG nova.scheduler.client.report [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.530 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.558 254096 INFO nova.scheduler.client.report [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance 12deb2c6-31fb-4186-940b-8131e43ea3f8#033[00m
Nov 25 12:04:10 np0005535469 nova_compute[254092]: 2025-11-25 17:04:10.625 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:12 np0005535469 nova_compute[254092]: 2025-11-25 17:04:12.265 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090237.2632968, dc02b95b-290f-441d-9b04-957187d0f885 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:12 np0005535469 nova_compute[254092]: 2025-11-25 17:04:12.266 254096 INFO nova.compute.manager [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:04:12 np0005535469 nova_compute[254092]: 2025-11-25 17:04:12.286 254096 DEBUG nova.compute.manager [None req-3b53a3c4-ef3b-4fd1-9613-e4d084d00f18 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 115 KiB/s wr, 112 op/s
Nov 25 12:04:13 np0005535469 nova_compute[254092]: 2025-11-25 17:04:13.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:13.641 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 35 KiB/s wr, 86 op/s
Nov 25 12:04:14 np0005535469 podman[385688]: 2025-11-25 17:04:14.70897685 +0000 UTC m=+0.089790113 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 12:04:14 np0005535469 podman[385689]: 2025-11-25 17:04:14.721919215 +0000 UTC m=+0.100464407 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:04:14 np0005535469 podman[385690]: 2025-11-25 17:04:14.760170814 +0000 UTC m=+0.133013619 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:04:15 np0005535469 nova_compute[254092]: 2025-11-25 17:04:15.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 35 KiB/s wr, 86 op/s
Nov 25 12:04:16 np0005535469 nova_compute[254092]: 2025-11-25 17:04:16.562 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090241.561269, 73a187fa-5479-4191-bd44-757c3840137a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:16 np0005535469 nova_compute[254092]: 2025-11-25 17:04:16.563 254096 INFO nova.compute.manager [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:04:16 np0005535469 nova_compute[254092]: 2025-11-25 17:04:16.581 254096 DEBUG nova.compute.manager [None req-d049721e-1c0c-439b-bd08-b9fe24fb59a7 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:18 np0005535469 nova_compute[254092]: 2025-11-25 17:04:18.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:18 np0005535469 nova_compute[254092]: 2025-11-25 17:04:18.150 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090243.1498055, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:18 np0005535469 nova_compute[254092]: 2025-11-25 17:04:18.150 254096 INFO nova.compute.manager [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:04:18 np0005535469 nova_compute[254092]: 2025-11-25 17:04:18.168 254096 DEBUG nova.compute.manager [None req-5d545722-ca76-42e6-b412-de15cae2adad - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 25 12:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:19.513 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:20 np0005535469 nova_compute[254092]: 2025-11-25 17:04:20.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 25 12:04:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 25 12:04:23 np0005535469 nova_compute[254092]: 2025-11-25 17:04:23.083 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090248.0821123, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:23 np0005535469 nova_compute[254092]: 2025-11-25 17:04:23.083 254096 INFO nova.compute.manager [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:04:23 np0005535469 nova_compute[254092]: 2025-11-25 17:04:23.101 254096 DEBUG nova.compute.manager [None req-2a6fb1d7-0be5-4fc5-8c9b-8ee6770e1536 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:23 np0005535469 nova_compute[254092]: 2025-11-25 17:04:23.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:04:24 np0005535469 nova_compute[254092]: 2025-11-25 17:04:24.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:25 np0005535469 nova_compute[254092]: 2025-11-25 17:04:25.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 12:04:25 np0005535469 nova_compute[254092]: 2025-11-25 17:04:25.887 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:25 np0005535469 nova_compute[254092]: 2025-11-25 17:04:25.887 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:25 np0005535469 nova_compute[254092]: 2025-11-25 17:04:25.906 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.029 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.030 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.038 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.038 254096 INFO nova.compute.claims [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.205 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814858778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.632 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.639 254096 DEBUG nova.compute.provider_tree [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.652 254096 DEBUG nova.scheduler.client.report [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.695 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.696 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.744 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.745 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.797 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.816 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.914 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.915 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.916 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Creating image(s)#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.939 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.961 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.981 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:26 np0005535469 nova_compute[254092]: 2025-11-25 17:04:26.984 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.056 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.057 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.057 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.058 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.078 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.082 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:27 np0005535469 nova_compute[254092]: 2025-11-25 17:04:27.240 254096 DEBUG nova.policy [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:04:28 np0005535469 nova_compute[254092]: 2025-11-25 17:04:28.110 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:04:28 np0005535469 nova_compute[254092]: 2025-11-25 17:04:28.247 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:28 np0005535469 nova_compute[254092]: 2025-11-25 17:04:28.298 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:04:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fcdad622-b63a-4f24-8a25-4a4c496e0cd6 does not exist
Nov 25 12:04:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fad3c683-0a64-46ea-b453-6984bd41ad20 does not exist
Nov 25 12:04:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2e3b12b3-43aa-4150-a580-ddecd7303524 does not exist
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:04:28 np0005535469 nova_compute[254092]: 2025-11-25 17:04:28.613 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully created port: a13b6cf4-602d-4af3-b369-9dfa273e1514 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:04:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:28 np0005535469 podman[386189]: 2025-11-25 17:04:28.953913123 +0000 UTC m=+0.087960154 container create d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:04:28 np0005535469 podman[386189]: 2025-11-25 17:04:28.88743997 +0000 UTC m=+0.021487021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:04:29 np0005535469 systemd[1]: Started libpod-conmon-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope.
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.057 254096 DEBUG nova.objects.instance [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.068 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.068 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Ensure instance console log exists: /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.069 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.069 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.069 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:29 np0005535469 podman[386189]: 2025-11-25 17:04:29.10950222 +0000 UTC m=+0.243549271 container init d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:04:29 np0005535469 podman[386189]: 2025-11-25 17:04:29.115668429 +0000 UTC m=+0.249715460 container start d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:04:29 np0005535469 musing_boyd[386220]: 167 167
Nov 25 12:04:29 np0005535469 systemd[1]: libpod-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope: Deactivated successfully.
Nov 25 12:04:29 np0005535469 conmon[386220]: conmon d45e1a3758a2b6a4e9b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope/container/memory.events
Nov 25 12:04:29 np0005535469 podman[386189]: 2025-11-25 17:04:29.169525827 +0000 UTC m=+0.303572868 container attach d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:29 np0005535469 podman[386189]: 2025-11-25 17:04:29.169949318 +0000 UTC m=+0.303996349 container died d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.233 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3f6be5949ab66033ef739e525333872c0fa1649fd950a9317beb08e20bdc14eb-merged.mount: Deactivated successfully.
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.235 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.250 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:04:29 np0005535469 podman[386189]: 2025-11-25 17:04:29.252917154 +0000 UTC m=+0.386964185 container remove d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:04:29 np0005535469 systemd[1]: libpod-conmon-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope: Deactivated successfully.
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.314 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.314 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.321 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.321 254096 INFO nova.compute.claims [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.408 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:29 np0005535469 podman[386250]: 2025-11-25 17:04:29.412154551 +0000 UTC m=+0.041739206 container create c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.442 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully updated port: a13b6cf4-602d-4af3-b369-9dfa273e1514 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:04:29 np0005535469 systemd[1]: Started libpod-conmon-c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959.scope.
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.455 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.456 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:04:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:29 np0005535469 podman[386250]: 2025-11-25 17:04:29.39352003 +0000 UTC m=+0.023104715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.521 254096 DEBUG nova.compute.manager [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.521 254096 DEBUG nova.compute.manager [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.521 254096 DEBUG oslo_concurrency.lockutils [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:29 np0005535469 nova_compute[254092]: 2025-11-25 17:04:29.590 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:04:29 np0005535469 podman[386250]: 2025-11-25 17:04:29.90684365 +0000 UTC m=+0.536428345 container init c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:04:29 np0005535469 podman[386250]: 2025-11-25 17:04:29.915010424 +0000 UTC m=+0.544595099 container start c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:04:29 np0005535469 podman[386250]: 2025-11-25 17:04:29.948194153 +0000 UTC m=+0.577778848 container attach c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351854010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.220 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.226 254096 DEBUG nova.compute.provider_tree [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.244 254096 DEBUG nova.scheduler.client.report [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.305 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.305 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:04:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 419 KiB/s wr, 11 op/s
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.397 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.397 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.439 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.481 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.627 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.628 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.628 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Creating image(s)#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.646 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.664 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.683 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.686 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.722 254096 DEBUG nova.policy [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.759 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.760 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.761 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.761 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.780 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:30 np0005535469 nova_compute[254092]: 2025-11-25 17:04:30.783 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c5d6f631-cec2-431d-b476-feafa21e4f80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:31 np0005535469 strange_satoshi[386268]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:04:31 np0005535469 strange_satoshi[386268]: --> relative data size: 1.0
Nov 25 12:04:31 np0005535469 strange_satoshi[386268]: --> All data devices are unavailable
Nov 25 12:04:31 np0005535469 systemd[1]: libpod-c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959.scope: Deactivated successfully.
Nov 25 12:04:31 np0005535469 podman[386250]: 2025-11-25 17:04:31.08041811 +0000 UTC m=+1.710002795 container died c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:04:31 np0005535469 nova_compute[254092]: 2025-11-25 17:04:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b-merged.mount: Deactivated successfully.
Nov 25 12:04:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 88 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:04:32 np0005535469 podman[386250]: 2025-11-25 17:04:32.451067873 +0000 UTC m=+3.080652538 container remove c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:04:32 np0005535469 systemd[1]: libpod-conmon-c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959.scope: Deactivated successfully.
Nov 25 12:04:32 np0005535469 nova_compute[254092]: 2025-11-25 17:04:32.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:32 np0005535469 nova_compute[254092]: 2025-11-25 17:04:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:33 np0005535469 podman[386562]: 2025-11-25 17:04:33.093275237 +0000 UTC m=+0.021711697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:04:33 np0005535469 podman[386562]: 2025-11-25 17:04:33.391754834 +0000 UTC m=+0.320191274 container create 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.581 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.623 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c5d6f631-cec2-431d-b476-feafa21e4f80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.839s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:33 np0005535469 systemd[1]: Started libpod-conmon-643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4.scope.
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.657 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.657 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance network_info: |[{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.658 254096 DEBUG oslo_concurrency.lockutils [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.658 254096 DEBUG nova.network.neutron [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.661 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start _get_guest_xml network_info=[{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:04:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.701 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:04:33 np0005535469 podman[386562]: 2025-11-25 17:04:33.743613414 +0000 UTC m=+0.672049894 container init 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:04:33 np0005535469 podman[386562]: 2025-11-25 17:04:33.755713207 +0000 UTC m=+0.684149667 container start 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.761 254096 WARNING nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:04:33 np0005535469 tender_saha[386593]: 167 167
Nov 25 12:04:33 np0005535469 systemd[1]: libpod-643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4.scope: Deactivated successfully.
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.769 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.770 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.773 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.773 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.773 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.776 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.776 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.776 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:04:33 np0005535469 nova_compute[254092]: 2025-11-25 17:04:33.778 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:33 np0005535469 podman[386562]: 2025-11-25 17:04:33.954315074 +0000 UTC m=+0.882751564 container attach 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:04:33 np0005535469 podman[386562]: 2025-11-25 17:04:33.955114916 +0000 UTC m=+0.883551396 container died 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:04:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ecc096ffeb811a5ed747939091b0d93c399811f9feb458a098f279b4412b9018-merged.mount: Deactivated successfully.
Nov 25 12:04:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:04:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270052297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.295 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.325 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.331 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 88 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:04:34 np0005535469 podman[386562]: 2025-11-25 17:04:34.37235748 +0000 UTC m=+1.300793930 container remove 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:34 np0005535469 systemd[1]: libpod-conmon-643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4.scope: Deactivated successfully.
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.421 254096 DEBUG nova.objects.instance [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid c5d6f631-cec2-431d-b476-feafa21e4f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.436 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.437 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Ensure instance console log exists: /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.437 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.438 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.438 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.459 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Successfully created port: b68643ca-2301-486a-984d-43fc41d1f773 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:04:34 np0005535469 podman[386736]: 2025-11-25 17:04:34.537781207 +0000 UTC m=+0.023390463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:04:34 np0005535469 podman[386736]: 2025-11-25 17:04:34.740447006 +0000 UTC m=+0.226056292 container create e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 25 12:04:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:04:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722532707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.842 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.843 254096 DEBUG nova.virt.libvirt.vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.844 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.845 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.846 254096 DEBUG nova.objects.instance [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.859 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <name>instance-00000078</name>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:04:33</nova:creationTime>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <entry name="serial">fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <entry name="uuid">fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:7c:d6:7a"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <target dev="tapa13b6cf4-60"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log" append="off"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:04:34 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:04:34 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:04:34 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:04:34 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:04:34 np0005535469 systemd[1]: Started libpod-conmon-e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3.scope.
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.860 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Preparing to wait for external event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.861 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.862 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.862 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.864 254096 DEBUG nova.virt.libvirt.vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.864 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.865 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.865 254096 DEBUG os_vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.868 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.874 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa13b6cf4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.874 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa13b6cf4-60, col_values=(('external_ids', {'iface-id': 'a13b6cf4-602d-4af3-b369-9dfa273e1514', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:d6:7a', 'vm-uuid': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:34 np0005535469 NetworkManager[48891]: <info>  [1764090274.8775] manager: (tapa13b6cf4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:34 np0005535469 nova_compute[254092]: 2025-11-25 17:04:34.887 254096 INFO os_vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60')#033[00m
Nov 25 12:04:34 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.175 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.176 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.176 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:7c:d6:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.177 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Using config drive#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.208 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:35 np0005535469 podman[386736]: 2025-11-25 17:04:35.231800264 +0000 UTC m=+0.717409530 container init e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:35 np0005535469 podman[386736]: 2025-11-25 17:04:35.241975872 +0000 UTC m=+0.727585118 container start e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:04:35 np0005535469 podman[386736]: 2025-11-25 17:04:35.392776198 +0000 UTC m=+0.878385434 container attach e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.518 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.834 254096 DEBUG nova.network.neutron [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.835 254096 DEBUG nova.network.neutron [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.850 254096 DEBUG oslo_concurrency.lockutils [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.887 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Creating config drive at /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config#033[00m
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.892 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3i5zl3m1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537005772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:35 np0005535469 nova_compute[254092]: 2025-11-25 17:04:35.995 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:36 np0005535469 elastic_newton[386754]: {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:    "0": [
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:        {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "devices": [
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "/dev/loop3"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            ],
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_name": "ceph_lv0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_size": "21470642176",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "name": "ceph_lv0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "tags": {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cluster_name": "ceph",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.crush_device_class": "",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.encrypted": "0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osd_id": "0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.type": "block",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.vdo": "0"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            },
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "type": "block",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "vg_name": "ceph_vg0"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:        }
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:    ],
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:    "1": [
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:        {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "devices": [
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "/dev/loop4"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            ],
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_name": "ceph_lv1",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_size": "21470642176",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "name": "ceph_lv1",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "tags": {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cluster_name": "ceph",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.crush_device_class": "",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.encrypted": "0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osd_id": "1",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.type": "block",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.vdo": "0"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            },
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "type": "block",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "vg_name": "ceph_vg1"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:        }
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:    ],
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:    "2": [
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:        {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "devices": [
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "/dev/loop5"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            ],
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_name": "ceph_lv2",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_size": "21470642176",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "name": "ceph_lv2",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "tags": {
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.cluster_name": "ceph",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.crush_device_class": "",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.encrypted": "0",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osd_id": "2",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.type": "block",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:                "ceph.vdo": "0"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            },
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "type": "block",
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:            "vg_name": "ceph_vg2"
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:        }
Nov 25 12:04:36 np0005535469 elastic_newton[386754]:    ]
Nov 25 12:04:36 np0005535469 elastic_newton[386754]: }
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.051 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3i5zl3m1" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:36 np0005535469 systemd[1]: libpod-e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3.scope: Deactivated successfully.
Nov 25 12:04:36 np0005535469 podman[386736]: 2025-11-25 17:04:36.065262243 +0000 UTC m=+1.550871469 container died e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.087 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.091 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d-merged.mount: Deactivated successfully.
Nov 25 12:04:36 np0005535469 podman[386736]: 2025-11-25 17:04:36.158946023 +0000 UTC m=+1.644555269 container remove e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.167 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.168 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:04:36 np0005535469 systemd[1]: libpod-conmon-e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3.scope: Deactivated successfully.
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.210 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Successfully updated port: b68643ca-2301-486a-984d-43fc41d1f773 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.226 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.226 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.227 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.321 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.322 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deleting local config drive /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config because it was imported into RBD.#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.354 254096 DEBUG nova.compute.manager [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-changed-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.354 254096 DEBUG nova.compute.manager [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing instance network info cache due to event network-changed-b68643ca-2301-486a-984d-43fc41d1f773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.355 254096 DEBUG oslo_concurrency.lockutils [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 12:04:36 np0005535469 kernel: tapa13b6cf4-60: entered promiscuous mode
Nov 25 12:04:36 np0005535469 NetworkManager[48891]: <info>  [1764090276.3878] manager: (tapa13b6cf4-60): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Nov 25 12:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:36Z|01237|binding|INFO|Claiming lport a13b6cf4-602d-4af3-b369-9dfa273e1514 for this chassis.
Nov 25 12:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:36Z|01238|binding|INFO|a13b6cf4-602d-4af3-b369-9dfa273e1514: Claiming fa:16:3e:7c:d6:7a 10.100.0.11
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.399 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.406 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3686MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.408 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.408 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.413 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:04:36 np0005535469 systemd-udevd[386943]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.424 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d6:7a 10.100.0.11'], port_security=['fa:16:3e:7c:d6:7a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e177dffd-fd87-489e-a59d-1d241fe7a148', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad26d7dc-a577-438b-b143-107c43340ab4, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a13b6cf4-602d-4af3-b369-9dfa273e1514) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a13b6cf4-602d-4af3-b369-9dfa273e1514 in datapath 136c69a7-c4f8-40a2-be13-7ef82b7b3709 bound to our chassis#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.426 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 136c69a7-c4f8-40a2-be13-7ef82b7b3709#033[00m
Nov 25 12:04:36 np0005535469 systemd-machined[216343]: New machine qemu-152-instance-00000078.
Nov 25 12:04:36 np0005535469 NetworkManager[48891]: <info>  [1764090276.4321] device (tapa13b6cf4-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:04:36 np0005535469 NetworkManager[48891]: <info>  [1764090276.4329] device (tapa13b6cf4-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.438 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4452e7b8-eb4b-4c8a-ba34-bca5d04435bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.439 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap136c69a7-c1 in ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.441 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap136c69a7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10b36087-3979-459e-ac2a-d2c8d2aae6cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54a56681-8766-4eab-bb67-9e9f25bc2965]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.451 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e38b11-9096-43af-9f3f-bf9c4b1dc71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 systemd[1]: Started Virtual Machine qemu-152-instance-00000078.
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:36Z|01239|binding|INFO|Setting lport a13b6cf4-602d-4af3-b369-9dfa273e1514 ovn-installed in OVS
Nov 25 12:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:36Z|01240|binding|INFO|Setting lport a13b6cf4-602d-4af3-b369-9dfa273e1514 up in Southbound
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.473 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[adcb4468-76cf-452a-9faa-ac8f3c3954e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance c5d6f631-cec2-431d-b476-feafa21e4f80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.506 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4644bd-419b-4c74-845c-5a45f1e61bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 NetworkManager[48891]: <info>  [1764090276.5130] manager: (tap136c69a7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8007109-81c3-4164-b216-473196f68ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 systemd-udevd[386948]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.546 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0c5286-30eb-4f17-8863-d3166b6e9b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.549 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e26eca-00be-4b6d-abf1-383767bf7fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 NetworkManager[48891]: <info>  [1764090276.5735] device (tap136c69a7-c0): carrier: link connected
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.579 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0a3601-3a7d-4ea3-a04b-e93b7332ff92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.596 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4986fdbe-ce31-4507-b755-7c77efa92e45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap136c69a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:85:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673416, 'reachable_time': 42133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387005, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.614 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c08235ed-a52b-45c9-a349-be77a5835900]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:8524'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673416, 'tstamp': 673416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387006, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.630 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1353b96a-4a2e-4efd-8ccf-8e730796bb25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap136c69a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:85:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673416, 'reachable_time': 42133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387008, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.664 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1b153b-a006-4a91-bcdb-c479764c7e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.728 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2018b91d-de80-4787-9c23-3d1dece207ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap136c69a7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.730 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap136c69a7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:36 np0005535469 kernel: tap136c69a7-c0: entered promiscuous mode
Nov 25 12:04:36 np0005535469 NetworkManager[48891]: <info>  [1764090276.7321] manager: (tap136c69a7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.736 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap136c69a7-c0, col_values=(('external_ids', {'iface-id': '77b25960-ed8e-4022-ac16-27312b8189a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:36Z|01241|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.738 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/136c69a7-c4f8-40a2-be13-7ef82b7b3709.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/136c69a7-c4f8-40a2-be13-7ef82b7b3709.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6362363-4d1e-41dc-9c9a-2f112e17ae89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.740 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-136c69a7-c4f8-40a2-be13-7ef82b7b3709
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/136c69a7-c4f8-40a2-be13-7ef82b7b3709.pid.haproxy
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 136c69a7-c4f8-40a2-be13-7ef82b7b3709
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:04:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.741 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'env', 'PROCESS_TAG=haproxy-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/136c69a7-c4f8-40a2-be13-7ef82b7b3709.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.803 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090276.8025553, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.804 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Started (Lifecycle Event)#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.826 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.829 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090276.8026872, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.829 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.846 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.850 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.867713564 +0000 UTC m=+0.042593810 container create e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:04:36 np0005535469 nova_compute[254092]: 2025-11-25 17:04:36.870 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:04:36 np0005535469 systemd[1]: Started libpod-conmon-e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c.scope.
Nov 25 12:04:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.850577453 +0000 UTC m=+0.025457719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.94705646 +0000 UTC m=+0.121936726 container init e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.955152732 +0000 UTC m=+0.130032978 container start e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:36 np0005535469 dazzling_hellman[387131]: 167 167
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.958676178 +0000 UTC m=+0.133556444 container attach e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:04:36 np0005535469 systemd[1]: libpod-e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c.scope: Deactivated successfully.
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.961802504 +0000 UTC m=+0.136682760 container died e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:04:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e88c1eb3cb7eda942452ec02566e68cd9d0585bbd09bf543ec5aed6e7997d9a8-merged.mount: Deactivated successfully.
Nov 25 12:04:36 np0005535469 podman[387115]: 2025-11-25 17:04:36.993359029 +0000 UTC m=+0.168239275 container remove e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:04:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:04:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684110260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:04:37 np0005535469 systemd[1]: libpod-conmon-e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c.scope: Deactivated successfully.
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.036 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.043 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.054 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.069 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.070 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:37 np0005535469 podman[387172]: 2025-11-25 17:04:37.084873819 +0000 UTC m=+0.041672294 container create be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:04:37 np0005535469 systemd[1]: Started libpod-conmon-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f.scope.
Nov 25 12:04:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ef1abaaa64a067771ec5da28f54ab4615764922df7ecc3992312613eab4146/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:37 np0005535469 podman[387193]: 2025-11-25 17:04:37.156269278 +0000 UTC m=+0.037805798 container create 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:04:37 np0005535469 podman[387172]: 2025-11-25 17:04:37.064118521 +0000 UTC m=+0.020917006 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:04:37 np0005535469 podman[387172]: 2025-11-25 17:04:37.165706587 +0000 UTC m=+0.122505072 container init be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:04:37 np0005535469 podman[387172]: 2025-11-25 17:04:37.171743442 +0000 UTC m=+0.128541917 container start be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 12:04:37 np0005535469 systemd[1]: Started libpod-conmon-0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8.scope.
Nov 25 12:04:37 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : New worker (387217) forked
Nov 25 12:04:37 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : Loading success.
Nov 25 12:04:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:37 np0005535469 podman[387193]: 2025-11-25 17:04:37.14031465 +0000 UTC m=+0.021851200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:04:37 np0005535469 podman[387193]: 2025-11-25 17:04:37.252757104 +0000 UTC m=+0.134293624 container init 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:04:37 np0005535469 podman[387193]: 2025-11-25 17:04:37.260828196 +0000 UTC m=+0.142364716 container start 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:04:37 np0005535469 podman[387193]: 2025-11-25 17:04:37.264365752 +0000 UTC m=+0.145902262 container attach 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.392 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.409 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.410 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance network_info: |[{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.410 254096 DEBUG oslo_concurrency.lockutils [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.410 254096 DEBUG nova.network.neutron [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing network info cache for port b68643ca-2301-486a-984d-43fc41d1f773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.413 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start _get_guest_xml network_info=[{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.417 254096 WARNING nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.424 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.425 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.428 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.428 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.429 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.429 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.430 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.430 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.430 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.432 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.432 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.432 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.436 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:04:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794994296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.920 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.944 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:37 np0005535469 nova_compute[254092]: 2025-11-25 17:04:37.948 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]: {
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "osd_id": 1,
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "type": "bluestore"
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:    },
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "osd_id": 2,
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "type": "bluestore"
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:    },
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "osd_id": 0,
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:        "type": "bluestore"
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]:    }
Nov 25 12:04:38 np0005535469 interesting_proskuriakova[387215]: }
Nov 25 12:04:38 np0005535469 systemd[1]: libpod-0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8.scope: Deactivated successfully.
Nov 25 12:04:38 np0005535469 podman[387193]: 2025-11-25 17:04:38.202591856 +0000 UTC m=+1.084128376 container died 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:04:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc-merged.mount: Deactivated successfully.
Nov 25 12:04:38 np0005535469 podman[387193]: 2025-11-25 17:04:38.269886942 +0000 UTC m=+1.151423462 container remove 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:04:38 np0005535469 systemd[1]: libpod-conmon-0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8.scope: Deactivated successfully.
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:04:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2bfa7d58-78fd-4e61-b93c-ae769a6cff5b does not exist
Nov 25 12:04:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev efa73423-c184-40b5-86d4-5eb73c0c55ca does not exist
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015512114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:04:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.381 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.385 254096 DEBUG nova.virt.libvirt.vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=121,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItBRZm2yDqYd+wVxwOqSlZSDzlMlxevUAlkeawR8Ad9B/U2D7ctHPpd22ukw4f9eeesjVGXob1S5He80yqU0J/RQxjf579CY5kWM2qsSzAfmn3IDw4uBaYcepmxt/Q0ZA==',key_name='tempest-TestSecurityGroupsBasicOps-1015716509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-h2rb4r91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:30Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=c5d6f631-cec2-431d-b476-feafa21e4f80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.385 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.386 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.387 254096 DEBUG nova.objects.instance [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5d6f631-cec2-431d-b476-feafa21e4f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.403 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <uuid>c5d6f631-cec2-431d-b476-feafa21e4f80</uuid>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <name>instance-00000079</name>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184</nova:name>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:04:37</nova:creationTime>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <nova:port uuid="b68643ca-2301-486a-984d-43fc41d1f773">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <entry name="serial">c5d6f631-cec2-431d-b476-feafa21e4f80</entry>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <entry name="uuid">c5d6f631-cec2-431d-b476-feafa21e4f80</entry>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c5d6f631-cec2-431d-b476-feafa21e4f80_disk">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:dd:cf:7c"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <target dev="tapb68643ca-23"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/console.log" append="off"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:04:38 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:04:38 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:04:38 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:04:38 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.404 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Preparing to wait for external event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.404 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.404 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.405 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.405 254096 DEBUG nova.virt.libvirt.vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=121,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItBRZm2yDqYd+wVxwOqSlZSDzlMlxevUAlkeawR8Ad9B/U2D7ctHPpd22ukw4f9eeesjVGXob1S5He80yqU0J/RQxjf579CY5kWM2qsSzAfmn3IDw4uBaYcepmxt/Q0ZA==',key_name='tempest-TestSecurityGroupsBasicOps-1015716509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-h2rb4r91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:30Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=c5d6f631-cec2-431d-b476-feafa21e4f80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.406 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.406 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.406 254096 DEBUG os_vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.407 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.408 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.411 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb68643ca-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.411 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb68643ca-23, col_values=(('external_ids', {'iface-id': 'b68643ca-2301-486a-984d-43fc41d1f773', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:cf:7c', 'vm-uuid': 'c5d6f631-cec2-431d-b476-feafa21e4f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:38 np0005535469 NetworkManager[48891]: <info>  [1764090278.4136] manager: (tapb68643ca-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.422 254096 INFO os_vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23')#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:dd:cf:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Using config drive#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.489 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.698 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.698 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Processing event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 WARNING nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.701 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.705 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090278.7047448, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.705 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.706 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.710 254096 INFO nova.virt.libvirt.driver [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance spawned successfully.#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.710 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.733 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.739 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.742 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.742 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.742 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.743 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.743 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.743 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.800 254096 INFO nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 11.89 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.800 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.863 254096 INFO nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 12.91 seconds to build instance.#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.888 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Creating config drive at /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.896 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptypfhp13 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.946 254096 DEBUG nova.network.neutron [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updated VIF entry in instance network info cache for port b68643ca-2301-486a-984d-43fc41d1f773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.947 254096 DEBUG nova.network.neutron [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.950 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:38 np0005535469 nova_compute[254092]: 2025-11-25 17:04:38.961 254096 DEBUG oslo_concurrency.lockutils [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.049 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptypfhp13" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.086 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.092 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.139 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.272 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.273 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deleting local config drive /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config because it was imported into RBD.#033[00m
Nov 25 12:04:39 np0005535469 kernel: tapb68643ca-23: entered promiscuous mode
Nov 25 12:04:39 np0005535469 NetworkManager[48891]: <info>  [1764090279.3737] manager: (tapb68643ca-23): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Nov 25 12:04:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:39Z|01242|binding|INFO|Claiming lport b68643ca-2301-486a-984d-43fc41d1f773 for this chassis.
Nov 25 12:04:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:39Z|01243|binding|INFO|b68643ca-2301-486a-984d-43fc41d1f773: Claiming fa:16:3e:dd:cf:7c 10.100.0.6
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:39 np0005535469 NetworkManager[48891]: <info>  [1764090279.3943] device (tapb68643ca-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.389 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:cf:7c 10.100.0.6'], port_security=['fa:16:3e:dd:cf:7c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5d6f631-cec2-431d-b476-feafa21e4f80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '141c3ba7-cca0-4ab6-b807-4cd786e57588 76216c71-c251-4ffd-abb8-124655d34fd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fde4dc0-c42b-4191-847a-6d09a5cf68e7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b68643ca-2301-486a-984d-43fc41d1f773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.390 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b68643ca-2301-486a-984d-43fc41d1f773 in datapath 14274c68-4eff-4ea0-b9e4-434b52f5c24f bound to our chassis#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.391 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14274c68-4eff-4ea0-b9e4-434b52f5c24f#033[00m
Nov 25 12:04:39 np0005535469 NetworkManager[48891]: <info>  [1764090279.4002] device (tapb68643ca-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.410 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ed97a8-abb1-4ad1-af80-8c193546e401]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.411 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14274c68-41 in ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.414 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14274c68-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8c719a-f95a-4fed-9a1c-ee1ccbcc7315]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87db727e-31aa-4f86-89b5-47ad946ffab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.431 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc66dc3-8e2d-49d0-882a-2085c82a4634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 systemd-machined[216343]: New machine qemu-153-instance-00000079.
Nov 25 12:04:39 np0005535469 systemd[1]: Started Virtual Machine qemu-153-instance-00000079.
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c02ff18b-1ff5-4673-a6c4-e49ada03f499]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:39Z|01244|binding|INFO|Setting lport b68643ca-2301-486a-984d-43fc41d1f773 ovn-installed in OVS
Nov 25 12:04:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:39Z|01245|binding|INFO|Setting lport b68643ca-2301-486a-984d-43fc41d1f773 up in Southbound
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.493 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[503a3439-9ae3-4ffe-aa90-53bc40138a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 systemd-udevd[387459]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:04:39 np0005535469 NetworkManager[48891]: <info>  [1764090279.5022] manager: (tap14274c68-40): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6ab014-57bc-469d-b6ff-9570c26d4777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.535 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e79349f9-3445-4858-b67d-f709c7603629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.537 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a33b7696-b9d8-4843-9d16-82db54c0d475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 NetworkManager[48891]: <info>  [1764090279.5662] device (tap14274c68-40): carrier: link connected
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.573 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b08c6657-42bd-4844-8dc5-dc8efd6d9139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[90c1077d-8936-456c-9879-7c672fa9ef3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14274c68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:c8:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673715, 'reachable_time': 25107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387489, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[804ad0a5-9689-4ac0-ac18-66fbf5ae8c5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:c8c9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673715, 'tstamp': 673715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387490, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.633 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03f51289-8ea7-4060-982d-102b8e2fb33e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14274c68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:c8:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673715, 'reachable_time': 25107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387491, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.683 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3c9d7a-d4ff-4a22-871f-7a5d2cedf91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[385fc5cd-013b-4e2f-bfed-542e64b9cf4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14274c68-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14274c68-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:39 np0005535469 NetworkManager[48891]: <info>  [1764090279.7506] manager: (tap14274c68-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Nov 25 12:04:39 np0005535469 kernel: tap14274c68-40: entered promiscuous mode
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.758 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14274c68-40, col_values=(('external_ids', {'iface-id': '426285e8-6729-4138-ae8b-27c8efd6b758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:04:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:39Z|01246|binding|INFO|Releasing lport 426285e8-6729-4138-ae8b-27c8efd6b758 from this chassis (sb_readonly=0)
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.763 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14274c68-4eff-4ea0-b9e4-434b52f5c24f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14274c68-4eff-4ea0-b9e4-434b52f5c24f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a088ad7-9f91-45cc-bddf-00ec87719651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.765 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-14274c68-4eff-4ea0-b9e4-434b52f5c24f
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/14274c68-4eff-4ea0-b9e4-434b52f5c24f.pid.haproxy
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 14274c68-4eff-4ea0-b9e4-434b52f5c24f
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:04:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.767 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'env', 'PROCESS_TAG=haproxy-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14274c68-4eff-4ea0-b9e4-434b52f5c24f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:04:39 np0005535469 nova_compute[254092]: 2025-11-25 17:04:39.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.137 254096 DEBUG nova.compute.manager [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.138 254096 DEBUG oslo_concurrency.lockutils [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.139 254096 DEBUG oslo_concurrency.lockutils [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.139 254096 DEBUG oslo_concurrency.lockutils [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.139 254096 DEBUG nova.compute.manager [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Processing event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:04:40
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:40 np0005535469 podman[387523]: 2025-11-25 17:04:40.218440418 +0000 UTC m=+0.050418514 container create 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:04:40 np0005535469 systemd[1]: Started libpod-conmon-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c.scope.
Nov 25 12:04:40 np0005535469 podman[387523]: 2025-11-25 17:04:40.18938518 +0000 UTC m=+0.021363276 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:04:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:04:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc152218d65abff31069b204813b7e011958f1c03540ebac42a934b9611177c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:04:40 np0005535469 podman[387523]: 2025-11-25 17:04:40.310200034 +0000 UTC m=+0.142178110 container init 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 12:04:40 np0005535469 podman[387523]: 2025-11-25 17:04:40.314935834 +0000 UTC m=+0.146913910 container start 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 12:04:40 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : New worker (387545) forked
Nov 25 12:04:40 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : Loading success.
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.550 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.574 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090280.574189, c5d6f631-cec2-431d-b476-feafa21e4f80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.575 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Started (Lifecycle Event)#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.581 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.604 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.605 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.612 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.617 254096 INFO nova.virt.libvirt.driver [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance spawned successfully.#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.619 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.673 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.677 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090280.5751793, c5d6f631-cec2-431d-b476-feafa21e4f80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.680 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.700 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.701 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.701 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.702 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.702 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.702 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.721 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.724 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090280.597982, c5d6f631-cec2-431d-b476-feafa21e4f80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.724 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.747 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.750 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.755 254096 INFO nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 10.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.755 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.763 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.811 254096 INFO nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 11.52 seconds to build instance.#033[00m
Nov 25 12:04:40 np0005535469 nova_compute[254092]: 2025-11-25 17:04:40.823 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:42 np0005535469 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG nova.compute.manager [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:42 np0005535469 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG oslo_concurrency.lockutils [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:42 np0005535469 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG oslo_concurrency.lockutils [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:42 np0005535469 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG oslo_concurrency.lockutils [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:42 np0005535469 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG nova.compute.manager [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] No waiting events found dispatching network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:04:42 np0005535469 nova_compute[254092]: 2025-11-25 17:04:42.216 254096 WARNING nova.compute.manager [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received unexpected event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:04:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 113 op/s
Nov 25 12:04:43 np0005535469 nova_compute[254092]: 2025-11-25 17:04:43.475 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.160 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:45 np0005535469 NetworkManager[48891]: <info>  [1764090285.3937] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Nov 25 12:04:45 np0005535469 NetworkManager[48891]: <info>  [1764090285.3943] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Nov 25 12:04:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:45Z|01247|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 12:04:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:45Z|01248|binding|INFO|Releasing lport 426285e8-6729-4138-ae8b-27c8efd6b758 from this chassis (sb_readonly=0)
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:45Z|01249|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 12:04:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:45Z|01250|binding|INFO|Releasing lport 426285e8-6729-4138-ae8b-27c8efd6b758 from this chassis (sb_readonly=0)
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.418 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:45 np0005535469 podman[387597]: 2025-11-25 17:04:45.648987697 +0000 UTC m=+0.062086133 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 12:04:45 np0005535469 podman[387598]: 2025-11-25 17:04:45.654704864 +0000 UTC m=+0.063499033 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 12:04:45 np0005535469 podman[387599]: 2025-11-25 17:04:45.701896249 +0000 UTC m=+0.107273453 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.798 254096 DEBUG nova.compute.manager [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG nova.compute.manager [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG oslo_concurrency.lockutils [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG oslo_concurrency.lockutils [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:45 np0005535469 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG nova.network.neutron [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:04:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.479 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.502 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.503 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid c5d6f631-cec2-431d-b476-feafa21e4f80 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.504 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.505 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.505 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.506 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.547 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.757 254096 DEBUG nova.network.neutron [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.759 254096 DEBUG nova.network.neutron [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.783 254096 DEBUG oslo_concurrency.lockutils [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.877 254096 DEBUG nova.compute.manager [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-changed-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.878 254096 DEBUG nova.compute.manager [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing instance network info cache due to event network-changed-b68643ca-2301-486a-984d-43fc41d1f773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.879 254096 DEBUG oslo_concurrency.lockutils [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.879 254096 DEBUG oslo_concurrency.lockutils [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:04:47 np0005535469 nova_compute[254092]: 2025-11-25 17:04:47.879 254096 DEBUG nova.network.neutron [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing network info cache for port b68643ca-2301-486a-984d-43fc41d1f773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:04:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 25 12:04:48 np0005535469 nova_compute[254092]: 2025-11-25 17:04:48.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:49 np0005535469 nova_compute[254092]: 2025-11-25 17:04:49.567 254096 DEBUG nova.network.neutron [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updated VIF entry in instance network info cache for port b68643ca-2301-486a-984d-43fc41d1f773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:04:49 np0005535469 nova_compute[254092]: 2025-11-25 17:04:49.569 254096 DEBUG nova.network.neutron [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:04:49 np0005535469 nova_compute[254092]: 2025-11-25 17:04:49.587 254096 DEBUG oslo_concurrency.lockutils [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:04:50 np0005535469 nova_compute[254092]: 2025-11-25 17:04:50.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 78 KiB/s wr, 148 op/s
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007068107174252784 of space, bias 1.0, pg target 0.21204321522758351 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:04:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:04:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:51Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:d6:7a 10.100.0.11
Nov 25 12:04:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:51Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:d6:7a 10.100.0.11
Nov 25 12:04:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 146 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Nov 25 12:04:53 np0005535469 nova_compute[254092]: 2025-11-25 17:04:53.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:04:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:54Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:cf:7c 10.100.0.6
Nov 25 12:04:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:04:54Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:cf:7c 10.100.0.6
Nov 25 12:04:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 146 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 117 op/s
Nov 25 12:04:55 np0005535469 nova_compute[254092]: 2025-11-25 17:04:55.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Nov 25 12:04:57 np0005535469 nova_compute[254092]: 2025-11-25 17:04:57.102 254096 INFO nova.compute.manager [None req-6eaa24f1-d76e-4be5-9453-22e72120ca67 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Get console output#033[00m
Nov 25 12:04:57 np0005535469 nova_compute[254092]: 2025-11-25 17:04:57.109 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:04:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 12:04:58 np0005535469 nova_compute[254092]: 2025-11-25 17:04:58.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:04:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:00 np0005535469 nova_compute[254092]: 2025-11-25 17:05:00.210 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:00 np0005535469 nova_compute[254092]: 2025-11-25 17:05:00.210 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:00 np0005535469 nova_compute[254092]: 2025-11-25 17:05:00.211 254096 DEBUG nova.objects.instance [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:00 np0005535469 nova_compute[254092]: 2025-11-25 17:05:00.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 12:05:01 np0005535469 nova_compute[254092]: 2025-11-25 17:05:01.519 254096 DEBUG nova.objects.instance [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_requests' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:01 np0005535469 nova_compute[254092]: 2025-11-25 17:05:01.536 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:05:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 4.2 MiB/s wr, 127 op/s
Nov 25 12:05:02 np0005535469 nova_compute[254092]: 2025-11-25 17:05:02.510 254096 DEBUG nova.policy [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:05:03 np0005535469 nova_compute[254092]: 2025-11-25 17:05:03.120 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully created port: d5b14553-424b-4985-9ed6-2f4afac92c00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:05:03 np0005535469 nova_compute[254092]: 2025-11-25 17:05:03.482 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.287 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully updated port: d5b14553-424b-4985-9ed6-2f4afac92c00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.311 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.311 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.311 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:05:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 3.1 MiB/s wr, 87 op/s
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.433 254096 DEBUG nova.compute.manager [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.434 254096 DEBUG nova.compute.manager [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:05:04 np0005535469 nova_compute[254092]: 2025-11-25 17:05:04.435 254096 DEBUG oslo_concurrency.lockutils [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:05 np0005535469 nova_compute[254092]: 2025-11-25 17:05:05.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 427 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.699 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.700 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.700 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.700 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.701 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.702 254096 INFO nova.compute.manager [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Terminating instance#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.703 254096 DEBUG nova.compute.manager [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:05:06 np0005535469 kernel: tapb68643ca-23 (unregistering): left promiscuous mode
Nov 25 12:05:06 np0005535469 NetworkManager[48891]: <info>  [1764090306.7710] device (tapb68643ca-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG nova.compute.manager [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-changed-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG nova.compute.manager [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing instance network info cache due to event network-changed-b68643ca-2301-486a-984d-43fc41d1f773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG oslo_concurrency.lockutils [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG oslo_concurrency.lockutils [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.771 254096 DEBUG nova.network.neutron [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing network info cache for port b68643ca-2301-486a-984d-43fc41d1f773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:05:06 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:06Z|01251|binding|INFO|Releasing lport b68643ca-2301-486a-984d-43fc41d1f773 from this chassis (sb_readonly=0)
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:06 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:06Z|01252|binding|INFO|Setting lport b68643ca-2301-486a-984d-43fc41d1f773 down in Southbound
Nov 25 12:05:06 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:06Z|01253|binding|INFO|Removing iface tapb68643ca-23 ovn-installed in OVS
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.787 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:cf:7c 10.100.0.6'], port_security=['fa:16:3e:dd:cf:7c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5d6f631-cec2-431d-b476-feafa21e4f80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '141c3ba7-cca0-4ab6-b807-4cd786e57588 76216c71-c251-4ffd-abb8-124655d34fd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fde4dc0-c42b-4191-847a-6d09a5cf68e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b68643ca-2301-486a-984d-43fc41d1f773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:05:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.788 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b68643ca-2301-486a-984d-43fc41d1f773 in datapath 14274c68-4eff-4ea0-b9e4-434b52f5c24f unbound from our chassis#033[00m
Nov 25 12:05:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.789 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14274c68-4eff-4ea0-b9e4-434b52f5c24f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:05:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.790 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edf06ffc-cb01-44c4-903d-62e3a72a081f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.790 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f namespace which is not needed anymore#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:06 np0005535469 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 25 12:05:06 np0005535469 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Consumed 14.214s CPU time.
Nov 25 12:05:06 np0005535469 systemd-machined[216343]: Machine qemu-153-instance-00000079 terminated.
Nov 25 12:05:06 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : haproxy version is 2.8.14-c23fe91
Nov 25 12:05:06 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : path to executable is /usr/sbin/haproxy
Nov 25 12:05:06 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [WARNING]  (387543) : Exiting Master process...
Nov 25 12:05:06 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [ALERT]    (387543) : Current worker (387545) exited with code 143 (Terminated)
Nov 25 12:05:06 np0005535469 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [WARNING]  (387543) : All workers exited. Exiting... (0)
Nov 25 12:05:06 np0005535469 systemd[1]: libpod-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c.scope: Deactivated successfully.
Nov 25 12:05:06 np0005535469 podman[387685]: 2025-11-25 17:05:06.941392189 +0000 UTC m=+0.058544727 container died 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.949 254096 INFO nova.virt.libvirt.driver [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance destroyed successfully.#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.950 254096 DEBUG nova.objects.instance [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid c5d6f631-cec2-431d-b476-feafa21e4f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.965 254096 DEBUG nova.virt.libvirt.vif [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=121,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItBRZm2yDqYd+wVxwOqSlZSDzlMlxevUAlkeawR8Ad9B/U2D7ctHPpd22ukw4f9eeesjVGXob1S5He80yqU0J/RQxjf579CY5kWM2qsSzAfmn3IDw4uBaYcepmxt/Q0ZA==',key_name='tempest-TestSecurityGroupsBasicOps-1015716509',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-h2rb4r91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:40Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=c5d6f631-cec2-431d-b476-feafa21e4f80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.968 254096 DEBUG nova.network.os_vif_util [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.968 254096 DEBUG nova.network.os_vif_util [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.969 254096 DEBUG os_vif [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.972 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb68643ca-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:05:06 np0005535469 nova_compute[254092]: 2025-11-25 17:05:06.979 254096 INFO os_vif [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23')#033[00m
Nov 25 12:05:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4bc152218d65abff31069b204813b7e011958f1c03540ebac42a934b9611177c-merged.mount: Deactivated successfully.
Nov 25 12:05:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c-userdata-shm.mount: Deactivated successfully.
Nov 25 12:05:07 np0005535469 podman[387685]: 2025-11-25 17:05:07.004065908 +0000 UTC m=+0.121218456 container cleanup 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:05:07 np0005535469 systemd[1]: libpod-conmon-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c.scope: Deactivated successfully.
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.134 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.206 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.207 254096 DEBUG oslo_concurrency.lockutils [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.208 254096 DEBUG nova.network.neutron [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.211 254096 DEBUG nova.virt.libvirt.vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.212 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.213 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.213 254096 DEBUG os_vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.214 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.214 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.218 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5b14553-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.218 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5b14553-42, col_values=(('external_ids', {'iface-id': 'd5b14553-424b-4985-9ed6-2f4afac92c00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:9c:66', 'vm-uuid': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.2217] manager: (tapd5b14553-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.230 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.234 254096 INFO os_vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42')#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.235 254096 DEBUG nova.virt.libvirt.vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.236 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.236 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.241 254096 DEBUG nova.virt.libvirt.guest [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] attach device xml: <interface type="ethernet">
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:8b:9c:66"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <target dev="tapd5b14553-42"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]: </interface>
Nov 25 12:05:07 np0005535469 nova_compute[254092]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 12:05:07 np0005535469 kernel: tapd5b14553-42: entered promiscuous mode
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.2568] manager: (tapd5b14553-42): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Nov 25 12:05:07 np0005535469 systemd-udevd[387665]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:05:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:07Z|01254|binding|INFO|Claiming lport d5b14553-424b-4985-9ed6-2f4afac92c00 for this chassis.
Nov 25 12:05:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:07Z|01255|binding|INFO|d5b14553-424b-4985-9ed6-2f4afac92c00: Claiming fa:16:3e:8b:9c:66 10.100.0.28
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.2793] device (tapd5b14553-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.2809] device (tapd5b14553-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:07Z|01256|binding|INFO|Setting lport d5b14553-424b-4985-9ed6-2f4afac92c00 ovn-installed in OVS
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:07Z|01257|binding|INFO|Setting lport d5b14553-424b-4985-9ed6-2f4afac92c00 up in Southbound
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.413 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9c:66 10.100.0.28'], port_security=['fa:16:3e:8b:9c:66 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d5b14553-424b-4985-9ed6-2f4afac92c00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:7c:d6:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:8b:9c:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:05:07 np0005535469 podman[387740]: 2025-11-25 17:05:07.450882244 +0000 UTC m=+0.425075771 container remove 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.458 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6f95fa-bf52-4df3-a5cc-f94e0c069c44]: (4, ('Tue Nov 25 05:05:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f (69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c)\n69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c\nTue Nov 25 05:05:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f (69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c)\n69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.460 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[213943e1-91cb-4d35-b72c-54e6139cc295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.462 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14274c68-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 kernel: tap14274c68-40: left promiscuous mode
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.561 254096 DEBUG nova.virt.libvirt.guest [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:05:07</nova:creationTime>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:05:07 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    <nova:port uuid="d5b14553-424b-4985-9ed6-2f4afac92c00">
Nov 25 12:05:07 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:05:07 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:05:07 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:05:07 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.568 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.570 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0343eee6-43ef-475a-9858-83165949b92a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.581 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7313a9a-e38e-4069-88ed-2ab35c439d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac644b8-15d9-4f42-9763-8d8c772a3b55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a78f81c-07c4-4513-8b01-b5205edb4f77]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673707, 'reachable_time': 28531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387765, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 systemd[1]: run-netns-ovnmeta\x2d14274c68\x2d4eff\x2d4ea0\x2db9e4\x2d434b52f5c24f.mount: Deactivated successfully.
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.601 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.601 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d4baad06-0ecc-474a-ba98-fab105f75a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.602 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d5b14553-424b-4985-9ed6-2f4afac92c00 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f unbound from our chassis#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.603 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.614 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.617 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fa7c7b-ad81-4a44-9a2a-601a0a13c64e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.618 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a48b006-a1 in ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.621 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a48b006-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.621 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[143524c2-908c-4524-8f16-e8e0d2495be1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.621 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e93db2fe-0c11-4a56-9e9c-8e9149ece79e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.633 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[83e158a4-b01f-451c-b463-a632e16a4ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.656 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b119000-303a-4110-ab8c-e69d1b0d357a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.693 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[86b53e8c-10a5-43bf-8b14-0b15d10dda87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.699 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dae71665-97b3-43bf-9e62-10c138079a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.7012] manager: (tap4a48b006-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.744 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c17975e6-29a5-44ff-a5a5-b9dd7a0eb014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.749 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7084f8-c80f-41e7-aeb4-89827f0c3762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.7829] device (tap4a48b006-a0): carrier: link connected
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.790 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[87692b4a-7c18-4270-b24b-5e2ea0090cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[455e2716-e0d9-4bcf-8b1d-e525b8bd37b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387789, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3caed1ee-4581-49f1-a945-78744d2b2ff1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:295'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676537, 'tstamp': 676537}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387790, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.861 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0cf4e2-77d5-484f-b19d-a3a215642e35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387791, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f64ebeb8-dbc7-465e-a3f7-3f6fe632913b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39e88408-72be-4ec1-9012-1f1993ad1064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.958 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a48b006-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 NetworkManager[48891]: <info>  [1764090307.9616] manager: (tap4a48b006-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Nov 25 12:05:07 np0005535469 kernel: tap4a48b006-a0: entered promiscuous mode
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.964 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a48b006-a0, col_values=(('external_ids', {'iface-id': '85d5b09a-dc15-4154-acec-abe7a2e5fc19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:07Z|01258|binding|INFO|Releasing lport 85d5b09a-dc15-4154-acec-abe7a2e5fc19 from this chassis (sb_readonly=0)
Nov 25 12:05:07 np0005535469 nova_compute[254092]: 2025-11-25 17:05:07.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.984 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a48b006-a4d1-4fa5-88f1-79386ed2958f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a48b006-a4d1-4fa5-88f1-79386ed2958f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.985 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd42f658-8acf-407c-9332-974b9936fd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.986 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/4a48b006-a4d1-4fa5-88f1-79386ed2958f.pid.haproxy
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:05:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.987 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'env', 'PROCESS_TAG=haproxy-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a48b006-a4d1-4fa5-88f1-79386ed2958f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:05:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 31 KiB/s wr, 1 op/s
Nov 25 12:05:08 np0005535469 podman[387823]: 2025-11-25 17:05:08.471722083 +0000 UTC m=+0.122248354 container create f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:05:08 np0005535469 podman[387823]: 2025-11-25 17:05:08.383681249 +0000 UTC m=+0.034207590 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:05:08 np0005535469 systemd[1]: Started libpod-conmon-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465.scope.
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.548 254096 DEBUG nova.network.neutron [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updated VIF entry in instance network info cache for port b68643ca-2301-486a-984d-43fc41d1f773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.549 254096 DEBUG nova.network.neutron [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5bfab8528d115e346b339b56d12524bd34cf4549cb171cd97809410d53295b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.611 254096 DEBUG oslo_concurrency.lockutils [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:08 np0005535469 podman[387823]: 2025-11-25 17:05:08.615421884 +0000 UTC m=+0.265948155 container init f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:05:08 np0005535469 podman[387823]: 2025-11-25 17:05:08.621054509 +0000 UTC m=+0.271580760 container start f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 12:05:08 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : New worker (387844) forked
Nov 25 12:05:08 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : Loading success.
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.826 254096 DEBUG nova.network.neutron [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.826 254096 DEBUG nova.network.neutron [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.849 254096 DEBUG oslo_concurrency.lockutils [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.862 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-unplugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.862 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.862 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] No waiting events found dispatching network-vif-unplugged-b68643ca-2301-486a-984d-43fc41d1f773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-unplugged-b68643ca-2301-486a-984d-43fc41d1f773 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] No waiting events found dispatching network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 WARNING nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received unexpected event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 WARNING nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:05:08 np0005535469 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 WARNING nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:05:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:09Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:9c:66 10.100.0.28
Nov 25 12:05:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:09Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:9c:66 10.100.0.28
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:05:10 np0005535469 nova_compute[254092]: 2025-11-25 17:05:10.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 181 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 31 KiB/s wr, 18 op/s
Nov 25 12:05:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:10.981 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:05:10 np0005535469 nova_compute[254092]: 2025-11-25 17:05:10.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:10.982 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.015 254096 INFO nova.virt.libvirt.driver [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deleting instance files /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80_del#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.016 254096 INFO nova.virt.libvirt.driver [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deletion of /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80_del complete#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.061 254096 INFO nova.compute.manager [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 4.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.061 254096 DEBUG oslo.service.loopingcall [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.062 254096 DEBUG nova.compute.manager [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.062 254096 DEBUG nova.network.neutron [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.952 254096 DEBUG nova.network.neutron [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:11 np0005535469 nova_compute[254092]: 2025-11-25 17:05:11.974 254096 INFO nova.compute.manager [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.017 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.018 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.046 254096 DEBUG nova.compute.manager [req-e06ce0ac-d76c-4b57-8bb9-703fe7dc695e req-3bfdba21-d10d-4081-b61a-4aac228889b0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-deleted-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.104 254096 DEBUG oslo_concurrency.processutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 34 KiB/s wr, 23 op/s
Nov 25 12:05:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:05:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221322770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.516 254096 DEBUG oslo_concurrency.processutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.522 254096 DEBUG nova.compute.provider_tree [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.539 254096 DEBUG nova.scheduler.client.report [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.561 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.597 254096 INFO nova.scheduler.client.report [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance c5d6f631-cec2-431d-b476-feafa21e4f80#033[00m
Nov 25 12:05:12 np0005535469 nova_compute[254092]: 2025-11-25 17:05:12.662 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:13.643 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:13.658 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 11 KiB/s wr, 22 op/s
Nov 25 12:05:15 np0005535469 nova_compute[254092]: 2025-11-25 17:05:15.274 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:16Z|01259|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 12:05:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:16Z|01260|binding|INFO|Releasing lport 85d5b09a-dc15-4154-acec-abe7a2e5fc19 from this chassis (sb_readonly=0)
Nov 25 12:05:16 np0005535469 nova_compute[254092]: 2025-11-25 17:05:16.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 29 op/s
Nov 25 12:05:16 np0005535469 podman[387877]: 2025-11-25 17:05:16.63838166 +0000 UTC m=+0.054431124 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:05:16 np0005535469 podman[387878]: 2025-11-25 17:05:16.677523353 +0000 UTC m=+0.091572332 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:05:16 np0005535469 podman[387876]: 2025-11-25 17:05:16.680449473 +0000 UTC m=+0.090242405 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 25 12:05:16 np0005535469 nova_compute[254092]: 2025-11-25 17:05:16.972 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:16 np0005535469 nova_compute[254092]: 2025-11-25 17:05:16.973 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:16.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:16 np0005535469 nova_compute[254092]: 2025-11-25 17:05:16.987 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.048 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.048 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.055 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.055 254096 INFO nova.compute.claims [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.161 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:05:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275450836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.622 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.628 254096 DEBUG nova.compute.provider_tree [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.655 254096 DEBUG nova.scheduler.client.report [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.683 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.683 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.725 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.726 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.745 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.764 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.909 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.911 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.912 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Creating image(s)#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.950 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:17 np0005535469 nova_compute[254092]: 2025-11-25 17:05:17.986 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.022 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.027 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.070 254096 DEBUG nova.policy [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.108 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.108 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.109 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.110 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.134 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.138 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b747b045-786f-49a8-907c-cc222a07fa05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 8.2 KiB/s wr, 28 op/s
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.441 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b747b045-786f-49a8-907c-cc222a07fa05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.501 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.579 254096 DEBUG nova.objects.instance [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid b747b045-786f-49a8-907c-cc222a07fa05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.591 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.592 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Ensure instance console log exists: /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.592 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.592 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:18 np0005535469 nova_compute[254092]: 2025-11-25 17:05:18.593 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:19 np0005535469 nova_compute[254092]: 2025-11-25 17:05:19.536 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Successfully created port: 4f707331-3a0e-47b6-98ee-569db81bd594 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:05:19 np0005535469 nova_compute[254092]: 2025-11-25 17:05:19.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 132 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 380 KiB/s wr, 53 op/s
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.816 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Successfully updated port: 4f707331-3a0e-47b6-98ee-569db81bd594 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.838 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.838 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.838 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.913 254096 DEBUG nova.compute.manager [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-changed-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.913 254096 DEBUG nova.compute.manager [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Refreshing instance network info cache due to event network-changed-4f707331-3a0e-47b6-98ee-569db81bd594. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:05:20 np0005535469 nova_compute[254092]: 2025-11-25 17:05:20.914 254096 DEBUG oslo_concurrency.lockutils [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:21 np0005535469 nova_compute[254092]: 2025-11-25 17:05:21.098 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:05:21 np0005535469 nova_compute[254092]: 2025-11-25 17:05:21.948 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090306.947035, c5d6f631-cec2-431d-b476-feafa21e4f80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:21 np0005535469 nova_compute[254092]: 2025-11-25 17:05:21.949 254096 INFO nova.compute.manager [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:05:21 np0005535469 nova_compute[254092]: 2025-11-25 17:05:21.964 254096 DEBUG nova.compute.manager [None req-bf84427c-3325-4aa3-a504-ac65871c6100 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.111 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updating instance_info_cache with network_info: [{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.140 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.140 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance network_info: |[{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.141 254096 DEBUG oslo_concurrency.lockutils [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.141 254096 DEBUG nova.network.neutron [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Refreshing network info cache for port 4f707331-3a0e-47b6-98ee-569db81bd594 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.145 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start _get_guest_xml network_info=[{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.149 254096 WARNING nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.180 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.181 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.192 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.192 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.193 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.193 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.195 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.195 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.195 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.196 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.196 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.196 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.199 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 12:05:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:05:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2839384658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.710 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.746 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:22 np0005535469 nova_compute[254092]: 2025-11-25 17:05:22.752 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:05:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883095188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.195 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.197 254096 DEBUG nova.virt.libvirt.vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2108522710',display_name='tempest-TestNetworkBasicOps-server-2108522710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2108522710',id=122,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvG1eO5jPF+SV0Ao2r1oFJ5d8qXkQHoty8TB6rNrtbLrWvCsgRx2hMEMQlhuNosicoMv5mD+tjKe9vDZEtzlGPJcyIy9mr2/vCsJq0iL2bTTiQYg0Y14H1/7blYpbPNoQ==',key_name='tempest-TestNetworkBasicOps-1506231644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5q8f6jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:17Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=b747b045-786f-49a8-907c-cc222a07fa05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.198 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.199 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.200 254096 DEBUG nova.objects.instance [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid b747b045-786f-49a8-907c-cc222a07fa05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.234 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <uuid>b747b045-786f-49a8-907c-cc222a07fa05</uuid>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <name>instance-0000007a</name>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-2108522710</nova:name>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:05:22</nova:creationTime>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <nova:port uuid="4f707331-3a0e-47b6-98ee-569db81bd594">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <entry name="serial">b747b045-786f-49a8-907c-cc222a07fa05</entry>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <entry name="uuid">b747b045-786f-49a8-907c-cc222a07fa05</entry>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b747b045-786f-49a8-907c-cc222a07fa05_disk">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/b747b045-786f-49a8-907c-cc222a07fa05_disk.config">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:15:5c:fc"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <target dev="tap4f707331-3a"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/console.log" append="off"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:05:23 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:05:23 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:05:23 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:05:23 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.237 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Preparing to wait for external event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.237 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.238 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.238 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.239 254096 DEBUG nova.virt.libvirt.vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2108522710',display_name='tempest-TestNetworkBasicOps-server-2108522710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2108522710',id=122,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvG1eO5jPF+SV0Ao2r1oFJ5d8qXkQHoty8TB6rNrtbLrWvCsgRx2hMEMQlhuNosicoMv5mD+tjKe9vDZEtzlGPJcyIy9mr2/vCsJq0iL2bTTiQYg0Y14H1/7blYpbPNoQ==',key_name='tempest-TestNetworkBasicOps-1506231644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5q8f6jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:17Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=b747b045-786f-49a8-907c-cc222a07fa05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.239 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.240 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.240 254096 DEBUG os_vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.242 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.242 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.246 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f707331-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.246 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f707331-3a, col_values=(('external_ids', {'iface-id': '4f707331-3a0e-47b6-98ee-569db81bd594', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:5c:fc', 'vm-uuid': 'b747b045-786f-49a8-907c-cc222a07fa05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:23 np0005535469 NetworkManager[48891]: <info>  [1764090323.2770] manager: (tap4f707331-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.286 254096 INFO os_vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a')#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.353 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.354 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.354 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:15:5c:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.356 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Using config drive#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.395 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.676 254096 DEBUG nova.network.neutron [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updated VIF entry in instance network info cache for port 4f707331-3a0e-47b6-98ee-569db81bd594. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.676 254096 DEBUG nova.network.neutron [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updating instance_info_cache with network_info: [{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.696 254096 DEBUG oslo_concurrency.lockutils [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.706 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Creating config drive at /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.719 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpainul2bu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.868 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpainul2bu" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.895 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:23 np0005535469 nova_compute[254092]: 2025-11-25 17:05:23.900 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config b747b045-786f-49a8-907c-cc222a07fa05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.082 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config b747b045-786f-49a8-907c-cc222a07fa05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.084 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deleting local config drive /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config because it was imported into RBD.#033[00m
Nov 25 12:05:24 np0005535469 kernel: tap4f707331-3a: entered promiscuous mode
Nov 25 12:05:24 np0005535469 NetworkManager[48891]: <info>  [1764090324.1557] manager: (tap4f707331-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Nov 25 12:05:24 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:24Z|01261|binding|INFO|Claiming lport 4f707331-3a0e-47b6-98ee-569db81bd594 for this chassis.
Nov 25 12:05:24 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:24Z|01262|binding|INFO|4f707331-3a0e-47b6-98ee-569db81bd594: Claiming fa:16:3e:15:5c:fc 10.100.0.26
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.165 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:5c:fc 10.100.0.26'], port_security=['fa:16:3e:15:5c:fc 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'b747b045-786f-49a8-907c-cc222a07fa05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30729089-f7ac-4b8e-887e-74cce32287f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4f707331-3a0e-47b6-98ee-569db81bd594) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.167 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4f707331-3a0e-47b6-98ee-569db81bd594 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f bound to our chassis#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.168 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f#033[00m
Nov 25 12:05:24 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:24Z|01263|binding|INFO|Setting lport 4f707331-3a0e-47b6-98ee-569db81bd594 ovn-installed in OVS
Nov 25 12:05:24 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:24Z|01264|binding|INFO|Setting lport 4f707331-3a0e-47b6-98ee-569db81bd594 up in Southbound
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5370a3c1-696b-433b-b44a-9712353c7178]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:24 np0005535469 systemd-machined[216343]: New machine qemu-154-instance-0000007a.
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:24 np0005535469 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Nov 25 12:05:24 np0005535469 systemd-udevd[388263]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:05:24 np0005535469 NetworkManager[48891]: <info>  [1764090324.2212] device (tap4f707331-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:05:24 np0005535469 NetworkManager[48891]: <info>  [1764090324.2223] device (tap4f707331-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.222 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7d9726-3d17-4dab-8fc6-294228335cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.225 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4deecb15-9d99-40a4-8ceb-0f5eebee6e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.261 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[af568685-102b-45da-86cb-51cc2b5d00ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.280 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef0c7ec-0bd3-48eb-843b-b23756d7f276]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388273, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.294 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[472f53fc-8720-4472-8cdd-558354aee892]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676551, 'tstamp': 676551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388275, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676554, 'tstamp': 676554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388275, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.296 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.298 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.299 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a48b006-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.299 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.300 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a48b006-a0, col_values=(('external_ids', {'iface-id': '85d5b09a-dc15-4154-acec-abe7a2e5fc19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.300 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.384 254096 DEBUG nova.compute.manager [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.384 254096 DEBUG oslo_concurrency.lockutils [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.385 254096 DEBUG oslo_concurrency.lockutils [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.385 254096 DEBUG oslo_concurrency.lockutils [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.385 254096 DEBUG nova.compute.manager [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Processing event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:05:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.985 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090324.9847481, b747b045-786f-49a8-907c-cc222a07fa05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.985 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Started (Lifecycle Event)#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.988 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.991 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.994 254096 INFO nova.virt.libvirt.driver [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance spawned successfully.#033[00m
Nov 25 12:05:24 np0005535469 nova_compute[254092]: 2025-11-25 17:05:24.994 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.004 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.010 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.015 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.016 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.016 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.017 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.017 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.018 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.042 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.043 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090324.984854, b747b045-786f-49a8-907c-cc222a07fa05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.043 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.058 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.061 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090324.9915025, b747b045-786f-49a8-907c-cc222a07fa05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.061 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.077 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.108 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.121 254096 INFO nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.122 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.172 254096 INFO nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 8.14 seconds to build instance.#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.190 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:25 np0005535469 nova_compute[254092]: 2025-11-25 17:05:25.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 12:05:26 np0005535469 nova_compute[254092]: 2025-11-25 17:05:26.459 254096 DEBUG nova.compute.manager [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:26 np0005535469 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG oslo_concurrency.lockutils [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:26 np0005535469 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG oslo_concurrency.lockutils [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:26 np0005535469 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG oslo_concurrency.lockutils [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:26 np0005535469 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG nova.compute.manager [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] No waiting events found dispatching network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:05:26 np0005535469 nova_compute[254092]: 2025-11-25 17:05:26.461 254096 WARNING nova.compute.manager [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received unexpected event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:05:27 np0005535469 nova_compute[254092]: 2025-11-25 17:05:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.830 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.830 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.846 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.920 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.920 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.927 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:05:28 np0005535469 nova_compute[254092]: 2025-11-25 17:05:28.927 254096 INFO nova.compute.claims [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.107 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:05:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761654020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.567 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.576 254096 DEBUG nova.compute.provider_tree [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.592 254096 DEBUG nova.scheduler.client.report [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.614 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.616 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.669 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.670 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.689 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.706 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.801 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.804 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.805 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Creating image(s)#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.843 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.882 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.930 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:29 np0005535469 nova_compute[254092]: 2025-11-25 17:05:29.936 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.048 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.049 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.050 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.051 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.081 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.087 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cbf5c589-9701-44c9-9600-739675853610_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.409 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cbf5c589-9701-44c9-9600-739675853610_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.489 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image cbf5c589-9701-44c9-9600-739675853610_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.528 254096 DEBUG nova.policy [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.579 254096 DEBUG nova.objects.instance [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid cbf5c589-9701-44c9-9600-739675853610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.589 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.589 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Ensure instance console log exists: /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.589 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.590 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:30 np0005535469 nova_compute[254092]: 2025-11-25 17:05:30.590 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 197 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 80 op/s
Nov 25 12:05:32 np0005535469 nova_compute[254092]: 2025-11-25 17:05:32.590 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Successfully created port: 332ae922-3280-48c2-8889-d1ab181a43db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.817 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Successfully updated port: 332ae922-3280-48c2-8889-d1ab181a43db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.832 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.833 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.833 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:05:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.934 254096 DEBUG nova.compute.manager [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-changed-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.934 254096 DEBUG nova.compute.manager [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing instance network info cache due to event network-changed-332ae922-3280-48c2-8889-d1ab181a43db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:05:33 np0005535469 nova_compute[254092]: 2025-11-25 17:05:33.934 254096 DEBUG oslo_concurrency.lockutils [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:34 np0005535469 nova_compute[254092]: 2025-11-25 17:05:34.285 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:05:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 197 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Nov 25 12:05:34 np0005535469 nova_compute[254092]: 2025-11-25 17:05:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.564 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.582 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.582 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance network_info: |[{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.582 254096 DEBUG oslo_concurrency.lockutils [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.583 254096 DEBUG nova.network.neutron [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.585 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start _get_guest_xml network_info=[{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.589 254096 WARNING nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.593 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.594 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.600 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.600 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.601 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.601 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.601 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.604 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:05:35 np0005535469 nova_compute[254092]: 2025-11-25 17:05:35.606 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:05:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877801198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.058 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.084 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.088 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:36 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 12:05:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:05:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:05:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040764990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.609 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.611 254096 DEBUG nova.virt.libvirt.vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=123,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-jw45rb9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:29Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=cbf5c589-9701-44c9-9600-739675853610,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.612 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.613 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.614 254096 DEBUG nova.objects.instance [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid cbf5c589-9701-44c9-9600-739675853610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.626 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <uuid>cbf5c589-9701-44c9-9600-739675853610</uuid>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <name>instance-0000007b</name>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728</nova:name>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:05:35</nova:creationTime>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <nova:port uuid="332ae922-3280-48c2-8889-d1ab181a43db">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <entry name="serial">cbf5c589-9701-44c9-9600-739675853610</entry>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <entry name="uuid">cbf5c589-9701-44c9-9600-739675853610</entry>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/cbf5c589-9701-44c9-9600-739675853610_disk">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/cbf5c589-9701-44c9-9600-739675853610_disk.config">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:98:d4:72"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <target dev="tap332ae922-32"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/console.log" append="off"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:05:36 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:05:36 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:05:36 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:05:36 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.628 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Preparing to wait for external event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.628 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.629 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.630 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.632 254096 DEBUG nova.virt.libvirt.vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=123,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-jw45rb9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:29Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=cbf5c589-9701-44c9-9600-739675853610,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.633 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.635 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.636 254096 DEBUG os_vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.638 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.640 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.645 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap332ae922-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.647 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap332ae922-32, col_values=(('external_ids', {'iface-id': '332ae922-3280-48c2-8889-d1ab181a43db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:d4:72', 'vm-uuid': 'cbf5c589-9701-44c9-9600-739675853610'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:36 np0005535469 NetworkManager[48891]: <info>  [1764090336.6507] manager: (tap332ae922-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.659 254096 INFO os_vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32')#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.845 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.846 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.846 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:98:d4:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.846 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Using config drive#033[00m
Nov 25 12:05:36 np0005535469 nova_compute[254092]: 2025-11-25 17:05:36.864 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.053 254096 DEBUG nova.network.neutron [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updated VIF entry in instance network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.054 254096 DEBUG nova.network.neutron [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.066 254096 DEBUG oslo_concurrency.lockutils [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:37Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:5c:fc 10.100.0.26
Nov 25 12:05:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:37Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:5c:fc 10.100.0.26
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.490 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Creating config drive at /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.495 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8jtavxj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.554 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.554 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.646 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8jtavxj" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.670 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.676 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config cbf5c589-9701-44c9-9600-739675853610_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.959 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config cbf5c589-9701-44c9-9600-739675853610_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:37 np0005535469 nova_compute[254092]: 2025-11-25 17:05:37.961 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Deleting local config drive /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config because it was imported into RBD.#033[00m
Nov 25 12:05:38 np0005535469 kernel: tap332ae922-32: entered promiscuous mode
Nov 25 12:05:38 np0005535469 NetworkManager[48891]: <info>  [1764090338.0199] manager: (tap332ae922-32): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Nov 25 12:05:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:38Z|01265|binding|INFO|Claiming lport 332ae922-3280-48c2-8889-d1ab181a43db for this chassis.
Nov 25 12:05:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:38Z|01266|binding|INFO|332ae922-3280-48c2-8889-d1ab181a43db: Claiming fa:16:3e:98:d4:72 10.100.0.8
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:05:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201702930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:05:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:38Z|01267|binding|INFO|Setting lport 332ae922-3280-48c2-8889-d1ab181a43db ovn-installed in OVS
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:38 np0005535469 systemd-udevd[388664]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.067 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:38 np0005535469 systemd-machined[216343]: New machine qemu-155-instance-0000007b.
Nov 25 12:05:38 np0005535469 NetworkManager[48891]: <info>  [1764090338.0804] device (tap332ae922-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:05:38 np0005535469 NetworkManager[48891]: <info>  [1764090338.0814] device (tap332ae922-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:05:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:38Z|01268|binding|INFO|Setting lport 332ae922-3280-48c2-8889-d1ab181a43db up in Southbound
Nov 25 12:05:38 np0005535469 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.083 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:d4:72 10.100.0.8'], port_security=['fa:16:3e:98:d4:72 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cbf5c589-9701-44c9-9600-739675853610', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2db36ef1-2db7-4ccb-b8a5-63a9a57f3dde 7a710be2-f756-4f31-8d8e-270c10735b5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=332ae922-3280-48c2-8889-d1ab181a43db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.084 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 332ae922-3280-48c2-8889-d1ab181a43db in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 bound to our chassis#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.085 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf9312b9-f4d2-496f-a143-7586e12fbee3#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec33aad-2028-41d7-af10-d92975701337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.097 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf9312b9-f1 in ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.099 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf9312b9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efb71741-9c0a-43e3-a854-75e9c165370b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9282fcc9-91f1-4cf8-99a6-eb2af19fa2f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.117 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[54ef51bb-6be0-4ee2-a221-e5b5e8b8fa9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.145 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8802ae8-74a1-4349-83ea-651b1b63521d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.184 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f08f29-6a21-48ca-88c5-b3397ec4f1cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 NetworkManager[48891]: <info>  [1764090338.1905] manager: (tapbf9312b9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[546f129c-14ef-4f1e-9995-712f9e951ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.210 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.211 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.218 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.219 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.225 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.226 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.230 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a2507c12-6ff4-49a9-b737-abe36aefc7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.234 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7a48d2e9-2fe8-4663-bfda-4680b38300f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 NetworkManager[48891]: <info>  [1764090338.2665] device (tapbf9312b9-f0): carrier: link connected
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.271 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f18db40f-6b18-4da7-b692-0eb3030c0d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.289 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d236f983-5ce1-4824-a78a-81258fc2ee60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 39201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388697, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.303 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1f5c70-c55c-47f7-8caa-4814789f6b2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:2fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679585, 'tstamp': 679585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388712, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.327 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c05c5ecc-2711-4867-aaf0-7e8205baf646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 39201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388715, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b2757-3306-42cc-957a-27873f75f805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.437 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3872aafc-97b7-4f6a-aa1c-050bb6336b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.439 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.439 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.439 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9312b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:38 np0005535469 NetworkManager[48891]: <info>  [1764090338.4420] manager: (tapbf9312b9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:38 np0005535469 kernel: tapbf9312b9-f0: entered promiscuous mode
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.445 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf9312b9-f0, col_values=(('external_ids', {'iface-id': 'c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:05:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:38Z|01269|binding|INFO|Releasing lport c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35 from this chassis (sb_readonly=0)
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.463 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf9312b9-f4d2-496f-a143-7586e12fbee3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf9312b9-f4d2-496f-a143-7586e12fbee3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.464 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8946d8-5f8c-4b7a-86ba-c5d24ef2ebaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.464 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/bf9312b9-f4d2-496f-a143-7586e12fbee3.pid.haproxy
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:05:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.465 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'env', 'PROCESS_TAG=haproxy-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf9312b9-f4d2-496f-a143-7586e12fbee3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.492 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.494 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3269MB free_disk=59.90095520019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.494 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.495 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.528 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090338.526368, cbf5c589-9701-44c9-9600-739675853610 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.528 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Started (Lifecycle Event)#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.551 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.554 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090338.5277677, cbf5c589-9701-44c9-9600-739675853610 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.555 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.574 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.579 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.601 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.611 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.611 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b747b045-786f-49a8-907c-cc222a07fa05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.611 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cbf5c589-9701-44c9-9600-739675853610 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.612 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.612 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.629 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.654 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.655 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.682 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.711 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.717 254096 DEBUG nova.compute.manager [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.718 254096 DEBUG oslo_concurrency.lockutils [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.718 254096 DEBUG oslo_concurrency.lockutils [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.719 254096 DEBUG oslo_concurrency.lockutils [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.719 254096 DEBUG nova.compute.manager [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Processing event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.719 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.737 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090338.729908, cbf5c589-9701-44c9-9600-739675853610 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.737 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.746 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.753 254096 INFO nova.virt.libvirt.driver [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance spawned successfully.#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.753 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.779 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.784 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.787 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.788 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.788 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.788 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.789 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.789 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.833 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:05:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.866 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.888 254096 INFO nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 9.09 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.889 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:05:38 np0005535469 podman[388871]: 2025-11-25 17:05:38.893858676 +0000 UTC m=+0.059776651 container create 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:05:38 np0005535469 systemd[1]: Started libpod-conmon-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948.scope.
Nov 25 12:05:38 np0005535469 podman[388871]: 2025-11-25 17:05:38.857288553 +0000 UTC m=+0.023206428 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.958 254096 INFO nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 10.07 seconds to build instance.#033[00m
Nov 25 12:05:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:38 np0005535469 nova_compute[254092]: 2025-11-25 17:05:38.983 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a68caf33163305d309df46a027dcb2a4dc5df4d4d0497aef68c09c791db936/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:39 np0005535469 podman[388871]: 2025-11-25 17:05:39.008276375 +0000 UTC m=+0.174194260 container init 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 12:05:39 np0005535469 podman[388871]: 2025-11-25 17:05:39.016311364 +0000 UTC m=+0.182229219 container start 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:05:39 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : New worker (388931) forked
Nov 25 12:05:39 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : Loading success.
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783463155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:05:39 np0005535469 nova_compute[254092]: 2025-11-25 17:05:39.334 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:05:39 np0005535469 nova_compute[254092]: 2025-11-25 17:05:39.341 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:05:39 np0005535469 nova_compute[254092]: 2025-11-25 17:05:39.367 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:05:39 np0005535469 nova_compute[254092]: 2025-11-25 17:05:39.401 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:05:39 np0005535469 nova_compute[254092]: 2025-11-25 17:05:39.402 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 86612fb8-28c9-4bb6-a942-6dafa3caa0f5 does not exist
Nov 25 12:05:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 64a2cb5d-e142-429c-8a06-3c2aa06d51e1 does not exist
Nov 25 12:05:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e0cddf10-368b-432d-866b-215efa82304e does not exist
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:05:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:05:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:05:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:05:40
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms']
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.364 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 224 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:05:40 np0005535469 podman[389214]: 2025-11-25 17:05:40.54815469 +0000 UTC m=+0.069988721 container create 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:05:40 np0005535469 podman[389214]: 2025-11-25 17:05:40.509371296 +0000 UTC m=+0.031205357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:05:40 np0005535469 systemd[1]: Started libpod-conmon-162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce.scope.
Nov 25 12:05:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.697 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.697 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.698 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.698 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:05:40 np0005535469 podman[389214]: 2025-11-25 17:05:40.786495757 +0000 UTC m=+0.308329828 container init 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:05:40 np0005535469 podman[389214]: 2025-11-25 17:05:40.796687247 +0000 UTC m=+0.318521278 container start 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.798 254096 DEBUG nova.compute.manager [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.798 254096 DEBUG oslo_concurrency.lockutils [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.799 254096 DEBUG oslo_concurrency.lockutils [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.799 254096 DEBUG oslo_concurrency.lockutils [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.800 254096 DEBUG nova.compute.manager [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] No waiting events found dispatching network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:05:40 np0005535469 nova_compute[254092]: 2025-11-25 17:05:40.800 254096 WARNING nova.compute.manager [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received unexpected event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db for instance with vm_state active and task_state None.#033[00m
Nov 25 12:05:40 np0005535469 festive_brahmagupta[389230]: 167 167
Nov 25 12:05:40 np0005535469 systemd[1]: libpod-162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce.scope: Deactivated successfully.
Nov 25 12:05:40 np0005535469 podman[389214]: 2025-11-25 17:05:40.939509714 +0000 UTC m=+0.461343775 container attach 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:05:40 np0005535469 podman[389214]: 2025-11-25 17:05:40.940858191 +0000 UTC m=+0.462692222 container died 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 12:05:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dc29dfffff1a59ff1c6c7c5dcf49070f888c03b3c13b67415771157dc8d5888d-merged.mount: Deactivated successfully.
Nov 25 12:05:41 np0005535469 podman[389214]: 2025-11-25 17:05:41.39328621 +0000 UTC m=+0.915120251 container remove 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 12:05:41 np0005535469 systemd[1]: libpod-conmon-162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce.scope: Deactivated successfully.
Nov 25 12:05:41 np0005535469 nova_compute[254092]: 2025-11-25 17:05:41.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:41 np0005535469 podman[389256]: 2025-11-25 17:05:41.597779189 +0000 UTC m=+0.023582417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:05:41 np0005535469 podman[389256]: 2025-11-25 17:05:41.713545104 +0000 UTC m=+0.139348282 container create 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:05:41 np0005535469 systemd[1]: Started libpod-conmon-223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36.scope.
Nov 25 12:05:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:42 np0005535469 podman[389256]: 2025-11-25 17:05:42.035206747 +0000 UTC m=+0.461009915 container init 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:05:42 np0005535469 podman[389256]: 2025-11-25 17:05:42.046277081 +0000 UTC m=+0.472080259 container start 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:05:42 np0005535469 podman[389256]: 2025-11-25 17:05:42.12539065 +0000 UTC m=+0.551193828 container attach 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:05:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 204 op/s
Nov 25 12:05:43 np0005535469 infallible_faraday[389273]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:05:43 np0005535469 infallible_faraday[389273]: --> relative data size: 1.0
Nov 25 12:05:43 np0005535469 infallible_faraday[389273]: --> All data devices are unavailable
Nov 25 12:05:43 np0005535469 systemd[1]: libpod-223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36.scope: Deactivated successfully.
Nov 25 12:05:43 np0005535469 podman[389256]: 2025-11-25 17:05:43.086963445 +0000 UTC m=+1.512766623 container died 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:05:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e-merged.mount: Deactivated successfully.
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.532 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.573 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.573 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.652 254096 DEBUG nova.compute.manager [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-changed-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.653 254096 DEBUG nova.compute.manager [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing instance network info cache due to event network-changed-332ae922-3280-48c2-8889-d1ab181a43db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.653 254096 DEBUG oslo_concurrency.lockutils [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.654 254096 DEBUG oslo_concurrency.lockutils [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:43 np0005535469 nova_compute[254092]: 2025-11-25 17:05:43.654 254096 DEBUG nova.network.neutron [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:05:43 np0005535469 podman[389256]: 2025-11-25 17:05:43.662606354 +0000 UTC m=+2.088409532 container remove 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:05:43 np0005535469 systemd[1]: libpod-conmon-223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36.scope: Deactivated successfully.
Nov 25 12:05:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:44 np0005535469 nova_compute[254092]: 2025-11-25 17:05:44.140 254096 DEBUG nova.compute.manager [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:05:44 np0005535469 nova_compute[254092]: 2025-11-25 17:05:44.140 254096 DEBUG nova.compute.manager [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:05:44 np0005535469 nova_compute[254092]: 2025-11-25 17:05:44.140 254096 DEBUG oslo_concurrency.lockutils [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:05:44 np0005535469 nova_compute[254092]: 2025-11-25 17:05:44.141 254096 DEBUG oslo_concurrency.lockutils [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:05:44 np0005535469 nova_compute[254092]: 2025-11-25 17:05:44.141 254096 DEBUG nova.network.neutron [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:05:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 161 op/s
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.398750265 +0000 UTC m=+0.073844767 container create 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.349210106 +0000 UTC m=+0.024304628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:05:44 np0005535469 systemd[1]: Started libpod-conmon-21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f.scope.
Nov 25 12:05:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.521284566 +0000 UTC m=+0.196379088 container init 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.529040628 +0000 UTC m=+0.204135140 container start 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.53309679 +0000 UTC m=+0.208191312 container attach 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:05:44 np0005535469 wizardly_shockley[389474]: 167 167
Nov 25 12:05:44 np0005535469 systemd[1]: libpod-21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f.scope: Deactivated successfully.
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.536503463 +0000 UTC m=+0.211598005 container died 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:05:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4b14ccb44408ba7c42b964885bd0dd9935d81a115c299bb21d86727c3036869d-merged.mount: Deactivated successfully.
Nov 25 12:05:44 np0005535469 podman[389457]: 2025-11-25 17:05:44.586135975 +0000 UTC m=+0.261230517 container remove 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:05:44 np0005535469 systemd[1]: libpod-conmon-21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f.scope: Deactivated successfully.
Nov 25 12:05:44 np0005535469 podman[389497]: 2025-11-25 17:05:44.826811996 +0000 UTC m=+0.064356186 container create 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:05:44 np0005535469 systemd[1]: Started libpod-conmon-6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0.scope.
Nov 25 12:05:44 np0005535469 podman[389497]: 2025-11-25 17:05:44.803311231 +0000 UTC m=+0.040855421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:05:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:44 np0005535469 podman[389497]: 2025-11-25 17:05:44.920285929 +0000 UTC m=+0.157830159 container init 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:05:44 np0005535469 podman[389497]: 2025-11-25 17:05:44.929874012 +0000 UTC m=+0.167418192 container start 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:05:44 np0005535469 podman[389497]: 2025-11-25 17:05:44.933452321 +0000 UTC m=+0.170996511 container attach 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.159 254096 DEBUG nova.network.neutron [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updated VIF entry in instance network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.161 254096 DEBUG nova.network.neutron [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.189 254096 DEBUG oslo_concurrency.lockutils [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.527 254096 DEBUG nova.network.neutron [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.527 254096 DEBUG nova.network.neutron [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.539 254096 DEBUG oslo_concurrency.lockutils [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:05:45 np0005535469 nova_compute[254092]: 2025-11-25 17:05:45.569 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:05:45 np0005535469 zen_bassi[389513]: {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:    "0": [
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:        {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "devices": [
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "/dev/loop3"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            ],
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_name": "ceph_lv0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_size": "21470642176",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "name": "ceph_lv0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "tags": {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cluster_name": "ceph",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.crush_device_class": "",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.encrypted": "0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osd_id": "0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.type": "block",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.vdo": "0"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            },
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "type": "block",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "vg_name": "ceph_vg0"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:        }
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:    ],
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:    "1": [
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:        {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "devices": [
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "/dev/loop4"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            ],
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_name": "ceph_lv1",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_size": "21470642176",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "name": "ceph_lv1",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "tags": {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cluster_name": "ceph",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.crush_device_class": "",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.encrypted": "0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osd_id": "1",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.type": "block",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.vdo": "0"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            },
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "type": "block",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "vg_name": "ceph_vg1"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:        }
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:    ],
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:    "2": [
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:        {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "devices": [
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "/dev/loop5"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            ],
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_name": "ceph_lv2",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_size": "21470642176",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "name": "ceph_lv2",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "tags": {
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.cluster_name": "ceph",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.crush_device_class": "",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.encrypted": "0",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osd_id": "2",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.type": "block",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:                "ceph.vdo": "0"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            },
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "type": "block",
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:            "vg_name": "ceph_vg2"
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:        }
Nov 25 12:05:45 np0005535469 zen_bassi[389513]:    ]
Nov 25 12:05:45 np0005535469 zen_bassi[389513]: }
Nov 25 12:05:45 np0005535469 systemd[1]: libpod-6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0.scope: Deactivated successfully.
Nov 25 12:05:45 np0005535469 podman[389497]: 2025-11-25 17:05:45.742936383 +0000 UTC m=+0.980480583 container died 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:05:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86-merged.mount: Deactivated successfully.
Nov 25 12:05:45 np0005535469 podman[389497]: 2025-11-25 17:05:45.806380693 +0000 UTC m=+1.043924873 container remove 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:05:45 np0005535469 systemd[1]: libpod-conmon-6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0.scope: Deactivated successfully.
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.39324424 +0000 UTC m=+0.036784690 container create 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 25 12:05:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 166 op/s
Nov 25 12:05:46 np0005535469 systemd[1]: Started libpod-conmon-728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03.scope.
Nov 25 12:05:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.473055749 +0000 UTC m=+0.116596219 container init 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.375217135 +0000 UTC m=+0.018757605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.479057913 +0000 UTC m=+0.122598363 container start 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.481577652 +0000 UTC m=+0.125118102 container attach 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:05:46 np0005535469 sweet_chaum[389688]: 167 167
Nov 25 12:05:46 np0005535469 systemd[1]: libpod-728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03.scope: Deactivated successfully.
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.484498112 +0000 UTC m=+0.128038562 container died 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:05:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-87860ac5170e9f7e89fc930bb6d5a2a1f93fdcc24e2f8949a86c077f530f1d63-merged.mount: Deactivated successfully.
Nov 25 12:05:46 np0005535469 podman[389672]: 2025-11-25 17:05:46.516892741 +0000 UTC m=+0.160433191 container remove 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:05:46 np0005535469 systemd[1]: libpod-conmon-728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03.scope: Deactivated successfully.
Nov 25 12:05:46 np0005535469 nova_compute[254092]: 2025-11-25 17:05:46.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:46 np0005535469 podman[389710]: 2025-11-25 17:05:46.703220751 +0000 UTC m=+0.039744420 container create db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:05:46 np0005535469 systemd[1]: Started libpod-conmon-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope.
Nov 25 12:05:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:05:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:05:46 np0005535469 podman[389710]: 2025-11-25 17:05:46.775230697 +0000 UTC m=+0.111754406 container init db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:05:46 np0005535469 podman[389710]: 2025-11-25 17:05:46.78335547 +0000 UTC m=+0.119879139 container start db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:05:46 np0005535469 podman[389710]: 2025-11-25 17:05:46.68784237 +0000 UTC m=+0.024366059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:05:46 np0005535469 podman[389710]: 2025-11-25 17:05:46.78808535 +0000 UTC m=+0.124609069 container attach db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:05:46 np0005535469 podman[389728]: 2025-11-25 17:05:46.811917423 +0000 UTC m=+0.069540229 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 12:05:46 np0005535469 podman[389729]: 2025-11-25 17:05:46.843353885 +0000 UTC m=+0.090586475 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:05:46 np0005535469 podman[389725]: 2025-11-25 17:05:46.859316073 +0000 UTC m=+0.118164392 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]: {
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "osd_id": 1,
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "type": "bluestore"
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:    },
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "osd_id": 2,
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "type": "bluestore"
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:    },
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "osd_id": 0,
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:        "type": "bluestore"
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]:    }
Nov 25 12:05:47 np0005535469 zealous_antonelli[389730]: }
Nov 25 12:05:47 np0005535469 systemd[1]: libpod-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope: Deactivated successfully.
Nov 25 12:05:47 np0005535469 systemd[1]: libpod-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope: Consumed 1.013s CPU time.
Nov 25 12:05:47 np0005535469 podman[389710]: 2025-11-25 17:05:47.806254206 +0000 UTC m=+1.142777895 container died db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:05:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51-merged.mount: Deactivated successfully.
Nov 25 12:05:47 np0005535469 podman[389710]: 2025-11-25 17:05:47.86217323 +0000 UTC m=+1.198696899 container remove db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:05:47 np0005535469 systemd[1]: libpod-conmon-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope: Deactivated successfully.
Nov 25 12:05:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:05:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:05:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 67a44557-5f9f-4bb0-9907-9f536d2c66d5 does not exist
Nov 25 12:05:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6cb48162-5c50-4a24-978f-281f0c9dd401 does not exist
Nov 25 12:05:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 25 12:05:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:05:50 np0005535469 nova_compute[254092]: 2025-11-25 17:05:50.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 254 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 148 op/s
Nov 25 12:05:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:51Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:d4:72 10.100.0.8
Nov 25 12:05:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:05:51Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:d4:72 10.100.0.8
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020349662409650322 of space, bias 1.0, pg target 0.6104898722895097 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:05:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:05:51 np0005535469 nova_compute[254092]: 2025-11-25 17:05:51.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 260 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 111 op/s
Nov 25 12:05:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:05:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 260 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Nov 25 12:05:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:05:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951635910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:05:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:05:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951635910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:05:55 np0005535469 nova_compute[254092]: 2025-11-25 17:05:55.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:05:56 np0005535469 nova_compute[254092]: 2025-11-25 17:05:56.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:05:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:05:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:00 np0005535469 nova_compute[254092]: 2025-11-25 17:06:00.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:06:01 np0005535469 nova_compute[254092]: 2025-11-25 17:06:01.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.460 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.461 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.475 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.560 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.560 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.571 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.571 254096 INFO nova.compute.claims [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:06:03 np0005535469 nova_compute[254092]: 2025-11-25 17:06:03.728 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228265069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.248 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.256 254096 DEBUG nova.compute.provider_tree [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.270 254096 DEBUG nova.scheduler.client.report [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.290 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.291 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.332 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.333 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.351 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.363 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:06:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 677 KiB/s wr, 35 op/s
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.457 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.458 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.459 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Creating image(s)#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.495 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.531 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.564 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.569 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.621 254096 DEBUG nova.policy [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.680 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.681 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.682 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.682 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.712 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:04 np0005535469 nova_compute[254092]: 2025-11-25 17:06:04.717 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.048 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.099 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.194 254096 DEBUG nova.objects.instance [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid a4e18007-11e8-4531-9dc8-8cbc10fe2b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.207 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.208 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Ensure instance console log exists: /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.208 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.209 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.209 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:05 np0005535469 nova_compute[254092]: 2025-11-25 17:06:05.887 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Successfully created port: 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:06:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 316 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.697 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Successfully updated port: 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.719 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.719 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.719 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.817 254096 DEBUG nova.compute.manager [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.818 254096 DEBUG nova.compute.manager [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing instance network info cache due to event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.818 254096 DEBUG oslo_concurrency.lockutils [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:06 np0005535469 nova_compute[254092]: 2025-11-25 17:06:06.866 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.920 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.944 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.945 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance network_info: |[{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.946 254096 DEBUG oslo_concurrency.lockutils [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.946 254096 DEBUG nova.network.neutron [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.950 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start _get_guest_xml network_info=[{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.954 254096 WARNING nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.970 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.971 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.975 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.976 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.977 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.977 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.978 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.978 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.979 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.979 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.979 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.980 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.980 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.981 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.981 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.981 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:06:07 np0005535469 nova_compute[254092]: 2025-11-25 17:06:07.986 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 316 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 MiB/s wr, 77 op/s
Nov 25 12:06:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:06:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732536767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:06:08 np0005535469 nova_compute[254092]: 2025-11-25 17:06:08.488 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:08 np0005535469 nova_compute[254092]: 2025-11-25 17:06:08.513 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:08 np0005535469 nova_compute[254092]: 2025-11-25 17:06:08.555 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:06:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670394506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.014 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.018 254096 DEBUG nova.virt.libvirt.vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=124,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-nzwglgyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:04Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=a4e18007-11e8-4531-9dc8-8cbc10fe2b75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.019 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.021 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.023 254096 DEBUG nova.objects.instance [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4e18007-11e8-4531-9dc8-8cbc10fe2b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.040 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <uuid>a4e18007-11e8-4531-9dc8-8cbc10fe2b75</uuid>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <name>instance-0000007c</name>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693</nova:name>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:06:07</nova:creationTime>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <nova:port uuid="843d689b-6e0b-4ce9-9177-6c3cd41a19d6">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <entry name="serial">a4e18007-11e8-4531-9dc8-8cbc10fe2b75</entry>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <entry name="uuid">a4e18007-11e8-4531-9dc8-8cbc10fe2b75</entry>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:81:3c:32"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <target dev="tap843d689b-6e"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/console.log" append="off"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:06:09 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:06:09 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:06:09 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:06:09 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.042 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Preparing to wait for external event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.043 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.046 254096 DEBUG nova.virt.libvirt.vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=124,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-nzwglgyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:04Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=a4e18007-11e8-4531-9dc8-8cbc10fe2b75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.046 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.047 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.048 254096 DEBUG os_vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.050 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.051 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.057 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap843d689b-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.058 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap843d689b-6e, col_values=(('external_ids', {'iface-id': '843d689b-6e0b-4ce9-9177-6c3cd41a19d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:3c:32', 'vm-uuid': 'a4e18007-11e8-4531-9dc8-8cbc10fe2b75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 NetworkManager[48891]: <info>  [1764090369.0622] manager: (tap843d689b-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.070 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.072 254096 INFO os_vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e')#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.136 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.138 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.138 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:81:3c:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.139 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Using config drive#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.168 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.475 254096 DEBUG nova.network.neutron [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updated VIF entry in instance network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.476 254096 DEBUG nova.network.neutron [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.497 254096 DEBUG oslo_concurrency.lockutils [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.513 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Creating config drive at /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.518 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ur27q0a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.674 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ur27q0a" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.708 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.712 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.888 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.889 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deleting local config drive /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config because it was imported into RBD.#033[00m
Nov 25 12:06:09 np0005535469 kernel: tap843d689b-6e: entered promiscuous mode
Nov 25 12:06:09 np0005535469 NetworkManager[48891]: <info>  [1764090369.9521] manager: (tap843d689b-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Nov 25 12:06:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:09Z|01270|binding|INFO|Claiming lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for this chassis.
Nov 25 12:06:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:09Z|01271|binding|INFO|843d689b-6e0b-4ce9-9177-6c3cd41a19d6: Claiming fa:16:3e:81:3c:32 10.100.0.5
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.967 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:3c:32 10.100.0.5'], port_security=['fa:16:3e:81:3c:32 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a4e18007-11e8-4531-9dc8-8cbc10fe2b75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a710be2-f756-4f31-8d8e-270c10735b5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=843d689b-6e0b-4ce9-9177-6c3cd41a19d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.968 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 bound to our chassis#033[00m
Nov 25 12:06:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.969 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf9312b9-f4d2-496f-a143-7586e12fbee3#033[00m
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:09Z|01272|binding|INFO|Setting lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 ovn-installed in OVS
Nov 25 12:06:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:09Z|01273|binding|INFO|Setting lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 up in Southbound
Nov 25 12:06:09 np0005535469 nova_compute[254092]: 2025-11-25 17:06:09.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.986 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f6cb0c-595c-4944-9441-bd299f89fc3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:09 np0005535469 systemd-machined[216343]: New machine qemu-156-instance-0000007c.
Nov 25 12:06:10 np0005535469 systemd[1]: Started Virtual Machine qemu-156-instance-0000007c.
Nov 25 12:06:10 np0005535469 systemd-udevd[390213]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.015 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[81a2913e-1357-4e4f-ac68-e8142f68b61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.019 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[83553275-9fe6-41da-8198-b623c49e4d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:10 np0005535469 NetworkManager[48891]: <info>  [1764090370.0248] device (tap843d689b-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:06:10 np0005535469 NetworkManager[48891]: <info>  [1764090370.0263] device (tap843d689b-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.046 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2b5819-6249-4570-a7af-6248894d76bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94038631-53eb-4a07-8cf3-3ca1bf201b91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 39201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390221, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ba1dd3-2212-4532-9056-9162e8b09964]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679598, 'tstamp': 679598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390224, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679602, 'tstamp': 679602}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390224, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.091 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9312b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf9312b9-f0, col_values=(('external_ids', {'iface-id': 'c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.406 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090370.4056184, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Started (Lifecycle Event)#033[00m
Nov 25 12:06:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.436 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.440 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090370.4058151, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.441 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.464 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.468 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.495 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.703 254096 DEBUG nova.compute.manager [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.705 254096 DEBUG oslo_concurrency.lockutils [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.706 254096 DEBUG oslo_concurrency.lockutils [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.706 254096 DEBUG oslo_concurrency.lockutils [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.707 254096 DEBUG nova.compute.manager [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Processing event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.708 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.711 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090370.7108247, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.711 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.713 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.717 254096 INFO nova.virt.libvirt.driver [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance spawned successfully.#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.717 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.732 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.737 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.744 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.744 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.745 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.745 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.746 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.746 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.765 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.819 254096 INFO nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 6.36 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.820 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.879 254096 INFO nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 7.35 seconds to build instance.#033[00m
Nov 25 12:06:10 np0005535469 nova_compute[254092]: 2025-11-25 17:06:10.892 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Nov 25 12:06:12 np0005535469 nova_compute[254092]: 2025-11-25 17:06:12.798 254096 DEBUG nova.compute.manager [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:12 np0005535469 nova_compute[254092]: 2025-11-25 17:06:12.800 254096 DEBUG oslo_concurrency.lockutils [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:12 np0005535469 nova_compute[254092]: 2025-11-25 17:06:12.800 254096 DEBUG oslo_concurrency.lockutils [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:12 np0005535469 nova_compute[254092]: 2025-11-25 17:06:12.801 254096 DEBUG oslo_concurrency.lockutils [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:12 np0005535469 nova_compute[254092]: 2025-11-25 17:06:12.801 254096 DEBUG nova.compute.manager [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] No waiting events found dispatching network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:12 np0005535469 nova_compute[254092]: 2025-11-25 17:06:12.802 254096 WARNING nova.compute.manager [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received unexpected event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.643 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.644 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.770 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.771 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.959 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.960 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.960 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.961 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.962 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.964 254096 INFO nova.compute.manager [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Terminating instance#033[00m
Nov 25 12:06:13 np0005535469 nova_compute[254092]: 2025-11-25 17:06:13.965 254096 DEBUG nova.compute.manager [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:06:14 np0005535469 kernel: tap4f707331-3a (unregistering): left promiscuous mode
Nov 25 12:06:14 np0005535469 NetworkManager[48891]: <info>  [1764090374.0188] device (tap4f707331-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:14 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:14Z|01274|binding|INFO|Releasing lport 4f707331-3a0e-47b6-98ee-569db81bd594 from this chassis (sb_readonly=0)
Nov 25 12:06:14 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:14Z|01275|binding|INFO|Setting lport 4f707331-3a0e-47b6-98ee-569db81bd594 down in Southbound
Nov 25 12:06:14 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:14Z|01276|binding|INFO|Removing iface tap4f707331-3a ovn-installed in OVS
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.034 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:5c:fc 10.100.0.26'], port_security=['fa:16:3e:15:5c:fc 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'b747b045-786f-49a8-907c-cc222a07fa05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30729089-f7ac-4b8e-887e-74cce32287f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4f707331-3a0e-47b6-98ee-569db81bd594) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.035 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4f707331-3a0e-47b6-98ee-569db81bd594 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f unbound from our chassis#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.036 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.051 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b585b8f-4c34-4edf-958d-7d4733390bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.097 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ad3e31-32bb-41ee-9814-5752891adf86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.101 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b7071753-ff4b-4618-b4d0-52afd2d397ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:14 np0005535469 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 25 12:06:14 np0005535469 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 14.819s CPU time.
Nov 25 12:06:14 np0005535469 systemd-machined[216343]: Machine qemu-154-instance-0000007a terminated.
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.136 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e401413f-7ac2-4b48-88a2-625e78342ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.159 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea16f1ce-c59e-4208-8855-d3118c3e3c47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 922, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 922, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 600, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 600, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390279, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1732aa52-9cdc-423a-87c6-7f8b20f0fdee]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676551, 'tstamp': 676551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390280, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676554, 'tstamp': 676554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390280, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.186 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.199 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a48b006-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.200 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a48b006-a0, col_values=(('external_ids', {'iface-id': '85d5b09a-dc15-4154-acec-abe7a2e5fc19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.205 254096 INFO nova.virt.libvirt.driver [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance destroyed successfully.#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.206 254096 DEBUG nova.objects.instance [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid b747b045-786f-49a8-907c-cc222a07fa05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.219 254096 DEBUG nova.virt.libvirt.vif [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2108522710',display_name='tempest-TestNetworkBasicOps-server-2108522710',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2108522710',id=122,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvG1eO5jPF+SV0Ao2r1oFJ5d8qXkQHoty8TB6rNrtbLrWvCsgRx2hMEMQlhuNosicoMv5mD+tjKe9vDZEtzlGPJcyIy9mr2/vCsJq0iL2bTTiQYg0Y14H1/7blYpbPNoQ==',key_name='tempest-TestNetworkBasicOps-1506231644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:05:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5q8f6jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:05:25Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=b747b045-786f-49a8-907c-cc222a07fa05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.219 254096 DEBUG nova.network.os_vif_util [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.220 254096 DEBUG nova.network.os_vif_util [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.220 254096 DEBUG os_vif [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.221 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f707331-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.227 254096 INFO os_vif [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a')#033[00m
Nov 25 12:06:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.564 254096 INFO nova.virt.libvirt.driver [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deleting instance files /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05_del#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.566 254096 INFO nova.virt.libvirt.driver [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deletion of /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05_del complete#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.642 254096 INFO nova.compute.manager [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.643 254096 DEBUG oslo.service.loopingcall [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.645 254096 DEBUG nova.compute.manager [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.646 254096 DEBUG nova.network.neutron [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.906 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.907 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing instance network info cache due to event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.908 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.908 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:14 np0005535469 nova_compute[254092]: 2025-11-25 17:06:14.908 254096 DEBUG nova.network.neutron [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:06:15 np0005535469 nova_compute[254092]: 2025-11-25 17:06:15.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:15 np0005535469 nova_compute[254092]: 2025-11-25 17:06:15.418 254096 DEBUG nova.network.neutron [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:15 np0005535469 nova_compute[254092]: 2025-11-25 17:06:15.440 254096 INFO nova.compute.manager [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 0.79 seconds to deallocate network for instance.#033[00m
Nov 25 12:06:15 np0005535469 nova_compute[254092]: 2025-11-25 17:06:15.505 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:15 np0005535469 nova_compute[254092]: 2025-11-25 17:06:15.505 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:15 np0005535469 nova_compute[254092]: 2025-11-25 17:06:15.646 254096 DEBUG oslo_concurrency.processutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269446327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.108 254096 DEBUG oslo_concurrency.processutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.117 254096 DEBUG nova.compute.provider_tree [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.136 254096 DEBUG nova.scheduler.client.report [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.139 254096 DEBUG nova.network.neutron [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updated VIF entry in instance network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.140 254096 DEBUG nova.network.neutron [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.172 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-unplugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.179 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.179 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] No waiting events found dispatching network-vif-unplugged-4f707331-3a0e-47b6-98ee-569db81bd594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.179 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-unplugged-4f707331-3a0e-47b6-98ee-569db81bd594 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.181 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] No waiting events found dispatching network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.181 254096 WARNING nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received unexpected event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.194 254096 INFO nova.scheduler.client.report [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance b747b045-786f-49a8-907c-cc222a07fa05#033[00m
Nov 25 12:06:16 np0005535469 nova_compute[254092]: 2025-11-25 17:06:16.274 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.028 254096 DEBUG nova.compute.manager [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-deleted-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG nova.compute.manager [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG nova.compute.manager [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing instance network info cache due to event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG oslo_concurrency.lockutils [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG oslo_concurrency.lockutils [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.030 254096 DEBUG nova.network.neutron [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.424 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-d5b14553-424b-4985-9ed6-2f4afac92c00" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.424 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-d5b14553-424b-4985-9ed6-2f4afac92c00" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.442 254096 DEBUG nova.objects.instance [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.466 254096 DEBUG nova.virt.libvirt.vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.467 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.468 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.471 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.475 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.478 254096 DEBUG nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Attempting to detach device tapd5b14553-42 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.479 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:8b:9c:66"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <target dev="tapd5b14553-42"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </interface>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.492 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.495 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <name>instance-00000078</name>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:05:07</nova:creationTime>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:port uuid="d5b14553-424b-4985-9ed6-2f4afac92c00">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='tapa13b6cf4-60'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:8b:9c:66'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='tapd5b14553-42'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='net1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.499 254096 INFO nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tapd5b14553-42 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the persistent domain config.#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.499 254096 DEBUG nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] (1/8): Attempting to detach device tapd5b14553-42 with device alias net1 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.499 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <mac address="fa:16:3e:8b:9c:66"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <model type="virtio"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <mtu size="1442"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <target dev="tapd5b14553-42"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </interface>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 12:06:17 np0005535469 kernel: tapd5b14553-42 (unregistering): left promiscuous mode
Nov 25 12:06:17 np0005535469 NetworkManager[48891]: <info>  [1764090377.6074] device (tapd5b14553-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:06:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:17Z|01277|binding|INFO|Releasing lport d5b14553-424b-4985-9ed6-2f4afac92c00 from this chassis (sb_readonly=0)
Nov 25 12:06:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:17Z|01278|binding|INFO|Setting lport d5b14553-424b-4985-9ed6-2f4afac92c00 down in Southbound
Nov 25 12:06:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:17Z|01279|binding|INFO|Removing iface tapd5b14553-42 ovn-installed in OVS
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.621 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764090377.6206884, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.622 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9c:66 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d5b14553-424b-4985-9ed6-2f4afac92c00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.625 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d5b14553-424b-4985-9ed6-2f4afac92c00 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f unbound from our chassis#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.628 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c593889-1baa-42d8-b9c2-487abae5f4b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.630 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f namespace which is not needed anymore#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.628 254096 DEBUG nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Start waiting for the detach event from libvirt for device tapd5b14553-42 with device alias net1 for instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.628 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.629 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.633 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <name>instance-00000078</name>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:05:07</nova:creationTime>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:port uuid="d5b14553-424b-4985-9ed6-2f4afac92c00">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target dev='tapa13b6cf4-60'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.634 254096 INFO nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tapd5b14553-42 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the live domain config.#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.634 254096 DEBUG nova.virt.libvirt.vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.635 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.635 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.636 254096 DEBUG os_vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.637 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b14553-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.653 254096 INFO os_vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42')#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.653 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:06:17</nova:creationTime>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:06:17 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:17 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:06:17 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 12:06:17 np0005535469 podman[390334]: 2025-11-25 17:06:17.66652192 +0000 UTC m=+0.075399310 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 12:06:17 np0005535469 podman[390335]: 2025-11-25 17:06:17.691198727 +0000 UTC m=+0.099288904 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:06:17 np0005535469 podman[390336]: 2025-11-25 17:06:17.712046599 +0000 UTC m=+0.114731908 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:06:17 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : haproxy version is 2.8.14-c23fe91
Nov 25 12:06:17 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : path to executable is /usr/sbin/haproxy
Nov 25 12:06:17 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [WARNING]  (387842) : Exiting Master process...
Nov 25 12:06:17 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [WARNING]  (387842) : Exiting Master process...
Nov 25 12:06:17 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [ALERT]    (387842) : Current worker (387844) exited with code 143 (Terminated)
Nov 25 12:06:17 np0005535469 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [WARNING]  (387842) : All workers exited. Exiting... (0)
Nov 25 12:06:17 np0005535469 systemd[1]: libpod-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465.scope: Deactivated successfully.
Nov 25 12:06:17 np0005535469 podman[390413]: 2025-11-25 17:06:17.766941104 +0000 UTC m=+0.041181441 container died f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.775 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6b5bfab8528d115e346b339b56d12524bd34cf4549cb171cd97809410d53295b-merged.mount: Deactivated successfully.
Nov 25 12:06:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465-userdata-shm.mount: Deactivated successfully.
Nov 25 12:06:17 np0005535469 podman[390413]: 2025-11-25 17:06:17.806056827 +0000 UTC m=+0.080297204 container cleanup f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:06:17 np0005535469 systemd[1]: libpod-conmon-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465.scope: Deactivated successfully.
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.848 254096 DEBUG nova.compute.manager [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-unplugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.848 254096 DEBUG oslo_concurrency.lockutils [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 DEBUG oslo_concurrency.lockutils [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 DEBUG oslo_concurrency.lockutils [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 DEBUG nova.compute.manager [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-unplugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 WARNING nova.compute.manager [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-unplugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:06:17 np0005535469 podman[390441]: 2025-11-25 17:06:17.871182263 +0000 UTC m=+0.042730473 container remove f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.879 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3dc20f-31c0-4a9c-9fa9-89c9e98c4d05]: (4, ('Tue Nov 25 05:06:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f (f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465)\nf602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465\nTue Nov 25 05:06:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f (f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465)\nf602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.881 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[341ca183-55af-49e2-8fb5-75399515b804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.882 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:17 np0005535469 kernel: tap4a48b006-a0: left promiscuous mode
Nov 25 12:06:17 np0005535469 nova_compute[254092]: 2025-11-25 17:06:17.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.901 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[414c3260-be2b-43b4-a8da-c0bce12bcb08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef4c2d3-2011-4536-b6b7-e84240f31a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.918 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[919ba36f-754e-4e87-b1d7-907f435340cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.931 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbdea09-d530-4346-9a36-2db4a6cdd675]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676527, 'reachable_time': 43008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390456, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:17 np0005535469 systemd[1]: run-netns-ovnmeta\x2d4a48b006\x2da4d1\x2d4fa5\x2d88f1\x2d79386ed2958f.mount: Deactivated successfully.
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.935 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:06:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.935 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[7adedc87-a271-4b3c-8392-9021fe0f92d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 538 KiB/s wr, 113 op/s
Nov 25 12:06:18 np0005535469 nova_compute[254092]: 2025-11-25 17:06:18.736 254096 DEBUG nova.network.neutron [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updated VIF entry in instance network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:06:18 np0005535469 nova_compute[254092]: 2025-11-25 17:06:18.737 254096 DEBUG nova.network.neutron [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:18 np0005535469 nova_compute[254092]: 2025-11-25 17:06:18.750 254096 DEBUG oslo_concurrency.lockutils [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:18 np0005535469 nova_compute[254092]: 2025-11-25 17:06:18.810 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:18 np0005535469 nova_compute[254092]: 2025-11-25 17:06:18.811 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:18 np0005535469 nova_compute[254092]: 2025-11-25 17:06:18.811 254096 DEBUG nova.network.neutron [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:06:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.117 254096 DEBUG nova.compute.manager [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-deleted-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.117 254096 INFO nova.compute.manager [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Neutron deleted interface d5b14553-424b-4985-9ed6-2f4afac92c00; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.117 254096 DEBUG nova.network.neutron [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.140 254096 DEBUG nova.objects.instance [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.165 254096 DEBUG nova.objects.instance [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.184 254096 DEBUG nova.virt.libvirt.vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.185 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.185 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.189 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.194 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <name>instance-00000078</name>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:06:17</nova:creationTime>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target dev='tapa13b6cf4-60'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.194 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.202 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> not found in domain: <domain type='kvm' id='152'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <name>instance-00000078</name>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:06:17</nova:creationTime>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <memory unit='KiB'>131072</memory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <vcpu placement='static'>1</vcpu>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <resource>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <partition>/machine</partition>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </resource>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <sysinfo type='smbios'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='manufacturer'>RDO</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <entry name='family'>Virtual Machine</entry>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <boot dev='hd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <smbios mode='sysinfo'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <vmcoreinfo state='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <vendor>AMD</vendor>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='x2apic'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='hypervisor'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='stibp'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='ssbd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='overflow-recov'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='succor'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='ibrs'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='amd-ssbd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='lbrv'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pause-filter'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='xsaves'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='svm'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='require' name='topoext'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='npt'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <feature policy='disable' name='nrip-save'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <clock offset='utc'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <timer name='hpet' present='no'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <on_poweroff>destroy</on_poweroff>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <on_reboot>restart</on_reboot>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <on_crash>destroy</on_crash>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <disk type='network' device='disk'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target dev='vda' bus='virtio'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='virtio-disk0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <disk type='network' device='cdrom'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <auth username='openstack'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <host name='192.168.122.100' port='6789'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target dev='sda' bus='sata'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <readonly/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='sata0-0-0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pcie.0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='1' port='0x10'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='2' port='0x11'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='3' port='0x12'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='4' port='0x13'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='5' port='0x14'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='6' port='0x15'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='7' port='0x16'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='8' port='0x17'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.8'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='9' port='0x18'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.9'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='10' port='0x19'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.10'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='11' port='0x1a'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.11'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='12' port='0x1b'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.12'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='13' port='0x1c'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.13'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='14' port='0x1d'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.14'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='15' port='0x1e'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.15'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='16' port='0x1f'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.16'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='17' port='0x20'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.17'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='18' port='0x21'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.18'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='19' port='0x22'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.19'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='20' port='0x23'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.20'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='21' port='0x24'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.21'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='22' port='0x25'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.22'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='23' port='0x26'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.23'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='24' port='0x27'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.24'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-root-port'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target chassis='25' port='0x28'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.25'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model name='pcie-pci-bridge'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='pci.26'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='usb'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <controller type='sata' index='0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='ide'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </controller>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <interface type='ethernet'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target dev='tapa13b6cf4-60'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model type='virtio'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <mtu size='1442'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='net0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <serial type='pty'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target type='isa-serial' port='0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:        <model name='isa-serial'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      </target>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <source path='/dev/pts/0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <target type='serial' port='0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='serial0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </console>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <input type='tablet' bus='usb'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='input0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='usb' bus='0' port='1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <input type='mouse' bus='ps2'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='input1'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <input type='keyboard' bus='ps2'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='input2'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </input>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <listen type='address' address='::0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </graphics>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <audio id='1' type='none'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='video0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <watchdog model='itco' action='reset'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='watchdog0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </watchdog>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <memballoon model='virtio'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <stats period='10'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='balloon0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <rng model='virtio'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <backend model='random'>/dev/urandom</backend>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <alias name='rng0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <label>+107:+107</label>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <imagelabel>+107:+107</imagelabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </seclabel>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.202 254096 WARNING nova.virt.libvirt.driver [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Detaching interface fa:16:3e:8b:9c:66 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapd5b14553-42' not found.#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.203 254096 DEBUG nova.virt.libvirt.vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.204 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.204 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.205 254096 DEBUG os_vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.206 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b14553-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.206 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.209 254096 INFO os_vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42')#033[00m
Nov 25 12:06:19 np0005535469 nova_compute[254092]: 2025-11-25 17:06:19.209 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:creationTime>2025-11-25 17:06:19</nova:creationTime>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:flavor name="m1.nano">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:memory>128</nova:memory>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:disk>1</nova:disk>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:swap>0</nova:swap>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:flavor>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:owner>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:owner>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  <nova:ports>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 12:06:19 np0005535469 nova_compute[254092]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:    </nova:port>
Nov 25 12:06:19 np0005535469 nova_compute[254092]:  </nova:ports>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: </nova:instance>
Nov 25 12:06:19 np0005535469 nova_compute[254092]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.016 254096 DEBUG nova.compute.manager [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.016 254096 DEBUG oslo_concurrency.lockutils [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 DEBUG oslo_concurrency.lockutils [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 DEBUG oslo_concurrency.lockutils [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 DEBUG nova.compute.manager [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 WARNING nova.compute.manager [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:06:20 np0005535469 nova_compute[254092]: 2025-11-25 17:06:20.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 538 KiB/s wr, 113 op/s
Nov 25 12:06:21 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:21Z|01280|binding|INFO|Releasing lport c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35 from this chassis (sb_readonly=0)
Nov 25 12:06:21 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:21Z|01281|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 12:06:21 np0005535469 nova_compute[254092]: 2025-11-25 17:06:21.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:21 np0005535469 nova_compute[254092]: 2025-11-25 17:06:21.840 254096 INFO nova.network.neutron [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Port d5b14553-424b-4985-9ed6-2f4afac92c00 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 12:06:21 np0005535469 nova_compute[254092]: 2025-11-25 17:06:21.840 254096 DEBUG nova.network.neutron [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:21 np0005535469 nova_compute[254092]: 2025-11-25 17:06:21.854 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:21 np0005535469 nova_compute[254092]: 2025-11-25 17:06:21.877 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-d5b14553-424b-4985-9ed6-2f4afac92c00" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 101 op/s
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.786 254096 DEBUG nova.compute.manager [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG nova.compute.manager [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG oslo_concurrency.lockutils [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG oslo_concurrency.lockutils [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG nova.network.neutron [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.874 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.875 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.875 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.876 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.876 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.877 254096 INFO nova.compute.manager [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Terminating instance#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.879 254096 DEBUG nova.compute.manager [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:06:22 np0005535469 kernel: tapa13b6cf4-60 (unregistering): left promiscuous mode
Nov 25 12:06:22 np0005535469 NetworkManager[48891]: <info>  [1764090382.9501] device (tapa13b6cf4-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:22Z|01282|binding|INFO|Releasing lport a13b6cf4-602d-4af3-b369-9dfa273e1514 from this chassis (sb_readonly=0)
Nov 25 12:06:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:22Z|01283|binding|INFO|Setting lport a13b6cf4-602d-4af3-b369-9dfa273e1514 down in Southbound
Nov 25 12:06:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:22Z|01284|binding|INFO|Removing iface tapa13b6cf4-60 ovn-installed in OVS
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.980 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d6:7a 10.100.0.11'], port_security=['fa:16:3e:7c:d6:7a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e177dffd-fd87-489e-a59d-1d241fe7a148', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad26d7dc-a577-438b-b143-107c43340ab4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a13b6cf4-602d-4af3-b369-9dfa273e1514) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.981 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a13b6cf4-602d-4af3-b369-9dfa273e1514 in datapath 136c69a7-c4f8-40a2-be13-7ef82b7b3709 unbound from our chassis#033[00m
Nov 25 12:06:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.981 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 136c69a7-c4f8-40a2-be13-7ef82b7b3709, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:06:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3fc7ea-9b8e-4037-9b4f-6c41c94eef4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.984 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 namespace which is not needed anymore#033[00m
Nov 25 12:06:22 np0005535469 nova_compute[254092]: 2025-11-25 17:06:22.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:23 np0005535469 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 25 12:06:23 np0005535469 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Consumed 18.021s CPU time.
Nov 25 12:06:23 np0005535469 systemd-machined[216343]: Machine qemu-152-instance-00000078 terminated.
Nov 25 12:06:23 np0005535469 kernel: tapa13b6cf4-60: entered promiscuous mode
Nov 25 12:06:23 np0005535469 kernel: tapa13b6cf4-60 (unregistering): left promiscuous mode
Nov 25 12:06:23 np0005535469 NetworkManager[48891]: <info>  [1764090383.1012] manager: (tapa13b6cf4-60): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.122 254096 INFO nova.virt.libvirt.driver [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance destroyed successfully.#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.124 254096 DEBUG nova.objects.instance [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:23 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : haproxy version is 2.8.14-c23fe91
Nov 25 12:06:23 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : path to executable is /usr/sbin/haproxy
Nov 25 12:06:23 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [WARNING]  (387211) : Exiting Master process...
Nov 25 12:06:23 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [WARNING]  (387211) : Exiting Master process...
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.134 254096 DEBUG nova.virt.libvirt.vif [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:06:23 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [ALERT]    (387211) : Current worker (387217) exited with code 143 (Terminated)
Nov 25 12:06:23 np0005535469 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [WARNING]  (387211) : All workers exited. Exiting... (0)
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.134 254096 DEBUG nova.network.os_vif_util [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.135 254096 DEBUG nova.network.os_vif_util [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.135 254096 DEBUG os_vif [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:06:23 np0005535469 systemd[1]: libpod-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f.scope: Deactivated successfully.
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.139 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa13b6cf4-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.144 254096 INFO os_vif [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60')#033[00m
Nov 25 12:06:23 np0005535469 podman[390479]: 2025-11-25 17:06:23.145111697 +0000 UTC m=+0.056720186 container died be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:06:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f-userdata-shm.mount: Deactivated successfully.
Nov 25 12:06:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e1ef1abaaa64a067771ec5da28f54ab4615764922df7ecc3992312613eab4146-merged.mount: Deactivated successfully.
Nov 25 12:06:23 np0005535469 podman[390479]: 2025-11-25 17:06:23.18678168 +0000 UTC m=+0.098390169 container cleanup be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:06:23 np0005535469 systemd[1]: libpod-conmon-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f.scope: Deactivated successfully.
Nov 25 12:06:23 np0005535469 podman[390533]: 2025-11-25 17:06:23.252084651 +0000 UTC m=+0.044721108 container remove be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.258 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b89fec4-593f-46e7-9b73-d54aaad4c767]: (4, ('Tue Nov 25 05:06:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 (be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f)\nbe459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f\nTue Nov 25 05:06:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 (be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f)\nbe459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.260 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42344219-32b3-4050-9238-aca543198ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.260 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap136c69a7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:23 np0005535469 kernel: tap136c69a7-c0: left promiscuous mode
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.279 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[add525f8-0863-4032-a9d8-2d7445b0b790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[983af79e-fbbb-416b-96c5-46e014bac094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.301 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba889745-209c-4a7d-8397-6884e96ca091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.315 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9d961b-1e9a-4d2f-83d9-885f03979017]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673409, 'reachable_time': 31258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390548, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.318 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:06:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.318 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[567bd2e8-9915-427a-9f5c-c0a828fc6722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:23 np0005535469 systemd[1]: run-netns-ovnmeta\x2d136c69a7\x2dc4f8\x2d40a2\x2dbe13\x2d7ef82b7b3709.mount: Deactivated successfully.
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.520 254096 INFO nova.virt.libvirt.driver [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deleting instance files /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_del#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.521 254096 INFO nova.virt.libvirt.driver [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deletion of /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_del complete#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.601 254096 INFO nova.compute.manager [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.601 254096 DEBUG oslo.service.loopingcall [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.602 254096 DEBUG nova.compute.manager [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:06:23 np0005535469 nova_compute[254092]: 2025-11-25 17:06:23.603 254096 DEBUG nova.network.neutron [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:06:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.5 KiB/s wr, 95 op/s
Nov 25 12:06:24 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:24Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:3c:32 10.100.0.5
Nov 25 12:06:24 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:24Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:3c:32 10.100.0.5
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.854 254096 DEBUG nova.network.neutron [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.880 254096 INFO nova.compute.manager [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.904 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-unplugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.906 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.906 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.906 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-unplugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-unplugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.908 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.908 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.908 254096 WARNING nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.927 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:24 np0005535469 nova_compute[254092]: 2025-11-25 17:06:24.928 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.014 254096 DEBUG nova.network.neutron [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.015 254096 DEBUG nova.network.neutron [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.037 254096 DEBUG oslo_concurrency.lockutils [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.052 254096 DEBUG oslo_concurrency.processutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892230981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.532 254096 DEBUG oslo_concurrency.processutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.542 254096 DEBUG nova.compute.provider_tree [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.560 254096 DEBUG nova.scheduler.client.report [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.583 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.610 254096 INFO nova.scheduler.client.report [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9#033[00m
Nov 25 12:06:25 np0005535469 nova_compute[254092]: 2025-11-25 17:06:25.780 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 198 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Nov 25 12:06:26 np0005535469 nova_compute[254092]: 2025-11-25 17:06:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:27 np0005535469 nova_compute[254092]: 2025-11-25 17:06:27.023 254096 DEBUG nova.compute.manager [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-deleted-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:27 np0005535469 nova_compute[254092]: 2025-11-25 17:06:27.023 254096 INFO nova.compute.manager [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Neutron deleted interface a13b6cf4-602d-4af3-b369-9dfa273e1514; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:06:27 np0005535469 nova_compute[254092]: 2025-11-25 17:06:27.024 254096 DEBUG nova.network.neutron [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 25 12:06:27 np0005535469 nova_compute[254092]: 2025-11-25 17:06:27.026 254096 DEBUG nova.compute.manager [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Detach interface failed, port_id=a13b6cf4-602d-4af3-b369-9dfa273e1514, reason: Instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:06:27 np0005535469 nova_compute[254092]: 2025-11-25 17:06:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:28 np0005535469 nova_compute[254092]: 2025-11-25 17:06:28.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 198 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 25 12:06:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:29 np0005535469 nova_compute[254092]: 2025-11-25 17:06:29.203 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090374.202772, b747b045-786f-49a8-907c-cc222a07fa05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:29 np0005535469 nova_compute[254092]: 2025-11-25 17:06:29.204 254096 INFO nova.compute.manager [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:06:29 np0005535469 nova_compute[254092]: 2025-11-25 17:06:29.225 254096 DEBUG nova.compute.manager [None req-c4f46b7f-1820-42ab-9a90-c0f7dc5d9115 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:30Z|01285|binding|INFO|Releasing lport c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35 from this chassis (sb_readonly=0)
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 200 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.793 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.793 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.794 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.794 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.794 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.795 254096 INFO nova.compute.manager [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Terminating instance#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.796 254096 DEBUG nova.compute.manager [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:06:30 np0005535469 kernel: tap843d689b-6e (unregistering): left promiscuous mode
Nov 25 12:06:30 np0005535469 NetworkManager[48891]: <info>  [1764090390.8530] device (tap843d689b-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:06:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:30Z|01286|binding|INFO|Releasing lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 from this chassis (sb_readonly=0)
Nov 25 12:06:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:30Z|01287|binding|INFO|Setting lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 down in Southbound
Nov 25 12:06:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:30Z|01288|binding|INFO|Removing iface tap843d689b-6e ovn-installed in OVS
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.864 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:3c:32 10.100.0.5', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a4e18007-11e8-4531-9dc8-8cbc10fe2b75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=843d689b-6e0b-4ce9-9177-6c3cd41a19d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.866 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 unbound from our chassis#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.867 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf9312b9-f4d2-496f-a143-7586e12fbee3#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.888 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f4e39-8609-4744-b60b-7d7c168d0a63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3fca2da6-6c51-4796-983e-30d32585fa50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.918 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abfaab40-849f-4267-881b-5f77a0262826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:30 np0005535469 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 25 12:06:30 np0005535469 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Consumed 15.270s CPU time.
Nov 25 12:06:30 np0005535469 systemd-machined[216343]: Machine qemu-156-instance-0000007c terminated.
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.947 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c073617-60e4-4e2d-aa39-a5e91d4df36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.970 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65daf25b-a10f-4b5a-b093-ace3af10307e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390583, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2da183a1-24dc-4b47-8838-5db037cf80cd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679598, 'tstamp': 679598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390584, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679602, 'tstamp': 679602}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390584, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.991 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:30 np0005535469 nova_compute[254092]: 2025-11-25 17:06:30.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9312b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:31.000 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf9312b9-f0, col_values=(('external_ids', {'iface-id': 'c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:31.000 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:31 np0005535469 kernel: tap843d689b-6e: entered promiscuous mode
Nov 25 12:06:31 np0005535469 kernel: tap843d689b-6e (unregistering): left promiscuous mode
Nov 25 12:06:31 np0005535469 NetworkManager[48891]: <info>  [1764090391.0224] manager: (tap843d689b-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/531)
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.038 254096 INFO nova.virt.libvirt.driver [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance destroyed successfully.#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.039 254096 DEBUG nova.objects.instance [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid a4e18007-11e8-4531-9dc8-8cbc10fe2b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.059 254096 DEBUG nova.virt.libvirt.vif [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:06:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=124,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:06:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-nzwglgyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:06:10Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=a4e18007-11e8-4531-9dc8-8cbc10fe2b75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.060 254096 DEBUG nova.network.os_vif_util [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.061 254096 DEBUG nova.network.os_vif_util [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.061 254096 DEBUG os_vif [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.062 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843d689b-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.063 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.067 254096 INFO os_vif [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e')#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.085 254096 DEBUG nova.compute.manager [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-unplugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG oslo_concurrency.lockutils [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG oslo_concurrency.lockutils [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG oslo_concurrency.lockutils [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG nova.compute.manager [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] No waiting events found dispatching network-vif-unplugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.087 254096 DEBUG nova.compute.manager [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-unplugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.477 254096 INFO nova.virt.libvirt.driver [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deleting instance files /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_del#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.477 254096 INFO nova.virt.libvirt.driver [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deletion of /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_del complete#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.548 254096 INFO nova.compute.manager [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.549 254096 DEBUG oslo.service.loopingcall [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.549 254096 DEBUG nova.compute.manager [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:06:31 np0005535469 nova_compute[254092]: 2025-11-25 17:06:31.549 254096 DEBUG nova.network.neutron [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:06:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 200 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 12:06:32 np0005535469 nova_compute[254092]: 2025-11-25 17:06:32.894 254096 DEBUG nova.network.neutron [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:32 np0005535469 nova_compute[254092]: 2025-11-25 17:06:32.917 254096 INFO nova.compute.manager [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 25 12:06:32 np0005535469 nova_compute[254092]: 2025-11-25 17:06:32.967 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:32 np0005535469 nova_compute[254092]: 2025-11-25 17:06:32.968 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.054 254096 DEBUG oslo_concurrency.processutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.189 254096 DEBUG nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.189 254096 DEBUG oslo_concurrency.lockutils [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 DEBUG oslo_concurrency.lockutils [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 DEBUG oslo_concurrency.lockutils [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 DEBUG nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] No waiting events found dispatching network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 WARNING nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received unexpected event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.191 254096 DEBUG nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-deleted-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243127858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.503 254096 DEBUG oslo_concurrency.processutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.509 254096 DEBUG nova.compute.provider_tree [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.523 254096 DEBUG nova.scheduler.client.report [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.540 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.568 254096 INFO nova.scheduler.client.report [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance a4e18007-11e8-4531-9dc8-8cbc10fe2b75#033[00m
Nov 25 12:06:33 np0005535469 nova_compute[254092]: 2025-11-25 17:06:33.637 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 200 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.988 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.988 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.988 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.989 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.989 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.990 254096 INFO nova.compute.manager [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Terminating instance#033[00m
Nov 25 12:06:34 np0005535469 nova_compute[254092]: 2025-11-25 17:06:34.991 254096 DEBUG nova.compute.manager [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:06:35 np0005535469 kernel: tap332ae922-32 (unregistering): left promiscuous mode
Nov 25 12:06:35 np0005535469 NetworkManager[48891]: <info>  [1764090395.0457] device (tap332ae922-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:06:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:35Z|01289|binding|INFO|Releasing lport 332ae922-3280-48c2-8889-d1ab181a43db from this chassis (sb_readonly=0)
Nov 25 12:06:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:35Z|01290|binding|INFO|Setting lport 332ae922-3280-48c2-8889-d1ab181a43db down in Southbound
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:35Z|01291|binding|INFO|Removing iface tap332ae922-32 ovn-installed in OVS
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.062 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:d4:72 10.100.0.8'], port_security=['fa:16:3e:98:d4:72 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cbf5c589-9701-44c9-9600-739675853610', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2db36ef1-2db7-4ccb-b8a5-63a9a57f3dde 7a710be2-f756-4f31-8d8e-270c10735b5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=332ae922-3280-48c2-8889-d1ab181a43db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.063 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 332ae922-3280-48c2-8889-d1ab181a43db in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 unbound from our chassis#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.065 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf9312b9-f4d2-496f-a143-7586e12fbee3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.065 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec2d73f-77a2-4836-bf89-8369036ede84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.066 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 namespace which is not needed anymore#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.081 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 25 12:06:35 np0005535469 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 14.364s CPU time.
Nov 25 12:06:35 np0005535469 systemd-machined[216343]: Machine qemu-155-instance-0000007b terminated.
Nov 25 12:06:35 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : haproxy version is 2.8.14-c23fe91
Nov 25 12:06:35 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : path to executable is /usr/sbin/haproxy
Nov 25 12:06:35 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [WARNING]  (388929) : Exiting Master process...
Nov 25 12:06:35 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [ALERT]    (388929) : Current worker (388931) exited with code 143 (Terminated)
Nov 25 12:06:35 np0005535469 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [WARNING]  (388929) : All workers exited. Exiting... (0)
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.227 254096 INFO nova.virt.libvirt.driver [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance destroyed successfully.#033[00m
Nov 25 12:06:35 np0005535469 systemd[1]: libpod-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948.scope: Deactivated successfully.
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.228 254096 DEBUG nova.objects.instance [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid cbf5c589-9701-44c9-9600-739675853610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:35 np0005535469 podman[390656]: 2025-11-25 17:06:35.233500389 +0000 UTC m=+0.062927306 container died 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.239 254096 DEBUG nova.virt.libvirt.vif [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=123,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:05:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-jw45rb9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:05:38Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=cbf5c589-9701-44c9-9600-739675853610,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.239 254096 DEBUG nova.network.os_vif_util [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.242 254096 DEBUG nova.network.os_vif_util [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.243 254096 DEBUG os_vif [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.245 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap332ae922-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.249 254096 INFO os_vif [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32')#033[00m
Nov 25 12:06:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-46a68caf33163305d309df46a027dcb2a4dc5df4d4d0497aef68c09c791db936-merged.mount: Deactivated successfully.
Nov 25 12:06:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948-userdata-shm.mount: Deactivated successfully.
Nov 25 12:06:35 np0005535469 podman[390656]: 2025-11-25 17:06:35.275146432 +0000 UTC m=+0.104573359 container cleanup 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:06:35 np0005535469 systemd[1]: libpod-conmon-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948.scope: Deactivated successfully.
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.294 254096 DEBUG nova.compute.manager [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-changed-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG nova.compute.manager [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing instance network info cache due to event network-changed-332ae922-3280-48c2-8889-d1ab181a43db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG oslo_concurrency.lockutils [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG oslo_concurrency.lockutils [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG nova.network.neutron [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:06:35 np0005535469 podman[390715]: 2025-11-25 17:06:35.333603415 +0000 UTC m=+0.036801040 container remove 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.340 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f858db-bcf8-46bc-9ec9-c57a98192679]: (4, ('Tue Nov 25 05:06:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 (91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948)\n91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948\nTue Nov 25 05:06:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 (91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948)\n91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.341 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8882450c-6a3c-41c2-a2eb-cd921fdb98c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.342 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.395 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 kernel: tapbf9312b9-f0: left promiscuous mode
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ac8ef8-5d96-4ab4-8a35-78d4bf4a792a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.427 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72fe6aa0-35f4-4fd2-8731-e089d6105b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.427 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40e6e3a8-2cea-4414-8509-752bcfe8f50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a54c807c-0094-43ac-ad11-2691168ea244]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679576, 'reachable_time': 23661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390729, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 systemd[1]: run-netns-ovnmeta\x2dbf9312b9\x2df4d2\x2d496f\x2da143\x2d7586e12fbee3.mount: Deactivated successfully.
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.452 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:06:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.452 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5c0e2f-6354-424a-b6c5-e630b1d878cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.642 254096 INFO nova.virt.libvirt.driver [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Deleting instance files /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610_del#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.644 254096 INFO nova.virt.libvirt.driver [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Deletion of /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610_del complete#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.703 254096 INFO nova.compute.manager [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.704 254096 DEBUG oslo.service.loopingcall [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.704 254096 DEBUG nova.compute.manager [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:06:35 np0005535469 nova_compute[254092]: 2025-11-25 17:06:35.704 254096 DEBUG nova.network.neutron [-] [instance: cbf5c589-9701-44c9-9600-739675853610] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:06:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 79 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.840 254096 DEBUG nova.network.neutron [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.867 254096 INFO nova.compute.manager [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 1.16 seconds to deallocate network for instance.#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.918 254096 DEBUG nova.network.neutron [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updated VIF entry in instance network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.919 254096 DEBUG nova.network.neutron [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.923 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.923 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.939 254096 DEBUG oslo_concurrency.lockutils [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:36 np0005535469 nova_compute[254092]: 2025-11-25 17:06:36.979 254096 DEBUG oslo_concurrency.processutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.374 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-unplugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.375 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.375 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.376 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.376 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] No waiting events found dispatching network-vif-unplugged-332ae922-3280-48c2-8889-d1ab181a43db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.376 254096 WARNING nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received unexpected event network-vif-unplugged-332ae922-3280-48c2-8889-d1ab181a43db for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.377 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.377 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.377 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] No waiting events found dispatching network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 WARNING nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received unexpected event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-deleted-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.379 254096 INFO nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Neutron deleted interface 332ae922-3280-48c2-8889-d1ab181a43db; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.379 254096 DEBUG nova.network.neutron [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.396 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Detach interface failed, port_id=332ae922-3280-48c2-8889-d1ab181a43db, reason: Instance cbf5c589-9701-44c9-9600-739675853610 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:06:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/899762936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.429 254096 DEBUG oslo_concurrency.processutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.434 254096 DEBUG nova.compute.provider_tree [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.446 254096 DEBUG nova.scheduler.client.report [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.466 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.499 254096 INFO nova.scheduler.client.report [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance cbf5c589-9701-44c9-9600-739675853610#033[00m
Nov 25 12:06:37 np0005535469 nova_compute[254092]: 2025-11-25 17:06:37.576 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:38 np0005535469 nova_compute[254092]: 2025-11-25 17:06:38.121 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090383.1200678, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:38 np0005535469 nova_compute[254092]: 2025-11-25 17:06:38.122 254096 INFO nova.compute.manager [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:06:38 np0005535469 nova_compute[254092]: 2025-11-25 17:06:38.143 254096 DEBUG nova.compute.manager [None req-6cd9b905-e4f3-4a6b-b4a1-a1bca36d9130 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 79 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 33 KiB/s wr, 41 op/s
Nov 25 12:06:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279901587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:39 np0005535469 nova_compute[254092]: 2025-11-25 17:06:39.925 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.068 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.069 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3755MB free_disk=59.966068267822266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.069 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.070 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.121 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.121 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:06:40
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta']
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.162 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 41 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 33 KiB/s wr, 62 op/s
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590871679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.598 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.603 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.629 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.665 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:06:40 np0005535469 nova_compute[254092]: 2025-11-25 17:06:40.666 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.868099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090400868130, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 2061, "num_deletes": 251, "total_data_size": 3345100, "memory_usage": 3395120, "flush_reason": "Manual Compaction"}
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090400898031, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 3278837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49922, "largest_seqno": 51982, "table_properties": {"data_size": 3269523, "index_size": 5872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19061, "raw_average_key_size": 20, "raw_value_size": 3250956, "raw_average_value_size": 3440, "num_data_blocks": 260, "num_entries": 945, "num_filter_entries": 945, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090181, "oldest_key_time": 1764090181, "file_creation_time": 1764090400, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 29983 microseconds, and 7020 cpu microseconds.
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.898078) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 3278837 bytes OK
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.898097) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.902680) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.902699) EVENT_LOG_v1 {"time_micros": 1764090400902693, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.902716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 3336446, prev total WAL file size 3336446, number of live WAL files 2.
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.903465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(3201KB)], [113(8549KB)]
Nov 25 12:06:40 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090400903493, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 12033154, "oldest_snapshot_seqno": -1}
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7466 keys, 10311875 bytes, temperature: kUnknown
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090401007864, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10311875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10262109, "index_size": 30006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 193554, "raw_average_key_size": 25, "raw_value_size": 10128684, "raw_average_value_size": 1356, "num_data_blocks": 1177, "num_entries": 7466, "num_filter_entries": 7466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090400, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.008067) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10311875 bytes
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.010586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7980, records dropped: 514 output_compression: NoCompression
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.010623) EVENT_LOG_v1 {"time_micros": 1764090401010609, "job": 68, "event": "compaction_finished", "compaction_time_micros": 104437, "compaction_time_cpu_micros": 23871, "output_level": 6, "num_output_files": 1, "total_output_size": 10311875, "num_input_records": 7980, "num_output_records": 7466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090401011325, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090401012943, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.903397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:06:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:06:41 np0005535469 nova_compute[254092]: 2025-11-25 17:06:41.666 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:41 np0005535469 nova_compute[254092]: 2025-11-25 17:06:41.666 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:06:41 np0005535469 nova_compute[254092]: 2025-11-25 17:06:41.704 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:06:41 np0005535469 nova_compute[254092]: 2025-11-25 17:06:41.705 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:06:42 np0005535469 nova_compute[254092]: 2025-11-25 17:06:42.036 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:42 np0005535469 nova_compute[254092]: 2025-11-25 17:06:42.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Nov 25 12:06:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Nov 25 12:06:45 np0005535469 nova_compute[254092]: 2025-11-25 17:06:45.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:45 np0005535469 nova_compute[254092]: 2025-11-25 17:06:45.416 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.035 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090391.0342882, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.035 254096 INFO nova.compute.manager [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.054 254096 DEBUG nova.compute.manager [None req-d3d84c8b-b3c3-4117-8481-17b1e8d0ca9a - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.756 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.757 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.769 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.836 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.837 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.843 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.844 254096 INFO nova.compute.claims [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:06:46 np0005535469 nova_compute[254092]: 2025-11-25 17:06:46.955 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:06:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157608422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.420 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.429 254096 DEBUG nova.compute.provider_tree [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.445 254096 DEBUG nova.scheduler.client.report [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.465 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.466 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.511 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.511 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.529 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.549 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.635 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.636 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.637 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Creating image(s)
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.655 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.678 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.702 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.706 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.778 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.779 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.780 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.780 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.808 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.812 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c64717b5-8862-4f84-989e-9f21bdc37759_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:06:47 np0005535469 nova_compute[254092]: 2025-11-25 17:06:47.969 254096 DEBUG nova.policy [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.125 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c64717b5-8862-4f84-989e-9f21bdc37759_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.229 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:06:48 np0005535469 podman[390957]: 2025-11-25 17:06:48.242438061 +0000 UTC m=+0.065682613 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:06:48 np0005535469 podman[390941]: 2025-11-25 17:06:48.247553392 +0000 UTC m=+0.076432858 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 12:06:48 np0005535469 podman[390958]: 2025-11-25 17:06:48.270289885 +0000 UTC m=+0.093683701 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.329 254096 DEBUG nova.objects.instance [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid c64717b5-8862-4f84-989e-9f21bdc37759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.339 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.339 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Ensure instance console log exists: /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.340 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.340 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:06:48 np0005535469 nova_compute[254092]: 2025-11-25 17:06:48.340 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:06:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:06:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e9519a5a-22f9-420b-9120-db8677a1a2bf does not exist
Nov 25 12:06:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 601e55e3-6c0d-4323-8a21-75f3512931ea does not exist
Nov 25 12:06:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b3759223-48a0-4194-9dda-378bf4c9ef37 does not exist
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:06:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.471 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Successfully updated port: c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.489 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.490 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.490 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.606 254096 DEBUG nova.compute.manager [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.606 254096 DEBUG nova.compute.manager [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing instance network info cache due to event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.606 254096 DEBUG oslo_concurrency.lockutils [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.789536285 +0000 UTC m=+0.055374760 container create 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:06:49 np0005535469 systemd[1]: Started libpod-conmon-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope.
Nov 25 12:06:49 np0005535469 nova_compute[254092]: 2025-11-25 17:06:49.841 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.770336748 +0000 UTC m=+0.036175243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:06:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.907764057 +0000 UTC m=+0.173602552 container init 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 12:06:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:06:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:06:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.915615943 +0000 UTC m=+0.181454418 container start 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.919002746 +0000 UTC m=+0.184841221 container attach 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:06:49 np0005535469 practical_knuth[391342]: 167 167
Nov 25 12:06:49 np0005535469 systemd[1]: libpod-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope: Deactivated successfully.
Nov 25 12:06:49 np0005535469 conmon[391342]: conmon 4ca64966e3a73dba273f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope/container/memory.events
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.92717485 +0000 UTC m=+0.193013325 container died 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:06:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7ec95dff1dca1f233ffb1fb980ecde4451e90bf122af5635050a8453f0dd5706-merged.mount: Deactivated successfully.
Nov 25 12:06:49 np0005535469 podman[391326]: 2025-11-25 17:06:49.966418756 +0000 UTC m=+0.232257231 container remove 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:06:49 np0005535469 systemd[1]: libpod-conmon-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope: Deactivated successfully.
Nov 25 12:06:50 np0005535469 podman[391366]: 2025-11-25 17:06:50.175255515 +0000 UTC m=+0.066914567 container create afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:06:50 np0005535469 nova_compute[254092]: 2025-11-25 17:06:50.224 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090395.2232823, cbf5c589-9701-44c9-9600-739675853610 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:06:50 np0005535469 nova_compute[254092]: 2025-11-25 17:06:50.225 254096 INFO nova.compute.manager [-] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:06:50 np0005535469 systemd[1]: Started libpod-conmon-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope.
Nov 25 12:06:50 np0005535469 podman[391366]: 2025-11-25 17:06:50.146715552 +0000 UTC m=+0.038374624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:06:50 np0005535469 nova_compute[254092]: 2025-11-25 17:06:50.248 254096 DEBUG nova.compute.manager [None req-adead496-b7f4-4535-86fa-e8e7a5a6d386 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:06:50 np0005535469 nova_compute[254092]: 2025-11-25 17:06:50.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:50 np0005535469 podman[391366]: 2025-11-25 17:06:50.299967655 +0000 UTC m=+0.191626687 container init afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:06:50 np0005535469 podman[391366]: 2025-11-25 17:06:50.311180092 +0000 UTC m=+0.202839154 container start afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:06:50 np0005535469 podman[391366]: 2025-11-25 17:06:50.315735018 +0000 UTC m=+0.207394080 container attach afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:06:50 np0005535469 nova_compute[254092]: 2025-11-25 17:06:50.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 60 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 192 KiB/s wr, 22 op/s
Nov 25 12:06:51 np0005535469 bold_engelbart[391383]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:06:51 np0005535469 bold_engelbart[391383]: --> relative data size: 1.0
Nov 25 12:06:51 np0005535469 bold_engelbart[391383]: --> All data devices are unavailable
Nov 25 12:06:51 np0005535469 systemd[1]: libpod-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope: Deactivated successfully.
Nov 25 12:06:51 np0005535469 systemd[1]: libpod-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope: Consumed 1.121s CPU time.
Nov 25 12:06:51 np0005535469 podman[391366]: 2025-11-25 17:06:51.484800773 +0000 UTC m=+1.376459805 container died afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 12:06:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016-merged.mount: Deactivated successfully.
Nov 25 12:06:51 np0005535469 podman[391366]: 2025-11-25 17:06:51.550549586 +0000 UTC m=+1.442208618 container remove afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:06:51 np0005535469 systemd[1]: libpod-conmon-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope: Deactivated successfully.
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.6564656996809276e-05 of space, bias 1.0, pg target 0.010969397099042783 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:06:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.589 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance network_info: |[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG oslo_concurrency.lockutils [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG nova.network.neutron [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.616 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start _get_guest_xml network_info=[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.624 254096 WARNING nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.633 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.634 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.637 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.637 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.638 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.638 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.638 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.640 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.640 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.640 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:06:51 np0005535469 nova_compute[254092]: 2025-11-25 17:06:51.643 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:06:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966176760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.144 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.166 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.170 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.180789303 +0000 UTC m=+0.043298389 container create b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:06:52 np0005535469 systemd[1]: Started libpod-conmon-b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0.scope.
Nov 25 12:06:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.162782078 +0000 UTC m=+0.025291164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.262148674 +0000 UTC m=+0.124657830 container init b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.268857538 +0000 UTC m=+0.131366624 container start b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.272560009 +0000 UTC m=+0.135069185 container attach b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:06:52 np0005535469 flamboyant_volhard[391622]: 167 167
Nov 25 12:06:52 np0005535469 systemd[1]: libpod-b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0.scope: Deactivated successfully.
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.275005796 +0000 UTC m=+0.137514892 container died b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:06:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-70d9377299eb680a4080d6c1518e71ffcb13623abec1db2db7eb9fb9156e7d28-merged.mount: Deactivated successfully.
Nov 25 12:06:52 np0005535469 podman[391585]: 2025-11-25 17:06:52.323915558 +0000 UTC m=+0.186424674 container remove b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:06:52 np0005535469 systemd[1]: libpod-conmon-b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0.scope: Deactivated successfully.
Nov 25 12:06:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:06:52 np0005535469 podman[391663]: 2025-11-25 17:06:52.521679762 +0000 UTC m=+0.047904005 container create ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:06:52 np0005535469 systemd[1]: Started libpod-conmon-ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913.scope.
Nov 25 12:06:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:06:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074771413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:06:52 np0005535469 podman[391663]: 2025-11-25 17:06:52.495580807 +0000 UTC m=+0.021805080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.589 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.591 254096 DEBUG nova.virt.libvirt.vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191068635',display_name='tempest-TestNetworkBasicOps-server-191068635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191068635',id=125,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJpHrgKSNh4nxd1EAQPL6cFToRs8NuFCJRe5V6wi+HlNMXfO3CA8jeqzTpxAut873d8a0itMoBeEb+RuoOJ/ichSywJk9w6n7vQ8jIINL58mZvzURBI/PRqCBb7SIebsPQ==',key_name='tempest-TestNetworkBasicOps-186218804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-xthfdnfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:47Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=c64717b5-8862-4f84-989e-9f21bdc37759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.592 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.593 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.594 254096 DEBUG nova.objects.instance [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid c64717b5-8862-4f84-989e-9f21bdc37759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.610 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <uuid>c64717b5-8862-4f84-989e-9f21bdc37759</uuid>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <name>instance-0000007d</name>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-191068635</nova:name>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:06:51</nova:creationTime>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <nova:port uuid="c7eaeb08-d94a-4ecb-a87f-459a8d848a74">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <entry name="serial">c64717b5-8862-4f84-989e-9f21bdc37759</entry>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <entry name="uuid">c64717b5-8862-4f84-989e-9f21bdc37759</entry>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c64717b5-8862-4f84-989e-9f21bdc37759_disk">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c64717b5-8862-4f84-989e-9f21bdc37759_disk.config">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:51:b5:45"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <target dev="tapc7eaeb08-d9"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/console.log" append="off"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:06:52 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:06:52 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:06:52 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:06:52 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Preparing to wait for external event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.613 254096 DEBUG nova.virt.libvirt.vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191068635',display_name='tempest-TestNetworkBasicOps-server-191068635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191068635',id=125,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJpHrgKSNh4nxd1EAQPL6cFToRs8NuFCJRe5V6wi+HlNMXfO3CA8jeqzTpxAut873d8a0itMoBeEb+RuoOJ/ichSywJk9w6n7vQ8jIINL58mZvzURBI/PRqCBb7SIebsPQ==',key_name='tempest-TestNetworkBasicOps-186218804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-xthfdnfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:47Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=c64717b5-8862-4f84-989e-9f21bdc37759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.613 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.614 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.614 254096 DEBUG os_vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.620 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:52 np0005535469 podman[391663]: 2025-11-25 17:06:52.620488023 +0000 UTC m=+0.146712276 container init ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.621 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.627 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7eaeb08-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.628 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7eaeb08-d9, col_values=(('external_ids', {'iface-id': 'c7eaeb08-d94a-4ecb-a87f-459a8d848a74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:b5:45', 'vm-uuid': 'c64717b5-8862-4f84-989e-9f21bdc37759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.629 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:52 np0005535469 podman[391663]: 2025-11-25 17:06:52.631205316 +0000 UTC m=+0.157429549 container start ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:06:52 np0005535469 NetworkManager[48891]: <info>  [1764090412.6316] manager: (tapc7eaeb08-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.633 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:06:52 np0005535469 podman[391663]: 2025-11-25 17:06:52.636541222 +0000 UTC m=+0.162765465 container attach ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.640 254096 INFO os_vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.684 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.685 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.685 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:51:b5:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.686 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Using config drive#033[00m
Nov 25 12:06:52 np0005535469 nova_compute[254092]: 2025-11-25 17:06:52.714 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]: {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:    "0": [
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:        {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "devices": [
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "/dev/loop3"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            ],
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_name": "ceph_lv0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_size": "21470642176",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "name": "ceph_lv0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "tags": {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cluster_name": "ceph",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.crush_device_class": "",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.encrypted": "0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osd_id": "0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.type": "block",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.vdo": "0"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            },
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "type": "block",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "vg_name": "ceph_vg0"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:        }
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:    ],
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:    "1": [
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:        {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "devices": [
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "/dev/loop4"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            ],
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_name": "ceph_lv1",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_size": "21470642176",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "name": "ceph_lv1",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "tags": {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cluster_name": "ceph",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.crush_device_class": "",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.encrypted": "0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osd_id": "1",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.type": "block",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.vdo": "0"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            },
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "type": "block",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "vg_name": "ceph_vg1"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:        }
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:    ],
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:    "2": [
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:        {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "devices": [
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "/dev/loop5"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            ],
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_name": "ceph_lv2",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_size": "21470642176",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "name": "ceph_lv2",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "tags": {
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.cluster_name": "ceph",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.crush_device_class": "",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.encrypted": "0",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osd_id": "2",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.type": "block",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:                "ceph.vdo": "0"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            },
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "type": "block",
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:            "vg_name": "ceph_vg2"
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:        }
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]:    ]
Nov 25 12:06:53 np0005535469 nifty_mirzakhani[391680]: }
Nov 25 12:06:53 np0005535469 systemd[1]: libpod-ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913.scope: Deactivated successfully.
Nov 25 12:06:53 np0005535469 podman[391663]: 2025-11-25 17:06:53.373349782 +0000 UTC m=+0.899574025 container died ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:06:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500-merged.mount: Deactivated successfully.
Nov 25 12:06:53 np0005535469 podman[391663]: 2025-11-25 17:06:53.439588139 +0000 UTC m=+0.965812382 container remove ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:06:53 np0005535469 systemd[1]: libpod-conmon-ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913.scope: Deactivated successfully.
Nov 25 12:06:53 np0005535469 nova_compute[254092]: 2025-11-25 17:06:53.742 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Creating config drive at /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config#033[00m
Nov 25 12:06:53 np0005535469 nova_compute[254092]: 2025-11-25 17:06:53.748 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3hto8v6y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:53 np0005535469 nova_compute[254092]: 2025-11-25 17:06:53.899 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3hto8v6y" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:53 np0005535469 nova_compute[254092]: 2025-11-25 17:06:53.938 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:53 np0005535469 nova_compute[254092]: 2025-11-25 17:06:53.945 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config c64717b5-8862-4f84-989e-9f21bdc37759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.147 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config c64717b5-8862-4f84-989e-9f21bdc37759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.148 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deleting local config drive /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config because it was imported into RBD.#033[00m
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.220837987 +0000 UTC m=+0.060487690 container create a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:06:54 np0005535469 kernel: tapc7eaeb08-d9: entered promiscuous mode
Nov 25 12:06:54 np0005535469 NetworkManager[48891]: <info>  [1764090414.2318] manager: (tapc7eaeb08-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Nov 25 12:06:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:54Z|01292|binding|INFO|Claiming lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for this chassis.
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:54Z|01293|binding|INFO|c7eaeb08-d94a-4ecb-a87f-459a8d848a74: Claiming fa:16:3e:51:b5:45 10.100.0.7
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.274 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:54 np0005535469 systemd[1]: Started libpod-conmon-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope.
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.283 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c64717b5-8862-4f84-989e-9f21bdc37759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.284 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 bound to our chassis#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.286 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe8ef6db-e551-4904-a3ea-4af9320e49b5#033[00m
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.194141295 +0000 UTC m=+0.033791008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:06:54 np0005535469 systemd-udevd[391937]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6415e9-8ad2-43d1-ad26-19c690c70a34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.308 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe8ef6db-e1 in ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.310 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe8ef6db-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.311 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c61573b-bfbb-4879-84f2-1e98434d063e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.312 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[124f599c-43d2-4f98-b194-74eaf76ae29b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 NetworkManager[48891]: <info>  [1764090414.3205] device (tapc7eaeb08-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:06:54 np0005535469 NetworkManager[48891]: <info>  [1764090414.3220] device (tapc7eaeb08-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:06:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.333 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[e231c046-7dbc-49ac-8a85-9c60b59b2e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 systemd-machined[216343]: New machine qemu-157-instance-0000007d.
Nov 25 12:06:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:54Z|01294|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 ovn-installed in OVS
Nov 25 12:06:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:54Z|01295|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 up in Southbound
Nov 25 12:06:54 np0005535469 systemd[1]: Started Virtual Machine qemu-157-instance-0000007d.
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.363959343 +0000 UTC m=+0.203609126 container init a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.377406132 +0000 UTC m=+0.217055835 container start a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.380691242 +0000 UTC m=+0.220340935 container attach a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c27a760d-c286-4929-99d2-481f61c510a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 systemd[1]: libpod-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope: Deactivated successfully.
Nov 25 12:06:54 np0005535469 distracted_hoover[391931]: 167 167
Nov 25 12:06:54 np0005535469 conmon[391931]: conmon a64378e0742d54511a80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope/container/memory.events
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.386801349 +0000 UTC m=+0.226451052 container died a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:06:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f63c608166666fa43e4ff66dbee3dbf0bb9c33f59feb1f9a11227abb862ccb83-merged.mount: Deactivated successfully.
Nov 25 12:06:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:06:54 np0005535469 podman[391905]: 2025-11-25 17:06:54.433452359 +0000 UTC m=+0.273102092 container remove a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.433 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5c6d98-ab15-4140-83f6-1e5a4523cd89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1944594-6ad3-48c0-94fe-a880205c5575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 NetworkManager[48891]: <info>  [1764090414.4421] manager: (tapfe8ef6db-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Nov 25 12:06:54 np0005535469 systemd[1]: libpod-conmon-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope: Deactivated successfully.
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.488 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7953de-f19b-424f-80bb-f9b878bbb459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.492 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[039dbfc4-2ec2-452e-ab69-028cc7df7e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 NetworkManager[48891]: <info>  [1764090414.5297] device (tapfe8ef6db-e0): carrier: link connected
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.537 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[772bec90-5fff-46b1-b3ed-f0f3fb2ac84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.551 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[97deb257-6616-4d4e-a476-2cbc5e637ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687212, 'reachable_time': 27072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391985, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.568 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[271654f5-5c7c-4ceb-9f5d-37347ff9ddbc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:36f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687212, 'tstamp': 687212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391987, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de2a027f-125f-4962-993a-a93547b9e504]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687212, 'reachable_time': 27072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391994, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.619 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2753ad3f-5f6b-46bf-ae91-dd1ae7d5f540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 podman[391992]: 2025-11-25 17:06:54.631890882 +0000 UTC m=+0.050554878 container create a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.681 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[168ddc46-313d-4aca-bd4f-4e5d0da7ac0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 systemd[1]: Started libpod-conmon-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope.
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.682 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe8ef6db-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:54 np0005535469 NetworkManager[48891]: <info>  [1764090414.6861] manager: (tapfe8ef6db-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Nov 25 12:06:54 np0005535469 kernel: tapfe8ef6db-e0: entered promiscuous mode
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.687 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe8ef6db-e0, col_values=(('external_ids', {'iface-id': 'abb62292-a5ed-40d9-8b98-1dcedecc4b03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:06:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:54Z|01296|binding|INFO|Releasing lport abb62292-a5ed-40d9-8b98-1dcedecc4b03 from this chassis (sb_readonly=0)
Nov 25 12:06:54 np0005535469 podman[391992]: 2025-11-25 17:06:54.603820812 +0000 UTC m=+0.022484838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.705 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.705 254096 DEBUG nova.compute.manager [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.705 254096 DEBUG oslo_concurrency.lockutils [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.705 254096 DEBUG oslo_concurrency.lockutils [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.706 254096 DEBUG oslo_concurrency.lockutils [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.706 254096 DEBUG nova.compute.manager [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Processing event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8103f029-bbfc-48b6-a032-8e43bd613080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.708 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 12:06:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.710 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'env', 'PROCESS_TAG=haproxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe8ef6db-e551-4904-a3ea-4af9320e49b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 12:06:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:54 np0005535469 podman[391992]: 2025-11-25 17:06:54.747203474 +0000 UTC m=+0.165867490 container init a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.753 254096 DEBUG nova.network.neutron [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updated VIF entry in instance network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.754 254096 DEBUG nova.network.neutron [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:06:54 np0005535469 podman[391992]: 2025-11-25 17:06:54.757392683 +0000 UTC m=+0.176056679 container start a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 12:06:54 np0005535469 podman[391992]: 2025-11-25 17:06:54.760313093 +0000 UTC m=+0.178977089 container attach a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:06:54 np0005535469 nova_compute[254092]: 2025-11-25 17:06:54.766 254096 DEBUG oslo_concurrency.lockutils [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:06:55 np0005535469 podman[392081]: 2025-11-25 17:06:55.059004656 +0000 UTC m=+0.042667681 container create a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 12:06:55 np0005535469 systemd[1]: Started libpod-conmon-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2.scope.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.099 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 12:06:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.101 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090415.0986474, c64717b5-8862-4f84-989e-9f21bdc37759 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.102 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Started (Lifecycle Event)
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.104 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 12:06:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdb7dcc8a3df1de0e20ad88d7e2ca7a9b516d5bc7839b4b7f23d1b1edc5d54e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.108 254096 INFO nova.virt.libvirt.driver [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance spawned successfully.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.108 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 12:06:55 np0005535469 podman[392081]: 2025-11-25 17:06:55.122396225 +0000 UTC m=+0.106059260 container init a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.125 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:06:55 np0005535469 podman[392081]: 2025-11-25 17:06:55.127922137 +0000 UTC m=+0.111585162 container start a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:06:55 np0005535469 podman[392081]: 2025-11-25 17:06:55.036564321 +0000 UTC m=+0.020227366 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.136 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.141 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.141 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.142 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.142 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.143 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.143 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:06:55 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : New worker (392109) forked
Nov 25 12:06:55 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : Loading success.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.173 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.173 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090415.1004786, c64717b5-8862-4f84-989e-9f21bdc37759 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.174 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Paused (Lifecycle Event)
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.199 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090415.1038833, c64717b5-8862-4f84-989e-9f21bdc37759 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Resumed (Lifecycle Event)
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.211 254096 INFO nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 7.58 seconds to spawn the instance on the hypervisor.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.211 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.235 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.265 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.278 254096 INFO nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 8.47 seconds to build instance.
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.299 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945348001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945348001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:06:55 np0005535469 elastic_edison[392016]: {
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "osd_id": 1,
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "type": "bluestore"
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:    },
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "osd_id": 2,
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "type": "bluestore"
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:    },
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "osd_id": 0,
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:        "type": "bluestore"
Nov 25 12:06:55 np0005535469 elastic_edison[392016]:    }
Nov 25 12:06:55 np0005535469 elastic_edison[392016]: }
Nov 25 12:06:55 np0005535469 systemd[1]: libpod-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope: Deactivated successfully.
Nov 25 12:06:55 np0005535469 systemd[1]: libpod-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope: Consumed 1.003s CPU time.
Nov 25 12:06:55 np0005535469 conmon[392016]: conmon a49d419cd9b3ef31f633 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope/container/memory.events
Nov 25 12:06:55 np0005535469 podman[391992]: 2025-11-25 17:06:55.782024337 +0000 UTC m=+1.200688323 container died a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:06:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b-merged.mount: Deactivated successfully.
Nov 25 12:06:55 np0005535469 podman[391992]: 2025-11-25 17:06:55.844720498 +0000 UTC m=+1.263384494 container remove a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 12:06:55 np0005535469 systemd[1]: libpod-conmon-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope: Deactivated successfully.
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:06:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 863fda66-fd77-4b93-9507-3e0a06e113c3 does not exist
Nov 25 12:06:55 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 00c2b88d-ea1c-44e5-883e-a7e6a964d255 does not exist
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.944 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.945 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:06:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:06:55 np0005535469 nova_compute[254092]: 2025-11-25 17:06:55.963 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.033 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.034 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.041 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.041 254096 INFO nova.compute.claims [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.163 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:06:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 12:06:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:06:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/345240936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.633 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.640 254096 DEBUG nova.compute.provider_tree [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.661 254096 DEBUG nova.scheduler.client.report [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.681 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.682 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.720 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.721 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.772 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.788 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.851 254096 DEBUG nova.compute.manager [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.851 254096 DEBUG oslo_concurrency.lockutils [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.852 254096 DEBUG oslo_concurrency.lockutils [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.852 254096 DEBUG oslo_concurrency.lockutils [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.853 254096 DEBUG nova.compute.manager [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.853 254096 WARNING nova.compute.manager [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.861 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.862 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.863 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Creating image(s)#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.891 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.919 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.948 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:56 np0005535469 nova_compute[254092]: 2025-11-25 17:06:56.955 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.050 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.051 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.052 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.053 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.077 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.081 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.230 254096 DEBUG nova.policy [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.386 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.466 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.552 254096 DEBUG nova.objects.instance [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.565 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.566 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Ensure instance console log exists: /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.566 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.567 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.567 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:06:57 np0005535469 nova_compute[254092]: 2025-11-25 17:06:57.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 12:06:58 np0005535469 nova_compute[254092]: 2025-11-25 17:06:58.770 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Successfully created port: 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:06:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:06:59 np0005535469 nova_compute[254092]: 2025-11-25 17:06:59.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:59 np0005535469 NetworkManager[48891]: <info>  [1764090419.8721] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Nov 25 12:06:59 np0005535469 NetworkManager[48891]: <info>  [1764090419.8744] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Nov 25 12:06:59 np0005535469 nova_compute[254092]: 2025-11-25 17:06:59.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:06:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:06:59Z|01297|binding|INFO|Releasing lport abb62292-a5ed-40d9-8b98-1dcedecc4b03 from this chassis (sb_readonly=0)
Nov 25 12:06:59 np0005535469 nova_compute[254092]: 2025-11-25 17:06:59.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.375 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Successfully updated port: 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.403 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.404 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.405 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 116 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 2.8 MiB/s wr, 72 op/s
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.463 254096 DEBUG nova.compute.manager [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.464 254096 DEBUG nova.compute.manager [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing instance network info cache due to event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.465 254096 DEBUG oslo_concurrency.lockutils [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.465 254096 DEBUG oslo_concurrency.lockutils [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.466 254096 DEBUG nova.network.neutron [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.641 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.722 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.722 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.723 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.724 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.724 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.726 254096 INFO nova.compute.manager [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Terminating instance#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.728 254096 DEBUG nova.compute.manager [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:07:00 np0005535469 kernel: tapc7eaeb08-d9 (unregistering): left promiscuous mode
Nov 25 12:07:00 np0005535469 NetworkManager[48891]: <info>  [1764090420.7741] device (tapc7eaeb08-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:07:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:00Z|01298|binding|INFO|Releasing lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 from this chassis (sb_readonly=0)
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:00Z|01299|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 down in Southbound
Nov 25 12:07:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:00Z|01300|binding|INFO|Removing iface tapc7eaeb08-d9 ovn-installed in OVS
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.793 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c64717b5-8862-4f84-989e-9f21bdc37759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.794 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 unbound from our chassis#033[00m
Nov 25 12:07:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.795 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe8ef6db-e551-4904-a3ea-4af9320e49b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:07:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.796 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c47ab58-a5cf-46b1-a2d0-14cae4950699]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.797 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace which is not needed anymore#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:00 np0005535469 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 25 12:07:00 np0005535469 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Consumed 6.562s CPU time.
Nov 25 12:07:00 np0005535469 systemd-machined[216343]: Machine qemu-157-instance-0000007d terminated.
Nov 25 12:07:00 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : haproxy version is 2.8.14-c23fe91
Nov 25 12:07:00 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : path to executable is /usr/sbin/haproxy
Nov 25 12:07:00 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [WARNING]  (392107) : Exiting Master process...
Nov 25 12:07:00 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [ALERT]    (392107) : Current worker (392109) exited with code 143 (Terminated)
Nov 25 12:07:00 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [WARNING]  (392107) : All workers exited. Exiting... (0)
Nov 25 12:07:00 np0005535469 systemd[1]: libpod-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2.scope: Deactivated successfully.
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.976 254096 INFO nova.virt.libvirt.driver [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance destroyed successfully.#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.977 254096 DEBUG nova.objects.instance [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid c64717b5-8862-4f84-989e-9f21bdc37759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:00 np0005535469 podman[392420]: 2025-11-25 17:07:00.981622892 +0000 UTC m=+0.063945195 container died a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.994 254096 DEBUG nova.virt.libvirt.vif [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191068635',display_name='tempest-TestNetworkBasicOps-server-191068635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191068635',id=125,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJpHrgKSNh4nxd1EAQPL6cFToRs8NuFCJRe5V6wi+HlNMXfO3CA8jeqzTpxAut873d8a0itMoBeEb+RuoOJ/ichSywJk9w6n7vQ8jIINL58mZvzURBI/PRqCBb7SIebsPQ==',key_name='tempest-TestNetworkBasicOps-186218804',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:06:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-xthfdnfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:06:55Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=c64717b5-8862-4f84-989e-9f21bdc37759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.995 254096 DEBUG nova.network.os_vif_util [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.996 254096 DEBUG nova.network.os_vif_util [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:00 np0005535469 nova_compute[254092]: 2025-11-25 17:07:00.997 254096 DEBUG os_vif [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.001 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7eaeb08-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.010 254096 INFO os_vif [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')#033[00m
Nov 25 12:07:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2-userdata-shm.mount: Deactivated successfully.
Nov 25 12:07:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ffdb7dcc8a3df1de0e20ad88d7e2ca7a9b516d5bc7839b4b7f23d1b1edc5d54e-merged.mount: Deactivated successfully.
Nov 25 12:07:01 np0005535469 podman[392420]: 2025-11-25 17:07:01.031274364 +0000 UTC m=+0.113596637 container cleanup a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 12:07:01 np0005535469 systemd[1]: libpod-conmon-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2.scope: Deactivated successfully.
Nov 25 12:07:01 np0005535469 podman[392474]: 2025-11-25 17:07:01.087151407 +0000 UTC m=+0.035712511 container remove a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.092 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0dafd8-549f-47c9-8627-79a66518b534]: (4, ('Tue Nov 25 05:07:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2)\na1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2\nTue Nov 25 05:07:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2)\na1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.093 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b05f7947-564d-4401-afa5-93f4d6148793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:01 np0005535469 kernel: tapfe8ef6db-e0: left promiscuous mode
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cfe9a6-d742-4634-b1c5-39999df69c48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf0808e-febe-4e78-9a06-d2b1916cbcaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15453177-dafa-44cf-ae70-021198cbf609]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.163 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4cee5f85-2872-4846-98f4-e9f4a5f33edb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687201, 'reachable_time': 19992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392492, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.166 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:07:01 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.166 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[91e11f9b-f90a-4c99-b536-de7d9520927b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:01 np0005535469 systemd[1]: run-netns-ovnmeta\x2dfe8ef6db\x2de551\x2d4904\x2da3ea\x2d4af9320e49b5.mount: Deactivated successfully.
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.403 254096 INFO nova.virt.libvirt.driver [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deleting instance files /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759_del#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.404 254096 INFO nova.virt.libvirt.driver [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deletion of /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759_del complete#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.476 254096 INFO nova.compute.manager [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.477 254096 DEBUG oslo.service.loopingcall [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.478 254096 DEBUG nova.compute.manager [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:07:01 np0005535469 nova_compute[254092]: 2025-11-25 17:07:01.478 254096 DEBUG nova.network.neutron [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.088 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.115 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.115 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance network_info: |[{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.120 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start _get_guest_xml network_info=[{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.126 254096 WARNING nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.134 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.135 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.140 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.141 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.141 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.141 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.142 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.142 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.148 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.238 254096 DEBUG nova.network.neutron [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updated VIF entry in instance network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.240 254096 DEBUG nova.network.neutron [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.338 254096 DEBUG oslo_concurrency.lockutils [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:07:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 134 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 126 op/s
Nov 25 12:07:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794554221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.621 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.646 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.650 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.972 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.973 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing instance network info cache due to event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.973 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.973 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:07:02 np0005535469 nova_compute[254092]: 2025-11-25 17:07:02.974 254096 DEBUG nova.network.neutron [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:07:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470768128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.141 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.144 254096 DEBUG nova.virt.libvirt.vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=126,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-uvh0a05a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=bd1d0296-ae28-4eac-9f38-80e6ca17dbff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.144 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.146 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.148 254096 DEBUG nova.objects.instance [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.164 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <uuid>bd1d0296-ae28-4eac-9f38-80e6ca17dbff</uuid>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <name>instance-0000007e</name>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353</nova:name>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:07:02</nova:creationTime>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <nova:port uuid="54e9d2ac-a4ca-41fe-9c2e-76eba828c99c">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <entry name="serial">bd1d0296-ae28-4eac-9f38-80e6ca17dbff</entry>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <entry name="uuid">bd1d0296-ae28-4eac-9f38-80e6ca17dbff</entry>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:09:40:a3"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <target dev="tap54e9d2ac-a4"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/console.log" append="off"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:07:03 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:07:03 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:07:03 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:07:03 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.166 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Preparing to wait for external event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.166 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.167 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.167 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.168 254096 DEBUG nova.virt.libvirt.vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=126,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-uvh0a05a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=bd1d0296-ae28-4eac-9f38-80e6ca17dbff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.168 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.169 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.169 254096 DEBUG os_vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.171 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.175 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54e9d2ac-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.175 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54e9d2ac-a4, col_values=(('external_ids', {'iface-id': '54e9d2ac-a4ca-41fe-9c2e-76eba828c99c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:40:a3', 'vm-uuid': 'bd1d0296-ae28-4eac-9f38-80e6ca17dbff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:03 np0005535469 NetworkManager[48891]: <info>  [1764090423.1786] manager: (tap54e9d2ac-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.190 254096 INFO os_vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4')#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.207 254096 DEBUG nova.network.neutron [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.229 254096 INFO nova.compute.manager [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 1.75 seconds to deallocate network for instance.#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.250 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.250 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.251 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:09:40:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.251 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Using config drive#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.276 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.283 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.284 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.367 254096 DEBUG oslo_concurrency.processutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927573114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.792 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Creating config drive at /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.799 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpltzamna4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.841 254096 DEBUG oslo_concurrency.processutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.850 254096 DEBUG nova.compute.provider_tree [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.863 254096 DEBUG nova.scheduler.client.report [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:07:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.895 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.920 254096 INFO nova.scheduler.client.report [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance c64717b5-8862-4f84-989e-9f21bdc37759#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.940 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpltzamna4" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.969 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:03 np0005535469 nova_compute[254092]: 2025-11-25 17:07:03.973 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.021 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.192 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.193 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deleting local config drive /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config because it was imported into RBD.#033[00m
Nov 25 12:07:04 np0005535469 kernel: tap54e9d2ac-a4: entered promiscuous mode
Nov 25 12:07:04 np0005535469 NetworkManager[48891]: <info>  [1764090424.2465] manager: (tap54e9d2ac-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:04 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:04Z|01301|binding|INFO|Claiming lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for this chassis.
Nov 25 12:07:04 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:04Z|01302|binding|INFO|54e9d2ac-a4ca-41fe-9c2e-76eba828c99c: Claiming fa:16:3e:09:40:a3 10.100.0.6
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.261 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:40:a3 10.100.0.6'], port_security=['fa:16:3e:09:40:a3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd1d0296-ae28-4eac-9f38-80e6ca17dbff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b953fef-d250-41a4-af84-97bf9c7f4822 779da3fb-8b66-4cf7-a59e-bc7311564ace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.262 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c in datapath ef2caff8-43ec-4364-a979-521405023410 bound to our chassis#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.263 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2caff8-43ec-4364-a979-521405023410#033[00m
Nov 25 12:07:04 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:04Z|01303|binding|INFO|Setting lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c ovn-installed in OVS
Nov 25 12:07:04 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:04Z|01304|binding|INFO|Setting lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c up in Southbound
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:04 np0005535469 systemd-udevd[392652]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:07:04 np0005535469 systemd-machined[216343]: New machine qemu-158-instance-0000007e.
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.277 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b809437b-014e-4cef-9444-f94cc59b6255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.279 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef2caff8-41 in ovnmeta-ef2caff8-43ec-4364-a979-521405023410 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.281 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef2caff8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce652ad-a77f-4ebd-95cc-77a6d4897276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.282 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b10ca708-efcc-4690-9359-5e7174489fb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 NetworkManager[48891]: <info>  [1764090424.2841] device (tap54e9d2ac-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:07:04 np0005535469 systemd[1]: Started Virtual Machine qemu-158-instance-0000007e.
Nov 25 12:07:04 np0005535469 NetworkManager[48891]: <info>  [1764090424.2859] device (tap54e9d2ac-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.295 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[88c818dc-4940-4eb5-87da-d5d5104931eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.320 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[171d1290-58da-44de-8bbf-20f2efbc31ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 134 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.458 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ee84d615-acd5-4aad-824f-44d47d01bc86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 NetworkManager[48891]: <info>  [1764090424.4644] manager: (tapef2caff8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.463 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29a6378a-44ac-46e1-b9e3-3147be62ab60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 systemd-udevd[392655]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.503 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ad8275-2312-4892-86c3-a2e471de9276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.506 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfeb635-e5d4-4a85-875c-fffe05c80d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 NetworkManager[48891]: <info>  [1764090424.5302] device (tapef2caff8-40): carrier: link connected
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8736f9-76e3-4ea4-be00-50d4f69cb508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.552 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45d84a-3576-4b0f-8b98-4733ef349f4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392685, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.570 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f21b257a-b243-4384-864e-ae778b34a933]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:52b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688212, 'tstamp': 688212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392686, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f981d7-6fd9-4553-a664-1c2a06e0d113]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392687, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e04cf98f-72b1-4e6d-84ec-944d0b072fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c81acd9b-2361-462c-acc5-2a4f2b899aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.699 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2caff8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:04 np0005535469 NetworkManager[48891]: <info>  [1764090424.7020] manager: (tapef2caff8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Nov 25 12:07:04 np0005535469 kernel: tapef2caff8-40: entered promiscuous mode
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.704 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2caff8-40, col_values=(('external_ids', {'iface-id': '50aec7ed-4f15-4d72-87dd-48c327de28ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:04 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:04Z|01305|binding|INFO|Releasing lport 50aec7ed-4f15-4d72-87dd-48c327de28ce from this chassis (sb_readonly=0)
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.719 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef2caff8-43ec-4364-a979-521405023410.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef2caff8-43ec-4364-a979-521405023410.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.720 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[073465eb-78a4-4a38-ab73-641c326b1bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.721 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-ef2caff8-43ec-4364-a979-521405023410
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/ef2caff8-43ec-4364-a979-521405023410.pid.haproxy
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID ef2caff8-43ec-4364-a979-521405023410
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:07:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.724 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'env', 'PROCESS_TAG=haproxy-ef2caff8-43ec-4364-a979-521405023410', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef2caff8-43ec-4364-a979-521405023410.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.904 254096 DEBUG nova.compute.manager [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.905 254096 DEBUG oslo_concurrency.lockutils [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.905 254096 DEBUG oslo_concurrency.lockutils [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.906 254096 DEBUG oslo_concurrency.lockutils [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.906 254096 DEBUG nova.compute.manager [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Processing event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.956 254096 DEBUG nova.network.neutron [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated VIF entry in instance network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.957 254096 DEBUG nova.network.neutron [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.976 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] No waiting events found dispatching network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:07:04 np0005535469 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 WARNING nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:07:05 np0005535469 podman[392719]: 2025-11-25 17:07:05.103453377 +0000 UTC m=+0.075707948 container create a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 12:07:05 np0005535469 systemd[1]: Started libpod-conmon-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0.scope.
Nov 25 12:07:05 np0005535469 podman[392719]: 2025-11-25 17:07:05.053939988 +0000 UTC m=+0.026194589 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:07:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:07:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69c52ab9da1464e9256b97d314ba04db47e1086bb356a0791f8cd44f4454be6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:05 np0005535469 podman[392719]: 2025-11-25 17:07:05.19655777 +0000 UTC m=+0.168812401 container init a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.203 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:07:05 np0005535469 podman[392719]: 2025-11-25 17:07:05.203041848 +0000 UTC m=+0.175296439 container start a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090425.2027023, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Started (Lifecycle Event)#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.209 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.214 254096 INFO nova.virt.libvirt.driver [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance spawned successfully.#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.214 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.227 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.236 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.237 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : New worker (392782) forked
Nov 25 12:07:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : Loading success.
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.238 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.238 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.239 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.239 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.247 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.247 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090425.202915, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.248 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Paused (Lifecycle Event)
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.266 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.271 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090425.2095118, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.271 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Resumed (Lifecycle Event)
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.294 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.298 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.310 254096 INFO nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 8.45 seconds to spawn the instance on the hypervisor.
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.310 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.318 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.367 254096 INFO nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 9.36 seconds to build instance.
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.385 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:05 np0005535469 nova_compute[254092]: 2025-11-25 17:07:05.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Nov 25 12:07:07 np0005535469 nova_compute[254092]: 2025-11-25 17:07:07.001 254096 DEBUG nova.compute.manager [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:07 np0005535469 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG oslo_concurrency.lockutils [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:07 np0005535469 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG oslo_concurrency.lockutils [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:07 np0005535469 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG oslo_concurrency.lockutils [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:07 np0005535469 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG nova.compute.manager [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] No waiting events found dispatching network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:07:07 np0005535469 nova_compute[254092]: 2025-11-25 17:07:07.003 254096 WARNING nova.compute.manager [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received unexpected event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for instance with vm_state active and task_state None.
Nov 25 12:07:08 np0005535469 nova_compute[254092]: 2025-11-25 17:07:08.177 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 12:07:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:07:10 np0005535469 nova_compute[254092]: 2025-11-25 17:07:10.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 25 12:07:11 np0005535469 nova_compute[254092]: 2025-11-25 17:07:11.117 254096 DEBUG nova.compute.manager [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:11 np0005535469 nova_compute[254092]: 2025-11-25 17:07:11.118 254096 DEBUG nova.compute.manager [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing instance network info cache due to event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:07:11 np0005535469 nova_compute[254092]: 2025-11-25 17:07:11.118 254096 DEBUG oslo_concurrency.lockutils [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:07:11 np0005535469 nova_compute[254092]: 2025-11-25 17:07:11.119 254096 DEBUG oslo_concurrency.lockutils [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:07:11 np0005535469 nova_compute[254092]: 2025-11-25 17:07:11.119 254096 DEBUG nova.network.neutron [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:07:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 815 KiB/s wr, 155 op/s
Nov 25 12:07:13 np0005535469 nova_compute[254092]: 2025-11-25 17:07:13.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:13.644 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.881680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433881706, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 255, "total_data_size": 526218, "memory_usage": 536584, "flush_reason": "Manual Compaction"}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433887991, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 521526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51983, "largest_seqno": 52524, "table_properties": {"data_size": 518529, "index_size": 969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7082, "raw_average_key_size": 18, "raw_value_size": 512431, "raw_average_value_size": 1355, "num_data_blocks": 43, "num_entries": 378, "num_filter_entries": 378, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090401, "oldest_key_time": 1764090401, "file_creation_time": 1764090433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 6373 microseconds, and 2439 cpu microseconds.
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.888045) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 521526 bytes OK
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.888071) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.889848) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.889868) EVENT_LOG_v1 {"time_micros": 1764090433889862, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.889888) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 523105, prev total WAL file size 523105, number of live WAL files 2.
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.890434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303132' seq:72057594037927935, type:22 .. '6C6F676D0032323633' seq:0, type:0; will stop at (end)
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(509KB)], [116(10070KB)]
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433890474, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10833401, "oldest_snapshot_seqno": -1}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7322 keys, 10705590 bytes, temperature: kUnknown
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433959000, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10705590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10655767, "index_size": 30383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 191526, "raw_average_key_size": 26, "raw_value_size": 10523882, "raw_average_value_size": 1437, "num_data_blocks": 1190, "num_entries": 7322, "num_filter_entries": 7322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.960523) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10705590 bytes
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.961979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.9 rd, 156.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.8 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(41.3) write-amplify(20.5) OK, records in: 7844, records dropped: 522 output_compression: NoCompression
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.962015) EVENT_LOG_v1 {"time_micros": 1764090433962000, "job": 70, "event": "compaction_finished", "compaction_time_micros": 68600, "compaction_time_cpu_micros": 30151, "output_level": 6, "num_output_files": 1, "total_output_size": 10705590, "num_input_records": 7844, "num_output_records": 7322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433962316, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433965081, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.890340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:07:13 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:07:14 np0005535469 nova_compute[254092]: 2025-11-25 17:07:14.209 254096 DEBUG nova.network.neutron [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated VIF entry in instance network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:07:14 np0005535469 nova_compute[254092]: 2025-11-25 17:07:14.209 254096 DEBUG nova.network.neutron [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:07:14 np0005535469 nova_compute[254092]: 2025-11-25 17:07:14.240 254096 DEBUG oslo_concurrency.lockutils [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:07:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.935 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.935 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.950 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.970 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090420.966844, c64717b5-8862-4f84-989e-9f21bdc37759 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.971 254096 INFO nova.compute.manager [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Stopped (Lifecycle Event)
Nov 25 12:07:15 np0005535469 nova_compute[254092]: 2025-11-25 17:07:15.998 254096 DEBUG nova.compute.manager [None req-48858c16-7d19-4f27-bf65-98c03332c5f9 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.024 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.025 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.033 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.033 254096 INFO nova.compute.claims [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.170 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 12:07:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254771916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.630 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.637 254096 DEBUG nova.compute.provider_tree [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.651 254096 DEBUG nova.scheduler.client.report [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.683 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.684 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.913 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.914 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.938 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:07:16 np0005535469 nova_compute[254092]: 2025-11-25 17:07:16.954 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.039 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.040 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.040 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Creating image(s)#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.072 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.098 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.132 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.140 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.222 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.224 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.224 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.225 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.256 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.262 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c787d91-7197-42cc-9ee6-870806f4904b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.372 254096 DEBUG nova.policy [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.534 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c787d91-7197-42cc-9ee6-870806f4904b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.596 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.681 254096 DEBUG nova.objects.instance [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c787d91-7197-42cc-9ee6-870806f4904b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.695 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.695 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Ensure instance console log exists: /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.695 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.696 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:17 np0005535469 nova_compute[254092]: 2025-11-25 17:07:17.696 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:18 np0005535469 nova_compute[254092]: 2025-11-25 17:07:18.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 25 12:07:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:18.595 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:18.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:07:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:18.597 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:18 np0005535469 nova_compute[254092]: 2025-11-25 17:07:18.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:18 np0005535469 podman[392980]: 2025-11-25 17:07:18.676781867 +0000 UTC m=+0.098320918 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:07:18 np0005535469 podman[392979]: 2025-11-25 17:07:18.714217643 +0000 UTC m=+0.128005541 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 25 12:07:18 np0005535469 podman[392981]: 2025-11-25 17:07:18.749564243 +0000 UTC m=+0.154775696 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:07:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:18 np0005535469 nova_compute[254092]: 2025-11-25 17:07:18.937 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Successfully updated port: c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:07:18 np0005535469 nova_compute[254092]: 2025-11-25 17:07:18.947 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:07:18 np0005535469 nova_compute[254092]: 2025-11-25 17:07:18.947 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:18 np0005535469 nova_compute[254092]: 2025-11-25 17:07:18.947 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:07:19 np0005535469 nova_compute[254092]: 2025-11-25 17:07:19.009 254096 DEBUG nova.compute.manager [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:19 np0005535469 nova_compute[254092]: 2025-11-25 17:07:19.010 254096 DEBUG nova.compute.manager [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Refreshing instance network info cache due to event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:07:19 np0005535469 nova_compute[254092]: 2025-11-25 17:07:19.010 254096 DEBUG oslo_concurrency.lockutils [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:07:19 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:19Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:40:a3 10.100.0.6
Nov 25 12:07:19 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:19Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:40:a3 10.100.0.6
Nov 25 12:07:19 np0005535469 nova_compute[254092]: 2025-11-25 17:07:19.605 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:07:20 np0005535469 nova_compute[254092]: 2025-11-25 17:07:20.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 126 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 102 op/s
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.012 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.031 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.031 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance network_info: |[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.032 254096 DEBUG oslo_concurrency.lockutils [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.032 254096 DEBUG nova.network.neutron [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Refreshing network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.037 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start _get_guest_xml network_info=[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.044 254096 WARNING nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.059 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.060 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.064 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.065 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.066 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.066 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.067 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.067 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.068 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.068 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.069 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.069 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.070 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.070 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.071 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.071 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.076 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/798358495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.543 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.565 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:21 np0005535469 nova_compute[254092]: 2025-11-25 17:07:21.569 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376944058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.046 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.048 254096 DEBUG nova.virt.libvirt.vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1986324305',display_name='tempest-TestNetworkBasicOps-server-1986324305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1986324305',id=127,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKrxKz+ClDmDPtttl68dpUGqC2hAC/9rCcixCbug4R50S5nwSU3HKhq3xj3GQLRg8Ve4nqve6H8xXdBSp8jAZjPjBXa2nszRROcY53aqbkyt1QUaGjq4KGsGK4a3amVOFw==',key_name='tempest-TestNetworkBasicOps-929152585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-14xdfzgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:16Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=2c787d91-7197-42cc-9ee6-870806f4904b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.048 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.049 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.050 254096 DEBUG nova.objects.instance [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c787d91-7197-42cc-9ee6-870806f4904b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.067 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <uuid>2c787d91-7197-42cc-9ee6-870806f4904b</uuid>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <name>instance-0000007f</name>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1986324305</nova:name>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:07:21</nova:creationTime>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <nova:port uuid="c7eaeb08-d94a-4ecb-a87f-459a8d848a74">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <entry name="serial">2c787d91-7197-42cc-9ee6-870806f4904b</entry>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <entry name="uuid">2c787d91-7197-42cc-9ee6-870806f4904b</entry>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2c787d91-7197-42cc-9ee6-870806f4904b_disk">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2c787d91-7197-42cc-9ee6-870806f4904b_disk.config">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:51:b5:45"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <target dev="tapc7eaeb08-d9"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/console.log" append="off"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:07:22 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:07:22 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:07:22 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:07:22 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.068 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Preparing to wait for external event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.068 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.069 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.069 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.070 254096 DEBUG nova.virt.libvirt.vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1986324305',display_name='tempest-TestNetworkBasicOps-server-1986324305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1986324305',id=127,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKrxKz+ClDmDPtttl68dpUGqC2hAC/9rCcixCbug4R50S5nwSU3HKhq3xj3GQLRg8Ve4nqve6H8xXdBSp8jAZjPjBXa2nszRROcY53aqbkyt1QUaGjq4KGsGK4a3amVOFw==',key_name='tempest-TestNetworkBasicOps-929152585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-14xdfzgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:16Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=2c787d91-7197-42cc-9ee6-870806f4904b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.071 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.071 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.072 254096 DEBUG os_vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.073 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.074 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.078 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7eaeb08-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.078 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7eaeb08-d9, col_values=(('external_ids', {'iface-id': 'c7eaeb08-d94a-4ecb-a87f-459a8d848a74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:b5:45', 'vm-uuid': '2c787d91-7197-42cc-9ee6-870806f4904b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:22 np0005535469 NetworkManager[48891]: <info>  [1764090442.1191] manager: (tapc7eaeb08-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.126 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.127 254096 INFO os_vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.178 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.178 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.179 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:51:b5:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.179 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Using config drive#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.199 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 120 op/s
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.801 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Creating config drive at /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.815 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuood24s_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.862 254096 DEBUG nova.network.neutron [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updated VIF entry in instance network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.863 254096 DEBUG nova.network.neutron [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.876 254096 DEBUG oslo_concurrency.lockutils [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.970 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuood24s_" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.995 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:22 np0005535469 nova_compute[254092]: 2025-11-25 17:07:22.998 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.153 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.154 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deleting local config drive /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config because it was imported into RBD.#033[00m
Nov 25 12:07:23 np0005535469 kernel: tapc7eaeb08-d9: entered promiscuous mode
Nov 25 12:07:23 np0005535469 NetworkManager[48891]: <info>  [1764090443.2018] manager: (tapc7eaeb08-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Nov 25 12:07:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:23Z|01306|binding|INFO|Claiming lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for this chassis.
Nov 25 12:07:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:23Z|01307|binding|INFO|c7eaeb08-d94a-4ecb-a87f-459a8d848a74: Claiming fa:16:3e:51:b5:45 10.100.0.7
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.212 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2c787d91-7197-42cc-9ee6-870806f4904b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '7', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.215 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 bound to our chassis#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.217 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe8ef6db-e551-4904-a3ea-4af9320e49b5#033[00m
Nov 25 12:07:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:23Z|01308|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 ovn-installed in OVS
Nov 25 12:07:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:23Z|01309|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 up in Southbound
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.232 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84b0eb04-0989-4b22-8cc1-f3b6471858e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.234 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe8ef6db-e1 in ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.236 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe8ef6db-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d18c1ad-9913-4bbd-972e-5d634fe8c6a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.237 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcf1fea-bbba-4df7-9dda-e34bb41f8768]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 systemd-machined[216343]: New machine qemu-159-instance-0000007f.
Nov 25 12:07:23 np0005535469 systemd-udevd[393182]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.249 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[08a85ee2-cfdd-4699-8cee-1b50671d0f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 systemd[1]: Started Virtual Machine qemu-159-instance-0000007f.
Nov 25 12:07:23 np0005535469 NetworkManager[48891]: <info>  [1764090443.2573] device (tapc7eaeb08-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:07:23 np0005535469 NetworkManager[48891]: <info>  [1764090443.2585] device (tapc7eaeb08-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.271 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[792cc5c4-ae48-4db1-89bc-7927dcb20ceb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.298 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4330e8c5-6cc8-443f-b48e-66f37cfa80e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.303 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87bae3b5-4b1f-4a4b-be91-44f5f6f1f5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 systemd-udevd[393185]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:07:23 np0005535469 NetworkManager[48891]: <info>  [1764090443.3051] manager: (tapfe8ef6db-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/544)
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.340 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a740a8bc-1a90-4c5a-aade-9ad7ffde5182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.343 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d56b5a48-d26a-48bd-8dcd-e0cba0960cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 NetworkManager[48891]: <info>  [1764090443.3675] device (tapfe8ef6db-e0): carrier: link connected
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.374 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d9031fcd-13b8-41af-b98a-2aca7970edaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.390 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d4310ae3-8fd8-4d8d-be31-0c67f67fbbcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690095, 'reachable_time': 29700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393213, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.405 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d728b746-6e82-41cf-88e6-5aab643f78c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:36f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690095, 'tstamp': 690095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393214, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.421 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4cd10f-e997-420d-b0d9-6a20dd5a1d13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690095, 'reachable_time': 29700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393215, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.456 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e17cb5c-33f2-47c7-839e-17a4f691a4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.517 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d247288f-7c86-4186-829b-6cf837da5244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.518 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.519 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.519 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe8ef6db-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 NetworkManager[48891]: <info>  [1764090443.5217] manager: (tapfe8ef6db-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Nov 25 12:07:23 np0005535469 kernel: tapfe8ef6db-e0: entered promiscuous mode
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.525 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.526 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe8ef6db-e0, col_values=(('external_ids', {'iface-id': 'abb62292-a5ed-40d9-8b98-1dcedecc4b03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:23Z|01310|binding|INFO|Releasing lport abb62292-a5ed-40d9-8b98-1dcedecc4b03 from this chassis (sb_readonly=0)
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.545 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8bf51f-010a-48e8-bb3e-5467f6094e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.547 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:07:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.547 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'env', 'PROCESS_TAG=haproxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe8ef6db-e551-4904-a3ea-4af9320e49b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.750 254096 DEBUG nova.compute.manager [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.750 254096 DEBUG oslo_concurrency.lockutils [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.751 254096 DEBUG oslo_concurrency.lockutils [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.751 254096 DEBUG oslo_concurrency.lockutils [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.752 254096 DEBUG nova.compute.manager [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Processing event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.770 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090443.768909, 2c787d91-7197-42cc-9ee6-870806f4904b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.770 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Started (Lifecycle Event)#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.773 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.778 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.781 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance spawned successfully.#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.782 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.795 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.801 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.807 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.807 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.807 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.808 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.808 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.808 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.825 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.826 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090443.7691536, 2c787d91-7197-42cc-9ee6-870806f4904b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.826 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.848 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.852 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090443.7775419, 2c787d91-7197-42cc-9ee6-870806f4904b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.852 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.875 254096 INFO nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 6.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.876 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.877 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.883 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.917 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:07:23 np0005535469 podman[393289]: 2025-11-25 17:07:23.932045299 +0000 UTC m=+0.061863278 container create 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.963 254096 INFO nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 7.96 seconds to build instance.#033[00m
Nov 25 12:07:23 np0005535469 systemd[1]: Started libpod-conmon-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf.scope.
Nov 25 12:07:23 np0005535469 nova_compute[254092]: 2025-11-25 17:07:23.995 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:23 np0005535469 podman[393289]: 2025-11-25 17:07:23.905972074 +0000 UTC m=+0.035790083 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:07:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:07:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791ef2e7b3ae2735d694805a682a8604ad7328a1136199e68bfe2d8104709ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:24 np0005535469 podman[393289]: 2025-11-25 17:07:24.025224884 +0000 UTC m=+0.155042933 container init 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 12:07:24 np0005535469 podman[393289]: 2025-11-25 17:07:24.03016355 +0000 UTC m=+0.159981569 container start 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 12:07:24 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : New worker (393310) forked
Nov 25 12:07:24 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : Loading success.
Nov 25 12:07:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.863 254096 DEBUG nova.compute.manager [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG oslo_concurrency.lockutils [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG oslo_concurrency.lockutils [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG oslo_concurrency.lockutils [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG nova.compute.manager [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:07:25 np0005535469 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 WARNING nova.compute.manager [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.413 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.413 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.414 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.414 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.414 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.415 254096 INFO nova.compute.manager [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Terminating instance#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.415 254096 DEBUG nova.compute.manager [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:07:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 167 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 12:07:26 np0005535469 kernel: tapc7eaeb08-d9 (unregistering): left promiscuous mode
Nov 25 12:07:26 np0005535469 NetworkManager[48891]: <info>  [1764090446.4617] device (tapc7eaeb08-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:07:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:26Z|01311|binding|INFO|Releasing lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 from this chassis (sb_readonly=0)
Nov 25 12:07:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:26Z|01312|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 down in Southbound
Nov 25 12:07:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:26Z|01313|binding|INFO|Removing iface tapc7eaeb08-d9 ovn-installed in OVS
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.527 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.535 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2c787d91-7197-42cc-9ee6-870806f4904b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '9', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.537 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 unbound from our chassis#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.538 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe8ef6db-e551-4904-a3ea-4af9320e49b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.539 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fa72d3-db56-4eae-a6e6-c6ecbb979012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.539 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace which is not needed anymore#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 25 12:07:26 np0005535469 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Consumed 3.247s CPU time.
Nov 25 12:07:26 np0005535469 systemd-machined[216343]: Machine qemu-159-instance-0000007f terminated.
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.650 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance destroyed successfully.#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.651 254096 DEBUG nova.objects.instance [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 2c787d91-7197-42cc-9ee6-870806f4904b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:26 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : haproxy version is 2.8.14-c23fe91
Nov 25 12:07:26 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : path to executable is /usr/sbin/haproxy
Nov 25 12:07:26 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [WARNING]  (393308) : Exiting Master process...
Nov 25 12:07:26 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [WARNING]  (393308) : Exiting Master process...
Nov 25 12:07:26 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [ALERT]    (393308) : Current worker (393310) exited with code 143 (Terminated)
Nov 25 12:07:26 np0005535469 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [WARNING]  (393308) : All workers exited. Exiting... (0)
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.664 254096 DEBUG nova.virt.libvirt.vif [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1986324305',display_name='tempest-TestNetworkBasicOps-server-1986324305',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1986324305',id=127,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKrxKz+ClDmDPtttl68dpUGqC2hAC/9rCcixCbug4R50S5nwSU3HKhq3xj3GQLRg8Ve4nqve6H8xXdBSp8jAZjPjBXa2nszRROcY53aqbkyt1QUaGjq4KGsGK4a3amVOFw==',key_name='tempest-TestNetworkBasicOps-929152585',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:07:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-14xdfzgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:07:23Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=2c787d91-7197-42cc-9ee6-870806f4904b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.665 254096 DEBUG nova.network.os_vif_util [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:26 np0005535469 systemd[1]: libpod-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf.scope: Deactivated successfully.
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.665 254096 DEBUG nova.network.os_vif_util [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.666 254096 DEBUG os_vif [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7eaeb08-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.672 254096 INFO os_vif [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')#033[00m
Nov 25 12:07:26 np0005535469 podman[393342]: 2025-11-25 17:07:26.672932776 +0000 UTC m=+0.048741678 container died 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 12:07:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf-userdata-shm.mount: Deactivated successfully.
Nov 25 12:07:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b791ef2e7b3ae2735d694805a682a8604ad7328a1136199e68bfe2d8104709ce-merged.mount: Deactivated successfully.
Nov 25 12:07:26 np0005535469 podman[393342]: 2025-11-25 17:07:26.719046811 +0000 UTC m=+0.094855713 container cleanup 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:07:26 np0005535469 systemd[1]: libpod-conmon-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf.scope: Deactivated successfully.
Nov 25 12:07:26 np0005535469 podman[393399]: 2025-11-25 17:07:26.780096365 +0000 UTC m=+0.042762994 container remove 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.784 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25f6c480-9609-4151-9e90-9374b3b81b28]: (4, ('Tue Nov 25 05:07:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf)\n0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf\nTue Nov 25 05:07:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf)\n0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.786 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65dda74e-96b3-4d06-80f6-53c2cb9816f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 kernel: tapfe8ef6db-e0: left promiscuous mode
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a1e698-5770-4c80-b8bc-50955926b60e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 nova_compute[254092]: 2025-11-25 17:07:26.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.810 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf76c340-a01c-49e7-8b0f-ecaa176b61b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.810 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d06b87b8-6263-4231-86e3-da3816d5ebd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4971bf7-b197-4788-8007-1945ac8c9fe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690088, 'reachable_time': 28420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393414, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.827 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:07:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.828 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3c28f3-aaa3-4d6b-b255-e72e015a09e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:26 np0005535469 systemd[1]: run-netns-ovnmeta\x2dfe8ef6db\x2de551\x2d4904\x2da3ea\x2d4af9320e49b5.mount: Deactivated successfully.
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.004 254096 INFO nova.virt.libvirt.driver [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deleting instance files /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b_del#033[00m
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.005 254096 INFO nova.virt.libvirt.driver [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deletion of /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b_del complete#033[00m
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.071 254096 INFO nova.compute.manager [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.072 254096 DEBUG oslo.service.loopingcall [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.072 254096 DEBUG nova.compute.manager [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.072 254096 DEBUG nova.network.neutron [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.963 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.963 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.964 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.964 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.964 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] No waiting events found dispatching network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.965 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.965 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.965 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.966 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.966 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.966 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:07:27 np0005535469 nova_compute[254092]: 2025-11-25 17:07:27.967 254096 WARNING nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state deleting.
Nov 25 12:07:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 167 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 12:07:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:29 np0005535469 nova_compute[254092]: 2025-11-25 17:07:29.504 254096 DEBUG nova.network.neutron [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:07:29 np0005535469 nova_compute[254092]: 2025-11-25 17:07:29.542 254096 INFO nova.compute.manager [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 2.47 seconds to deallocate network for instance.
Nov 25 12:07:29 np0005535469 nova_compute[254092]: 2025-11-25 17:07:29.595 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:29 np0005535469 nova_compute[254092]: 2025-11-25 17:07:29.595 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:29 np0005535469 nova_compute[254092]: 2025-11-25 17:07:29.769 254096 DEBUG oslo_concurrency.processutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152362480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.200 254096 DEBUG oslo_concurrency.processutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.208 254096 DEBUG nova.compute.provider_tree [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.230 254096 DEBUG nova.scheduler.client.report [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.265 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.296 254096 INFO nova.scheduler.client.report [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 2c787d91-7197-42cc-9ee6-870806f4904b
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.376 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:30 np0005535469 nova_compute[254092]: 2025-11-25 17:07:30.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 144 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 25 12:07:31 np0005535469 nova_compute[254092]: 2025-11-25 17:07:31.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:31 np0005535469 nova_compute[254092]: 2025-11-25 17:07:31.906 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:31 np0005535469 nova_compute[254092]: 2025-11-25 17:07:31.907 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:31 np0005535469 nova_compute[254092]: 2025-11-25 17:07:31.926 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:07:31 np0005535469 nova_compute[254092]: 2025-11-25 17:07:31.998 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:31 np0005535469 nova_compute[254092]: 2025-11-25 17:07:31.998 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.005 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.006 254096 INFO nova.compute.claims [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.138 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 155 op/s
Nov 25 12:07:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147495736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.644 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.652 254096 DEBUG nova.compute.provider_tree [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.675 254096 DEBUG nova.scheduler.client.report [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.703 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.704 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.775 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.776 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.801 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.831 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.960 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.962 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:07:32 np0005535469 nova_compute[254092]: 2025-11-25 17:07:32.963 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Creating image(s)
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.005 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.043 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.075 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.079 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.116 254096 DEBUG nova.policy [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.153 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.154 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.155 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.155 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.193 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.199 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 600da46c-eccb-4422-9531-4fa91fdda153_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.528 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 600da46c-eccb-4422-9531-4fa91fdda153_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.615 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.747 254096 DEBUG nova.objects.instance [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid 600da46c-eccb-4422-9531-4fa91fdda153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.764 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Ensure instance console log exists: /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:33 np0005535469 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:34 np0005535469 nova_compute[254092]: 2025-11-25 17:07:34.118 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Successfully created port: 18b66f97-4edf-40c8-b35b-66005e28732c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 12:07:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 100 op/s
Nov 25 12:07:34 np0005535469 nova_compute[254092]: 2025-11-25 17:07:34.501 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.194 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Successfully updated port: 18b66f97-4edf-40c8-b35b-66005e28732c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.207 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.208 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.209 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.300 254096 DEBUG nova.compute.manager [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.301 254096 DEBUG nova.compute.manager [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing instance network info cache due to event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.301 254096 DEBUG oslo_concurrency.lockutils [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.382 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:07:35 np0005535469 nova_compute[254092]: 2025-11-25 17:07:35.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 12:07:36 np0005535469 nova_compute[254092]: 2025-11-25 17:07:36.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:36 np0005535469 nova_compute[254092]: 2025-11-25 17:07:36.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:36Z|01314|binding|INFO|Releasing lport 50aec7ed-4f15-4d72-87dd-48c327de28ce from this chassis (sb_readonly=0)
Nov 25 12:07:36 np0005535469 nova_compute[254092]: 2025-11-25 17:07:36.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:36 np0005535469 nova_compute[254092]: 2025-11-25 17:07:36.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.217 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.246 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.247 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance network_info: |[{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.248 254096 DEBUG oslo_concurrency.lockutils [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.249 254096 DEBUG nova.network.neutron [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.254 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start _get_guest_xml network_info=[{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.262 254096 WARNING nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.273 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.274 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.280 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.282 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.282 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.283 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.284 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.284 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.284 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.285 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.285 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.285 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.286 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.286 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.286 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.287 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.293 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:07:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379043969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.762 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.784 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:37 np0005535469 nova_compute[254092]: 2025-11-25 17:07:37.789 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765316076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.216 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.218 254096 DEBUG nova.virt.libvirt.vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=128,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-vyqdrhti',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:32Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=600da46c-eccb-4422-9531-4fa91fdda153,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.218 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.219 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.221 254096 DEBUG nova.objects.instance [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 600da46c-eccb-4422-9531-4fa91fdda153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.236 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <uuid>600da46c-eccb-4422-9531-4fa91fdda153</uuid>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <name>instance-00000080</name>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294</nova:name>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:07:37</nova:creationTime>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <nova:port uuid="18b66f97-4edf-40c8-b35b-66005e28732c">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <entry name="serial">600da46c-eccb-4422-9531-4fa91fdda153</entry>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <entry name="uuid">600da46c-eccb-4422-9531-4fa91fdda153</entry>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/600da46c-eccb-4422-9531-4fa91fdda153_disk">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/600da46c-eccb-4422-9531-4fa91fdda153_disk.config">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:9b:97:5e"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <target dev="tap18b66f97-4e"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/console.log" append="off"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:07:38 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:07:38 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:07:38 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:07:38 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.237 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Preparing to wait for external event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.237 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.238 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.239 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.240 254096 DEBUG nova.virt.libvirt.vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=128,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-vyqdrhti',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:32Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=600da46c-eccb-4422-9531-4fa91fdda153,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.241 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.242 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.242 254096 DEBUG os_vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.244 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.245 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.250 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18b66f97-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.250 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18b66f97-4e, col_values=(('external_ids', {'iface-id': '18b66f97-4edf-40c8-b35b-66005e28732c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:97:5e', 'vm-uuid': '600da46c-eccb-4422-9531-4fa91fdda153'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:38 np0005535469 NetworkManager[48891]: <info>  [1764090458.2536] manager: (tap18b66f97-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.262 254096 INFO os_vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e')#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.309 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.310 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.310 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:9b:97:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.310 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Using config drive#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.335 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.866 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Creating config drive at /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config#033[00m
Nov 25 12:07:38 np0005535469 nova_compute[254092]: 2025-11-25 17:07:38.871 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6gpoo_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.008 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6gpoo_u" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.031 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.034 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config 600da46c-eccb-4422-9531-4fa91fdda153_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.186 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config 600da46c-eccb-4422-9531-4fa91fdda153_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.187 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deleting local config drive /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config because it was imported into RBD.#033[00m
Nov 25 12:07:39 np0005535469 kernel: tap18b66f97-4e: entered promiscuous mode
Nov 25 12:07:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:39Z|01315|binding|INFO|Claiming lport 18b66f97-4edf-40c8-b35b-66005e28732c for this chassis.
Nov 25 12:07:39 np0005535469 NetworkManager[48891]: <info>  [1764090459.2415] manager: (tap18b66f97-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Nov 25 12:07:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:39Z|01316|binding|INFO|18b66f97-4edf-40c8-b35b-66005e28732c: Claiming fa:16:3e:9b:97:5e 10.100.0.14
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.250 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:97:5e 10.100.0.14'], port_security=['fa:16:3e:9b:97:5e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '600da46c-eccb-4422-9531-4fa91fdda153', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '779da3fb-8b66-4cf7-a59e-bc7311564ace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=18b66f97-4edf-40c8-b35b-66005e28732c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.251 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 18b66f97-4edf-40c8-b35b-66005e28732c in datapath ef2caff8-43ec-4364-a979-521405023410 bound to our chassis#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.253 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2caff8-43ec-4364-a979-521405023410#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:39Z|01317|binding|INFO|Setting lport 18b66f97-4edf-40c8-b35b-66005e28732c ovn-installed in OVS
Nov 25 12:07:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:39Z|01318|binding|INFO|Setting lport 18b66f97-4edf-40c8-b35b-66005e28732c up in Southbound
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:39 np0005535469 systemd-udevd[393762]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.269 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79efaf68-a367-43a8-8b83-13575e650ca6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:39 np0005535469 NetworkManager[48891]: <info>  [1764090459.2819] device (tap18b66f97-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:07:39 np0005535469 NetworkManager[48891]: <info>  [1764090459.2825] device (tap18b66f97-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:07:39 np0005535469 systemd-machined[216343]: New machine qemu-160-instance-00000080.
Nov 25 12:07:39 np0005535469 systemd[1]: Started Virtual Machine qemu-160-instance-00000080.
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.306 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da418dc7-ce3d-4c6e-9407-8f2f25fb4a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9e16d365-baea-45f6-a05d-dcceec329e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.337 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[482c02f3-eb33-46e0-82b7-b983933e6e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.353 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[677ff912-ccb5-48e5-bc58-b340966d9205]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393775, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.368 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53e76928-cf1e-4e31-bb16-9131d2e55473]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688224, 'tstamp': 688224}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393777, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688228, 'tstamp': 688228}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393777, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.371 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.374 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2caff8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.374 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2caff8-40, col_values=(('external_ids', {'iface-id': '50aec7ed-4f15-4d72-87dd-48c327de28ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.383 254096 DEBUG nova.network.neutron [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updated VIF entry in instance network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.384 254096 DEBUG nova.network.neutron [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.399 254096 DEBUG oslo_concurrency.lockutils [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.506 254096 DEBUG nova.compute.manager [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG oslo_concurrency.lockutils [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG oslo_concurrency.lockutils [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG oslo_concurrency.lockutils [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:39 np0005535469 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG nova.compute.manager [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Processing event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.058 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090460.0574887, 600da46c-eccb-4422-9531-4fa91fdda153 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.058 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Started (Lifecycle Event)#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.060 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.063 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.066 254096 INFO nova.virt.libvirt.driver [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance spawned successfully.#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.066 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.089 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.095 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.095 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.096 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.096 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.097 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.097 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.103 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.137 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.138 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090460.0577555, 600da46c-eccb-4422-9531-4fa91fdda153 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.138 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:07:40
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', 'vms', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.166 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.169 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090460.0635598, 600da46c-eccb-4422-9531-4fa91fdda153 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.169 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.180 254096 INFO nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 7.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.180 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.205 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.208 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.229 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.244 254096 INFO nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 8.26 seconds to build instance.#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.261 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:07:40 np0005535469 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 12:07:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838234383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:40 np0005535469 nova_compute[254092]: 2025-11-25 17:07:40.979 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.077 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.077 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.278 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.92197799682617GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance bd1d0296-ae28-4eac-9f38-80e6ca17dbff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 600da46c-eccb-4422-9531-4fa91fdda153 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.412 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.591 254096 DEBUG nova.compute.manager [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.592 254096 DEBUG oslo_concurrency.lockutils [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.593 254096 DEBUG oslo_concurrency.lockutils [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.593 254096 DEBUG oslo_concurrency.lockutils [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.593 254096 DEBUG nova.compute.manager [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] No waiting events found dispatching network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.594 254096 WARNING nova.compute.manager [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received unexpected event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c for instance with vm_state active and task_state None.#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.650 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090446.6488988, 2c787d91-7197-42cc-9ee6-870806f4904b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.650 254096 INFO nova.compute.manager [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.667 254096 DEBUG nova.compute.manager [None req-02f18b01-fbef-49b6-bd2a-0749b4566b14 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:07:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638363428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.854 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.860 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.877 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.903 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:07:41 np0005535469 nova_compute[254092]: 2025-11-25 17:07:41.903 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 25 12:07:42 np0005535469 nova_compute[254092]: 2025-11-25 17:07:42.904 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:42 np0005535469 nova_compute[254092]: 2025-11-25 17:07:42.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:07:42 np0005535469 nova_compute[254092]: 2025-11-25 17:07:42.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:07:43 np0005535469 nova_compute[254092]: 2025-11-25 17:07:43.154 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:07:43 np0005535469 nova_compute[254092]: 2025-11-25 17:07:43.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:43 np0005535469 nova_compute[254092]: 2025-11-25 17:07:43.155 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:07:43 np0005535469 nova_compute[254092]: 2025-11-25 17:07:43.156 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:43 np0005535469 nova_compute[254092]: 2025-11-25 17:07:43.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Nov 25 12:07:44 np0005535469 nova_compute[254092]: 2025-11-25 17:07:44.902 254096 DEBUG nova.compute.manager [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:07:44 np0005535469 nova_compute[254092]: 2025-11-25 17:07:44.902 254096 DEBUG nova.compute.manager [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing instance network info cache due to event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:07:44 np0005535469 nova_compute[254092]: 2025-11-25 17:07:44.903 254096 DEBUG oslo_concurrency.lockutils [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:07:44 np0005535469 nova_compute[254092]: 2025-11-25 17:07:44.903 254096 DEBUG oslo_concurrency.lockutils [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:44 np0005535469 nova_compute[254092]: 2025-11-25 17:07:44.903 254096 DEBUG nova.network.neutron [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:07:45 np0005535469 nova_compute[254092]: 2025-11-25 17:07:45.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:45 np0005535469 nova_compute[254092]: 2025-11-25 17:07:45.666 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:45 np0005535469 nova_compute[254092]: 2025-11-25 17:07:45.690 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:45 np0005535469 nova_compute[254092]: 2025-11-25 17:07:45.691 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:07:45 np0005535469 nova_compute[254092]: 2025-11-25 17:07:45.692 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 12:07:46 np0005535469 nova_compute[254092]: 2025-11-25 17:07:46.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:46 np0005535469 nova_compute[254092]: 2025-11-25 17:07:46.880 254096 DEBUG nova.network.neutron [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updated VIF entry in instance network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:07:46 np0005535469 nova_compute[254092]: 2025-11-25 17:07:46.881 254096 DEBUG nova.network.neutron [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:46 np0005535469 nova_compute[254092]: 2025-11-25 17:07:46.905 254096 DEBUG oslo_concurrency.lockutils [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:47 np0005535469 nova_compute[254092]: 2025-11-25 17:07:47.278 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:07:48 np0005535469 nova_compute[254092]: 2025-11-25 17:07:48.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:07:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:49 np0005535469 podman[393866]: 2025-11-25 17:07:49.699025889 +0000 UTC m=+0.099201993 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 12:07:49 np0005535469 podman[393867]: 2025-11-25 17:07:49.703534723 +0000 UTC m=+0.098155044 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:07:49 np0005535469 podman[393865]: 2025-11-25 17:07:49.717315241 +0000 UTC m=+0.116957899 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 12:07:50 np0005535469 nova_compute[254092]: 2025-11-25 17:07:50.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001108131674480693 of space, bias 1.0, pg target 0.3324395023442079 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:07:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:07:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 71 op/s
Nov 25 12:07:53 np0005535469 nova_compute[254092]: 2025-11-25 17:07:53.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:53Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:97:5e 10.100.0.14
Nov 25 12:07:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:53Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:97:5e 10.100.0.14
Nov 25 12:07:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.139 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.140 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.153 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.216 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.217 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.224 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.225 254096 INFO nova.compute.claims [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.342 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.7 KiB/s wr, 53 op/s
Nov 25 12:07:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:07:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820752934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.833 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.840 254096 DEBUG nova.compute.provider_tree [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.857 254096 DEBUG nova.scheduler.client.report [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.880 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.881 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.931 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.932 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.958 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:07:54 np0005535469 nova_compute[254092]: 2025-11-25 17:07:54.975 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.073 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.074 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.075 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Creating image(s)
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.098 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.119 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.138 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.141 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.227 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.230 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.231 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.232 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.270 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.276 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:07:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:07:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236060575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:07:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:07:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236060575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.587 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.667 254096 DEBUG nova.policy [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.674 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.766 254096 DEBUG nova.objects.instance [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.779 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.780 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Ensure instance console log exists: /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.780 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.781 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:07:55 np0005535469 nova_compute[254092]: 2025-11-25 17:07:55.781 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:07:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 243 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 132 op/s
Nov 25 12:07:56 np0005535469 nova_compute[254092]: 2025-11-25 17:07:56.646 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Successfully created port: 1e4bb581-2eb1-4909-a066-11e1096cbffa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:07:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev eb353d93-a044-4788-8e8f-672f0eba129b does not exist
Nov 25 12:07:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c0e10d0e-de65-4c0b-8616-6174ca0dd093 does not exist
Nov 25 12:07:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c3f08a5f-93cd-40f0-9e4e-ff30b8aad232 does not exist
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:07:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.347 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Successfully updated port: 1e4bb581-2eb1-4909-a066-11e1096cbffa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.358 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.359 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.359 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.453 254096 DEBUG nova.compute.manager [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.454 254096 DEBUG nova.compute.manager [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing instance network info cache due to event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.454 254096 DEBUG oslo_concurrency.lockutils [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:07:57 np0005535469 nova_compute[254092]: 2025-11-25 17:07:57.512 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.596937964 +0000 UTC m=+0.065533118 container create 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:07:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:07:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:07:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:07:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:07:57 np0005535469 systemd[1]: Started libpod-conmon-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope.
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.573515112 +0000 UTC m=+0.042110266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:07:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.702744056 +0000 UTC m=+0.171339190 container init 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.713536303 +0000 UTC m=+0.182131427 container start 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.717783258 +0000 UTC m=+0.186378412 container attach 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:07:57 np0005535469 busy_williams[394401]: 167 167
Nov 25 12:07:57 np0005535469 systemd[1]: libpod-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope: Deactivated successfully.
Nov 25 12:07:57 np0005535469 conmon[394401]: conmon 7a4b6e88db36b1fe8fd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope/container/memory.events
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.722494388 +0000 UTC m=+0.191089512 container died 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:07:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-47b2a252ed1e123587282309b290071d0e09b77bab9afa3579684f0261ef8379-merged.mount: Deactivated successfully.
Nov 25 12:07:57 np0005535469 podman[394385]: 2025-11-25 17:07:57.770265588 +0000 UTC m=+0.238860712 container remove 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:07:57 np0005535469 systemd[1]: libpod-conmon-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope: Deactivated successfully.
Nov 25 12:07:58 np0005535469 podman[394423]: 2025-11-25 17:07:58.037746354 +0000 UTC m=+0.058192916 container create a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:07:58 np0005535469 systemd[1]: Started libpod-conmon-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope.
Nov 25 12:07:58 np0005535469 podman[394423]: 2025-11-25 17:07:58.015632378 +0000 UTC m=+0.036078950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:07:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:07:58 np0005535469 podman[394423]: 2025-11-25 17:07:58.158761794 +0000 UTC m=+0.179208426 container init a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:07:58 np0005535469 podman[394423]: 2025-11-25 17:07:58.172293595 +0000 UTC m=+0.192740147 container start a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:07:58 np0005535469 podman[394423]: 2025-11-25 17:07:58.176065129 +0000 UTC m=+0.196511711 container attach a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 243 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 3.8 MiB/s wr, 79 op/s
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.612 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.642 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.643 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance network_info: |[{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.643 254096 DEBUG oslo_concurrency.lockutils [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.644 254096 DEBUG nova.network.neutron [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.646 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start _get_guest_xml network_info=[{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.652 254096 WARNING nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.659 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.660 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.664 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.664 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.665 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.665 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.666 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.666 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.666 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.667 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.667 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.667 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:07:58 np0005535469 nova_compute[254092]: 2025-11-25 17:07:58.671 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:07:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168052522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.104 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.129 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.134 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:07:59 np0005535469 wizardly_wozniak[394440]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:07:59 np0005535469 wizardly_wozniak[394440]: --> relative data size: 1.0
Nov 25 12:07:59 np0005535469 wizardly_wozniak[394440]: --> All data devices are unavailable
Nov 25 12:07:59 np0005535469 systemd[1]: libpod-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope: Deactivated successfully.
Nov 25 12:07:59 np0005535469 systemd[1]: libpod-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope: Consumed 1.009s CPU time.
Nov 25 12:07:59 np0005535469 podman[394520]: 2025-11-25 17:07:59.33078835 +0000 UTC m=+0.043162204 container died a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:07:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4-merged.mount: Deactivated successfully.
Nov 25 12:07:59 np0005535469 podman[394520]: 2025-11-25 17:07:59.394960041 +0000 UTC m=+0.107333885 container remove a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 12:07:59 np0005535469 systemd[1]: libpod-conmon-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope: Deactivated successfully.
Nov 25 12:07:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:07:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266493158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.567 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.570 254096 DEBUG nova.virt.libvirt.vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742985816',display_name='tempest-TestNetworkBasicOps-server-1742985816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742985816',id=129,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00WPYJfplltH8QXqH5VBLBDZhQ795ogiRVA83nhFmClvp98XVxMKrxCuE4fsMnEOta0H4tqzcEHlYCGCoNe9LzeAmLZ6Pp7hI8JV+Hz5Z8Dy6OeDo6E9caHpgF+YsYXg==',key_name='tempest-TestNetworkBasicOps-1671052',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-v513b262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:55Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.570 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.571 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.572 254096 DEBUG nova.objects.instance [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.587 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <uuid>239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e</uuid>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <name>instance-00000081</name>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1742985816</nova:name>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:07:58</nova:creationTime>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <nova:port uuid="1e4bb581-2eb1-4909-a066-11e1096cbffa">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <entry name="serial">239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e</entry>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <entry name="uuid">239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e</entry>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:5d:af:f0"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <target dev="tap1e4bb581-2e"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/console.log" append="off"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:07:59 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:07:59 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:07:59 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:07:59 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.587 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Preparing to wait for external event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG nova.virt.libvirt.vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742985816',display_name='tempest-TestNetworkBasicOps-server-1742985816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742985816',id=129,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00WPYJfplltH8QXqH5VBLBDZhQ795ogiRVA83nhFmClvp98XVxMKrxCuE4fsMnEOta0H4tqzcEHlYCGCoNe9LzeAmLZ6Pp7hI8JV+Hz5Z8Dy6OeDo6E9caHpgF+YsYXg==',key_name='tempest-TestNetworkBasicOps-1671052',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-v513b262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:55Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.589 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.589 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.589 254096 DEBUG os_vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.590 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.591 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.594 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e4bb581-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.594 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e4bb581-2e, col_values=(('external_ids', {'iface-id': '1e4bb581-2eb1-4909-a066-11e1096cbffa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:af:f0', 'vm-uuid': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:59 np0005535469 NetworkManager[48891]: <info>  [1764090479.5975] manager: (tap1e4bb581-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.605 254096 INFO os_vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e')#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.648 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.648 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.648 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:5d:af:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.649 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Using config drive#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.668 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.859 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.862 254096 INFO nova.compute.manager [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Terminating instance#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.863 254096 DEBUG nova.compute.manager [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:07:59 np0005535469 kernel: tap18b66f97-4e (unregistering): left promiscuous mode
Nov 25 12:07:59 np0005535469 NetworkManager[48891]: <info>  [1764090479.9178] device (tap18b66f97-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:07:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:59Z|01319|binding|INFO|Releasing lport 18b66f97-4edf-40c8-b35b-66005e28732c from this chassis (sb_readonly=0)
Nov 25 12:07:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:59Z|01320|binding|INFO|Setting lport 18b66f97-4edf-40c8-b35b-66005e28732c down in Southbound
Nov 25 12:07:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:07:59Z|01321|binding|INFO|Removing iface tap18b66f97-4e ovn-installed in OVS
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.933 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:97:5e 10.100.0.14'], port_security=['fa:16:3e:9b:97:5e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '600da46c-eccb-4422-9531-4fa91fdda153', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fd5575e0-e7b7-4a41-9013-0119bbe9a244', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=18b66f97-4edf-40c8-b35b-66005e28732c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:07:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.934 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 18b66f97-4edf-40c8-b35b-66005e28732c in datapath ef2caff8-43ec-4364-a979-521405023410 unbound from our chassis#033[00m
Nov 25 12:07:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.936 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2caff8-43ec-4364-a979-521405023410#033[00m
Nov 25 12:07:59 np0005535469 nova_compute[254092]: 2025-11-25 17:07:59.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:07:59 np0005535469 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 25 12:07:59 np0005535469 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Consumed 13.650s CPU time.
Nov 25 12:07:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.964 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54f8866e-b238-441d-b004-3d0995df41e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:07:59 np0005535469 systemd-machined[216343]: Machine qemu-160-instance-00000080 terminated.
Nov 25 12:07:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.997 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1216f144-e829-4407-9fe2-8182a90dd464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.000 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[36eccbf4-8503-4531-8d28-e7bf9a336e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.029 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[79bfafd9-597d-47af-ac48-32f6fe5c48ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.031 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Creating config drive at /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.035 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygcj1kuc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fed02793-399a-4b2d-a8d0-87c54972b4cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394713, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.061 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d26df08b-f350-42c6-a2ce-cf17e646c6dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688224, 'tstamp': 688224}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394717, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688228, 'tstamp': 688228}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394717, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.062 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.071 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2caff8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.071 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.072 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2caff8-40, col_values=(('external_ids', {'iface-id': '50aec7ed-4f15-4d72-87dd-48c327de28ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.072 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.098 254096 INFO nova.virt.libvirt.driver [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance destroyed successfully.#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.099 254096 DEBUG nova.objects.instance [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid 600da46c-eccb-4422-9531-4fa91fdda153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.105761957 +0000 UTC m=+0.047019941 container create 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.113 254096 DEBUG nova.virt.libvirt.vif [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=128,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:07:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-vyqdrhti',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:07:40Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=600da46c-eccb-4422-9531-4fa91fdda153,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.113 254096 DEBUG nova.network.os_vif_util [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.114 254096 DEBUG nova.network.os_vif_util [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.114 254096 DEBUG os_vif [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.116 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18b66f97-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.124 254096 INFO os_vif [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e')#033[00m
Nov 25 12:08:00 np0005535469 systemd[1]: Started libpod-conmon-03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52.scope.
Nov 25 12:08:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.180885386 +0000 UTC m=+0.122143390 container init 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.08768476 +0000 UTC m=+0.028942774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.182 254096 DEBUG nova.network.neutron [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updated VIF entry in instance network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.183 254096 DEBUG nova.network.neutron [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.184 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygcj1kuc" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.192532707 +0000 UTC m=+0.133790681 container start 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.196813114 +0000 UTC m=+0.138071118 container attach 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:08:00 np0005535469 sweet_goldberg[394764]: 167 167
Nov 25 12:08:00 np0005535469 systemd[1]: libpod-03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52.scope: Deactivated successfully.
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.205392129 +0000 UTC m=+0.146650123 container died 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.212 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.223 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c9ae562390774e8cb4080234ee850bc8f4843542156650cca254cc75f2bde7c6-merged.mount: Deactivated successfully.
Nov 25 12:08:00 np0005535469 podman[394714]: 2025-11-25 17:08:00.249593971 +0000 UTC m=+0.190851965 container remove 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:08:00 np0005535469 systemd[1]: libpod-conmon-03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52.scope: Deactivated successfully.
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.276 254096 DEBUG nova.compute.manager [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-unplugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.276 254096 DEBUG oslo_concurrency.lockutils [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG oslo_concurrency.lockutils [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG oslo_concurrency.lockutils [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG nova.compute.manager [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] No waiting events found dispatching network-vif-unplugged-18b66f97-4edf-40c8-b35b-66005e28732c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG nova.compute.manager [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-unplugged-18b66f97-4edf-40c8-b35b-66005e28732c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.278 254096 DEBUG oslo_concurrency.lockutils [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.406 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.407 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deleting local config drive /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config because it was imported into RBD.#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.452 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 246 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 12:08:00 np0005535469 podman[394829]: 2025-11-25 17:08:00.480124754 +0000 UTC m=+0.061524448 container create b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:08:00 np0005535469 kernel: tap1e4bb581-2e: entered promiscuous mode
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.516 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 NetworkManager[48891]: <info>  [1764090480.5180] manager: (tap1e4bb581-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Nov 25 12:08:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:00Z|01322|binding|INFO|Claiming lport 1e4bb581-2eb1-4909-a066-11e1096cbffa for this chassis.
Nov 25 12:08:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:00Z|01323|binding|INFO|1e4bb581-2eb1-4909-a066-11e1096cbffa: Claiming fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 12:08:00 np0005535469 systemd-udevd[394690]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.526 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.528 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 bound to our chassis#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.529 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93#033[00m
Nov 25 12:08:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:00Z|01324|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa ovn-installed in OVS
Nov 25 12:08:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:00Z|01325|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa up in Southbound
Nov 25 12:08:00 np0005535469 systemd[1]: Started libpod-conmon-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope.
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 NetworkManager[48891]: <info>  [1764090480.5441] device (tap1e4bb581-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:08:00 np0005535469 NetworkManager[48891]: <info>  [1764090480.5457] device (tap1e4bb581-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:08:00 np0005535469 podman[394829]: 2025-11-25 17:08:00.454627725 +0000 UTC m=+0.036027439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35e1d67d-81e9-4d6c-bc1b-46a5359f2c21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.556 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap829d1d91-41 in ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.559 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap829d1d91-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.559 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6718f14f-de2c-4150-9839-150cea77d442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.560 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65c25884-dbdb-4d75-ab96-6859f2324bfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 systemd-machined[216343]: New machine qemu-161-instance-00000081.
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.572 254096 INFO nova.virt.libvirt.driver [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deleting instance files /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153_del#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.573 254096 INFO nova.virt.libvirt.driver [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deletion of /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153_del complete#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.572 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[409db817-0bf8-40a7-9a73-78797fad8c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:08:00 np0005535469 systemd[1]: Started Virtual Machine qemu-161-instance-00000081.
Nov 25 12:08:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:00 np0005535469 podman[394829]: 2025-11-25 17:08:00.603253501 +0000 UTC m=+0.184653225 container init b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:08:00 np0005535469 podman[394829]: 2025-11-25 17:08:00.61159201 +0000 UTC m=+0.192991684 container start b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06898e71-04e5-44fc-a8b6-b407e186724f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 podman[394829]: 2025-11-25 17:08:00.617353928 +0000 UTC m=+0.198753622 container attach b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.627 254096 INFO nova.compute.manager [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.628 254096 DEBUG oslo.service.loopingcall [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.629 254096 DEBUG nova.compute.manager [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.629 254096 DEBUG nova.network.neutron [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.643 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3880ac-028d-4665-a4d5-e6dd51e9bbac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 NetworkManager[48891]: <info>  [1764090480.6511] manager: (tap829d1d91-40): new Veth device (/org/freedesktop/NetworkManager/Devices/550)
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.653 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[741db36e-5d47-4ee2-9c7f-fb5fe4e97123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.695 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[60c2d008-8a76-42a5-8983-5d70f843ea17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.698 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[db8e7e9e-0523-47f8-9a9d-e92c9b376fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 NetworkManager[48891]: <info>  [1764090480.7212] device (tap829d1d91-40): carrier: link connected
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.728 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[733ae763-148b-4c24-87a7-1f766ca01ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b9840c-7b23-4d5f-ac07-2bff4be4f200]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap829d1d91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3f:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693831, 'reachable_time': 15069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394893, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.758 254096 DEBUG nova.compute.manager [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.758 254096 DEBUG oslo_concurrency.lockutils [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.759 254096 DEBUG oslo_concurrency.lockutils [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.759 254096 DEBUG oslo_concurrency.lockutils [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.759 254096 DEBUG nova.compute.manager [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Processing event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.761 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e52d2a3e-6791-40cb-8385-1c3f33a89262]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:3f5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693831, 'tstamp': 693831}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394894, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6b83ac-04d7-41df-8582-8d2f89cbe346]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap829d1d91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3f:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693831, 'reachable_time': 15069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394895, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.811 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fab2396a-3772-4e1a-acf3-1df00bcc8034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc5b7c9-93f5-4534-b264-29e42ced07e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap829d1d91-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap829d1d91-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 NetworkManager[48891]: <info>  [1764090480.8814] manager: (tap829d1d91-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Nov 25 12:08:00 np0005535469 kernel: tap829d1d91-40: entered promiscuous mode
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.890 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap829d1d91-40, col_values=(('external_ids', {'iface-id': '28d8cf8b-f054-409e-bf0f-959d42d6b803'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:00 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:00Z|01326|binding|INFO|Releasing lport 28d8cf8b-f054-409e-bf0f-959d42d6b803 from this chassis (sb_readonly=0)
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.893 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/829d1d91-46ff-47ac-8fc7-645e2a94fa93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/829d1d91-46ff-47ac-8fc7-645e2a94fa93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e08c785-fa75-4629-8503-07676affd0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.905 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-829d1d91-46ff-47ac-8fc7-645e2a94fa93
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/829d1d91-46ff-47ac-8fc7-645e2a94fa93.pid.haproxy
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 829d1d91-46ff-47ac-8fc7-645e2a94fa93
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:08:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.905 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'env', 'PROCESS_TAG=haproxy-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/829d1d91-46ff-47ac-8fc7-645e2a94fa93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:08:00 np0005535469 nova_compute[254092]: 2025-11-25 17:08:00.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.172 254096 DEBUG nova.network.neutron [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.185 254096 INFO nova.compute.manager [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 0.56 seconds to deallocate network for instance.#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.242 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.242 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:01 np0005535469 podman[394960]: 2025-11-25 17:08:01.305210485 +0000 UTC m=+0.049709465 container create 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 12:08:01 np0005535469 systemd[1]: Started libpod-conmon-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b.scope.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.351 254096 DEBUG oslo_concurrency.processutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:01 np0005535469 podman[394960]: 2025-11-25 17:08:01.278396859 +0000 UTC m=+0.022895859 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]: {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:    "0": [
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:        {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "devices": [
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "/dev/loop3"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            ],
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_name": "ceph_lv0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_size": "21470642176",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "name": "ceph_lv0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "tags": {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cluster_name": "ceph",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.crush_device_class": "",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.encrypted": "0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osd_id": "0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.type": "block",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.vdo": "0"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            },
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "type": "block",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "vg_name": "ceph_vg0"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:        }
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:    ],
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:    "1": [
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:        {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "devices": [
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "/dev/loop4"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            ],
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_name": "ceph_lv1",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_size": "21470642176",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "name": "ceph_lv1",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "tags": {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cluster_name": "ceph",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.crush_device_class": "",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.encrypted": "0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osd_id": "1",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.type": "block",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.vdo": "0"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            },
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "type": "block",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "vg_name": "ceph_vg1"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:        }
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:    ],
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:    "2": [
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:        {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "devices": [
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "/dev/loop5"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            ],
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_name": "ceph_lv2",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_size": "21470642176",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "name": "ceph_lv2",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "tags": {
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.cluster_name": "ceph",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.crush_device_class": "",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.encrypted": "0",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osd_id": "2",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.type": "block",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:                "ceph.vdo": "0"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            },
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "type": "block",
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:            "vg_name": "ceph_vg2"
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:        }
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]:    ]
Nov 25 12:08:01 np0005535469 blissful_varahamihira[394859]: }
Nov 25 12:08:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.391 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090481.3698726, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.392 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Started (Lifecycle Event)#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.394 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:08:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8974f3be415033b929b46e3c2caddf48c1e9be811e4c30ce085d7210f3d05c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:01 np0005535469 systemd[1]: libpod-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope: Deactivated successfully.
Nov 25 12:08:01 np0005535469 conmon[394859]: conmon b8dc833c3d0690ec6742 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope/container/memory.events
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.405 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.410 254096 INFO nova.virt.libvirt.driver [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance spawned successfully.#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.410 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:08:01 np0005535469 podman[394960]: 2025-11-25 17:08:01.413873476 +0000 UTC m=+0.158372476 container init 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.415 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:01 np0005535469 podman[394960]: 2025-11-25 17:08:01.421556226 +0000 UTC m=+0.166055206 container start 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.427 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.432 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.432 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.433 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.433 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.434 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.434 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:01 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : New worker (395002) forked
Nov 25 12:08:01 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : Loading success.
Nov 25 12:08:01 np0005535469 podman[394992]: 2025-11-25 17:08:01.471080155 +0000 UTC m=+0.035753582 container died b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:08:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb-merged.mount: Deactivated successfully.
Nov 25 12:08:01 np0005535469 podman[394992]: 2025-11-25 17:08:01.525137387 +0000 UTC m=+0.089810794 container remove b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:08:01 np0005535469 systemd[1]: libpod-conmon-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope: Deactivated successfully.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.618 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.619 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090481.3700488, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.619 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Paused (Lifecycle Event)
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.638 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.647 254096 INFO nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 6.57 seconds to spawn the instance on the hypervisor.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.647 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.649 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090481.4051926, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.649 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Resumed (Lifecycle Event)
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.674 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.678 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.699 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.717 254096 INFO nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 7.53 seconds to build instance.
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.735 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181830008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.803 254096 DEBUG oslo_concurrency.processutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.817 254096 DEBUG nova.compute.provider_tree [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.831 254096 DEBUG nova.scheduler.client.report [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.866 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:01 np0005535469 nova_compute[254092]: 2025-11-25 17:08:01.900 254096 INFO nova.scheduler.client.report [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance 600da46c-eccb-4422-9531-4fa91fdda153
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.010 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.305786149 +0000 UTC m=+0.062924038 container create 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.267629542 +0000 UTC m=+0.024767461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:08:02 np0005535469 systemd[1]: Started libpod-conmon-882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac.scope.
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.387 254096 DEBUG nova.compute.manager [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.387 254096 DEBUG oslo_concurrency.lockutils [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.387 254096 DEBUG oslo_concurrency.lockutils [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.388 254096 DEBUG oslo_concurrency.lockutils [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.388 254096 DEBUG nova.compute.manager [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] No waiting events found dispatching network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.388 254096 WARNING nova.compute.manager [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received unexpected event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c for instance with vm_state deleted and task_state None.
Nov 25 12:08:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.427056715 +0000 UTC m=+0.184194614 container init 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.435764154 +0000 UTC m=+0.192902063 container start 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.439761774 +0000 UTC m=+0.196899683 container attach 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 12:08:02 np0005535469 heuristic_galileo[395194]: 167 167
Nov 25 12:08:02 np0005535469 systemd[1]: libpod-882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac.scope: Deactivated successfully.
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.44326842 +0000 UTC m=+0.200406299 container died 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:08:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 205 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Nov 25 12:08:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d4f5d1b39b261f479904d24370dacf6d7a75d1536ef2669845c62f34a184cc80-merged.mount: Deactivated successfully.
Nov 25 12:08:02 np0005535469 podman[395178]: 2025-11-25 17:08:02.492830399 +0000 UTC m=+0.249968278 container remove 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:08:02 np0005535469 systemd[1]: libpod-conmon-882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac.scope: Deactivated successfully.
Nov 25 12:08:02 np0005535469 podman[395218]: 2025-11-25 17:08:02.709753499 +0000 UTC m=+0.045037536 container create 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:08:02 np0005535469 systemd[1]: Started libpod-conmon-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope.
Nov 25 12:08:02 np0005535469 podman[395218]: 2025-11-25 17:08:02.689900064 +0000 UTC m=+0.025184131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:08:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:08:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:02 np0005535469 podman[395218]: 2025-11-25 17:08:02.818339937 +0000 UTC m=+0.153624004 container init 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:08:02 np0005535469 podman[395218]: 2025-11-25 17:08:02.827885909 +0000 UTC m=+0.163169956 container start 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:08:02 np0005535469 podman[395218]: 2025-11-25 17:08:02.831665743 +0000 UTC m=+0.166949810 container attach 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.865 254096 DEBUG nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.866 254096 DEBUG oslo_concurrency.lockutils [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG oslo_concurrency.lockutils [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG oslo_concurrency.lockutils [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 WARNING nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state active and task_state None.
Nov 25 12:08:02 np0005535469 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-deleted-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]: {
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "osd_id": 1,
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "type": "bluestore"
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:    },
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "osd_id": 2,
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "type": "bluestore"
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:    },
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "osd_id": 0,
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:        "type": "bluestore"
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]:    }
Nov 25 12:08:03 np0005535469 modest_mcclintock[395235]: }
Nov 25 12:08:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:03 np0005535469 systemd[1]: libpod-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope: Deactivated successfully.
Nov 25 12:08:03 np0005535469 systemd[1]: libpod-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope: Consumed 1.078s CPU time.
Nov 25 12:08:03 np0005535469 podman[395268]: 2025-11-25 17:08:03.953227485 +0000 UTC m=+0.034122347 container died 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:08:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb-merged.mount: Deactivated successfully.
Nov 25 12:08:04 np0005535469 podman[395268]: 2025-11-25 17:08:04.030743942 +0000 UTC m=+0.111638814 container remove 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:08:04 np0005535469 systemd[1]: libpod-conmon-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope: Deactivated successfully.
Nov 25 12:08:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:08:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:08:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:08:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:08:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9ca37df6-69ce-428f-a820-f8c37a06f52f does not exist
Nov 25 12:08:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c288f77c-0f83-4c0a-b20f-78b30e66e99c does not exist
Nov 25 12:08:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 205 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Nov 25 12:08:04 np0005535469 nova_compute[254092]: 2025-11-25 17:08:04.842 254096 DEBUG nova.compute.manager [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:04 np0005535469 nova_compute[254092]: 2025-11-25 17:08:04.842 254096 DEBUG nova.compute.manager [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing instance network info cache due to event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:08:04 np0005535469 nova_compute[254092]: 2025-11-25 17:08:04.842 254096 DEBUG oslo_concurrency.lockutils [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:08:04 np0005535469 nova_compute[254092]: 2025-11-25 17:08:04.843 254096 DEBUG oslo_concurrency.lockutils [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:08:04 np0005535469 nova_compute[254092]: 2025-11-25 17:08:04.843 254096 DEBUG nova.network.neutron [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.018 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.020 254096 INFO nova.compute.manager [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Terminating instance#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.021 254096 DEBUG nova.compute.manager [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:08:05 np0005535469 kernel: tap54e9d2ac-a4 (unregistering): left promiscuous mode
Nov 25 12:08:05 np0005535469 NetworkManager[48891]: <info>  [1764090485.0831] device (tap54e9d2ac-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:05Z|01327|binding|INFO|Releasing lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c from this chassis (sb_readonly=0)
Nov 25 12:08:05 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:05Z|01328|binding|INFO|Setting lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c down in Southbound
Nov 25 12:08:05 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:05Z|01329|binding|INFO|Removing iface tap54e9d2ac-a4 ovn-installed in OVS
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:08:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.117 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:40:a3 10.100.0.6'], port_security=['fa:16:3e:09:40:a3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd1d0296-ae28-4eac-9f38-80e6ca17dbff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b953fef-d250-41a4-af84-97bf9c7f4822 779da3fb-8b66-4cf7-a59e-bc7311564ace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.118 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c in datapath ef2caff8-43ec-4364-a979-521405023410 unbound from our chassis#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.119 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef2caff8-43ec-4364-a979-521405023410, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46a38125-57a9-49c6-8542-96f8488d23f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.121 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef2caff8-43ec-4364-a979-521405023410 namespace which is not needed anymore#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 25 12:08:05 np0005535469 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Consumed 16.583s CPU time.
Nov 25 12:08:05 np0005535469 systemd-machined[216343]: Machine qemu-158-instance-0000007e terminated.
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.262 254096 INFO nova.virt.libvirt.driver [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance destroyed successfully.#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.262 254096 DEBUG nova.objects.instance [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.273 254096 DEBUG nova.virt.libvirt.vif [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=126,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:07:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-uvh0a05a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:07:05Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=bd1d0296-ae28-4eac-9f38-80e6ca17dbff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.273 254096 DEBUG nova.network.os_vif_util [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.274 254096 DEBUG nova.network.os_vif_util [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.274 254096 DEBUG os_vif [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.277 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54e9d2ac-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.282 254096 INFO os_vif [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4')#033[00m
Nov 25 12:08:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : haproxy version is 2.8.14-c23fe91
Nov 25 12:08:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : path to executable is /usr/sbin/haproxy
Nov 25 12:08:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [WARNING]  (392780) : Exiting Master process...
Nov 25 12:08:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [ALERT]    (392780) : Current worker (392782) exited with code 143 (Terminated)
Nov 25 12:08:05 np0005535469 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [WARNING]  (392780) : All workers exited. Exiting... (0)
Nov 25 12:08:05 np0005535469 systemd[1]: libpod-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0.scope: Deactivated successfully.
Nov 25 12:08:05 np0005535469 podman[395358]: 2025-11-25 17:08:05.325025541 +0000 UTC m=+0.071364529 container died a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:08:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0-userdata-shm.mount: Deactivated successfully.
Nov 25 12:08:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e69c52ab9da1464e9256b97d314ba04db47e1086bb356a0791f8cd44f4454be6-merged.mount: Deactivated successfully.
Nov 25 12:08:05 np0005535469 podman[395358]: 2025-11-25 17:08:05.369869631 +0000 UTC m=+0.116208609 container cleanup a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:08:05 np0005535469 systemd[1]: libpod-conmon-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0.scope: Deactivated successfully.
Nov 25 12:08:05 np0005535469 podman[395418]: 2025-11-25 17:08:05.449480874 +0000 UTC m=+0.052574933 container remove a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.457 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff464f6-6486-4c29-b0be-dc0fb3ef0a31]: (4, ('Tue Nov 25 05:08:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410 (a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0)\na232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0\nTue Nov 25 05:08:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410 (a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0)\na232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e220fde4-54ea-4ab9-9f97-aef0503b0b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.460 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:05 np0005535469 kernel: tapef2caff8-40: left promiscuous mode
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.481 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e743f26e-6dca-4b20-b112-1a28fc4b5873]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.499 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6600c6e7-6d9f-4cc3-b1dd-1b993062a767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26a355b8-dab9-4170-9870-f7353cfaa9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.527 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f5b518-bf50-4536-a930-1e01a9589d04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688203, 'reachable_time': 38596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395433, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.531 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef2caff8-43ec-4364-a979-521405023410 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:08:05 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.531 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[238a4dfa-cbf5-43d1-b641-973dbd7740d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:05 np0005535469 systemd[1]: run-netns-ovnmeta\x2def2caff8\x2d43ec\x2d4364\x2da979\x2d521405023410.mount: Deactivated successfully.
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.634 254096 INFO nova.virt.libvirt.driver [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deleting instance files /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_del#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.635 254096 INFO nova.virt.libvirt.driver [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deletion of /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_del complete#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.704 254096 INFO nova.compute.manager [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.705 254096 DEBUG oslo.service.loopingcall [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.705 254096 DEBUG nova.compute.manager [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.706 254096 DEBUG nova.network.neutron [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.955 254096 DEBUG nova.compute.manager [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-unplugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.955 254096 DEBUG oslo_concurrency.lockutils [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.957 254096 DEBUG oslo_concurrency.lockutils [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.957 254096 DEBUG oslo_concurrency.lockutils [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.957 254096 DEBUG nova.compute.manager [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] No waiting events found dispatching network-vif-unplugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:05 np0005535469 nova_compute[254092]: 2025-11-25 17:08:05.958 254096 DEBUG nova.compute.manager [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-unplugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:08:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 131 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.016 254096 DEBUG nova.network.neutron [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.031 254096 INFO nova.compute.manager [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 1.33 seconds to deallocate network for instance.#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.071 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.071 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.117 254096 DEBUG nova.network.neutron [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated VIF entry in instance network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.118 254096 DEBUG nova.network.neutron [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.137 254096 DEBUG oslo_concurrency.lockutils [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.161 254096 DEBUG oslo_concurrency.processutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94981960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.619 254096 DEBUG oslo_concurrency.processutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.625 254096 DEBUG nova.compute.provider_tree [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.639 254096 DEBUG nova.scheduler.client.report [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.675 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.707 254096 INFO nova.scheduler.client.report [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance bd1d0296-ae28-4eac-9f38-80e6ca17dbff#033[00m
Nov 25 12:08:07 np0005535469 nova_compute[254092]: 2025-11-25 17:08:07.774 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.066 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.067 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.067 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.068 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.068 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] No waiting events found dispatching network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.068 254096 WARNING nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received unexpected event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.069 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-deleted-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.069 254096 INFO nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Neutron deleted interface 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.069 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.072 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Detach interface failed, port_id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c, reason: Instance bd1d0296-ae28-4eac-9f38-80e6ca17dbff could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing instance network info cache due to event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:08:08 np0005535469 nova_compute[254092]: 2025-11-25 17:08:08.074 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:08:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 131 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 128 op/s
Nov 25 12:08:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:09 np0005535469 nova_compute[254092]: 2025-11-25 17:08:09.641 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updated VIF entry in instance network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:08:09 np0005535469 nova_compute[254092]: 2025-11-25 17:08:09.642 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:09 np0005535469 nova_compute[254092]: 2025-11-25 17:08:09.662 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:08:10 np0005535469 nova_compute[254092]: 2025-11-25 17:08:10.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:10 np0005535469 nova_compute[254092]: 2025-11-25 17:08:10.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 88 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 141 op/s
Nov 25 12:08:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:10Z|01330|binding|INFO|Releasing lport 28d8cf8b-f054-409e-bf0f-959d42d6b803 from this chassis (sb_readonly=0)
Nov 25 12:08:11 np0005535469 nova_compute[254092]: 2025-11-25 17:08:11.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 88 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 129 op/s
Nov 25 12:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:13Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 12:08:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:13Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 12:08:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 88 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Nov 25 12:08:15 np0005535469 nova_compute[254092]: 2025-11-25 17:08:15.096 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090480.0954783, 600da46c-eccb-4422-9531-4fa91fdda153 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:15 np0005535469 nova_compute[254092]: 2025-11-25 17:08:15.096 254096 INFO nova.compute.manager [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:08:15 np0005535469 nova_compute[254092]: 2025-11-25 17:08:15.124 254096 DEBUG nova.compute.manager [None req-15e40d0e-61af-482d-9c91-8041ed84922e - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:15 np0005535469 nova_compute[254092]: 2025-11-25 17:08:15.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:15 np0005535469 nova_compute[254092]: 2025-11-25 17:08:15.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Nov 25 12:08:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 12:08:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.257 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090485.2546551, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.257 254096 INFO nova.compute.manager [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.284 254096 DEBUG nova.compute.manager [None req-d4fbcff8-32e2-42cc-91da-c005b04142a2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.460 254096 INFO nova.compute.manager [None req-ab059e85-ca32-4ef3-9c76-fab88e85d6a9 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Get console output#033[00m
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 12:08:20 np0005535469 nova_compute[254092]: 2025-11-25 17:08:20.472 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:08:20 np0005535469 podman[395457]: 2025-11-25 17:08:20.689865739 +0000 UTC m=+0.086017160 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:08:20 np0005535469 podman[395456]: 2025-11-25 17:08:20.695601356 +0000 UTC m=+0.095325025 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:08:20 np0005535469 podman[395458]: 2025-11-25 17:08:20.724240632 +0000 UTC m=+0.114098480 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:08:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:08:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:22Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 12:08:22 np0005535469 nova_compute[254092]: 2025-11-25 17:08:22.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:22.866 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:22.867 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:08:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:08:25 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:24Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 12:08:25 np0005535469 nova_compute[254092]: 2025-11-25 17:08:25.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:25 np0005535469 nova_compute[254092]: 2025-11-25 17:08:25.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.225 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.227 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.228 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.229 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.229 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.230 254096 INFO nova.compute.manager [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Terminating instance#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.232 254096 DEBUG nova.compute.manager [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:08:26 np0005535469 kernel: tap1e4bb581-2e (unregistering): left promiscuous mode
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.295 254096 DEBUG nova.compute.manager [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:26 np0005535469 NetworkManager[48891]: <info>  [1764090506.2972] device (tap1e4bb581-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.297 254096 DEBUG nova.compute.manager [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing instance network info cache due to event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.300 254096 DEBUG oslo_concurrency.lockutils [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.300 254096 DEBUG oslo_concurrency.lockutils [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.300 254096 DEBUG nova.network.neutron [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01331|binding|INFO|Releasing lport 1e4bb581-2eb1-4909-a066-11e1096cbffa from this chassis (sb_readonly=0)
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01332|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa down in Southbound
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01333|binding|INFO|Removing iface tap1e4bb581-2e ovn-installed in OVS
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.321 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.325 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 unbound from our chassis#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.328 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.330 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0267cc8a-50d6-4e40-a850-2a9cbac969ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.331 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 namespace which is not needed anymore#033[00m
Nov 25 12:08:26 np0005535469 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 25 12:08:26 np0005535469 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Consumed 14.428s CPU time.
Nov 25 12:08:26 np0005535469 systemd-machined[216343]: Machine qemu-161-instance-00000081 terminated.
Nov 25 12:08:26 np0005535469 kernel: tap1e4bb581-2e: entered promiscuous mode
Nov 25 12:08:26 np0005535469 NetworkManager[48891]: <info>  [1764090506.4574] manager: (tap1e4bb581-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01334|binding|INFO|Claiming lport 1e4bb581-2eb1-4909-a066-11e1096cbffa for this chassis.
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01335|binding|INFO|1e4bb581-2eb1-4909-a066-11e1096cbffa: Claiming fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 12:08:26 np0005535469 kernel: tap1e4bb581-2e (unregistering): left promiscuous mode
Nov 25 12:08:26 np0005535469 systemd-udevd[395523]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.467 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:08:26 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : haproxy version is 2.8.14-c23fe91
Nov 25 12:08:26 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : path to executable is /usr/sbin/haproxy
Nov 25 12:08:26 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [WARNING]  (394994) : Exiting Master process...
Nov 25 12:08:26 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [ALERT]    (394994) : Current worker (395002) exited with code 143 (Terminated)
Nov 25 12:08:26 np0005535469 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [WARNING]  (394994) : All workers exited. Exiting... (0)
Nov 25 12:08:26 np0005535469 systemd[1]: libpod-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b.scope: Deactivated successfully.
Nov 25 12:08:26 np0005535469 podman[395544]: 2025-11-25 17:08:26.485456951 +0000 UTC m=+0.050662030 container died 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01336|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa ovn-installed in OVS
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01337|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa up in Southbound
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01338|binding|INFO|Releasing lport 1e4bb581-2eb1-4909-a066-11e1096cbffa from this chassis (sb_readonly=1)
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01339|if_status|INFO|Dropped 5 log messages in last 1626 seconds (most recently, 1626 seconds ago) due to excessive rate
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01340|if_status|INFO|Not setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa down as sb is readonly
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01341|binding|INFO|Removing iface tap1e4bb581-2e ovn-installed in OVS
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01342|binding|INFO|Releasing lport 1e4bb581-2eb1-4909-a066-11e1096cbffa from this chassis (sb_readonly=0)
Nov 25 12:08:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:26Z|01343|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa down in Southbound
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.493 254096 INFO nova.virt.libvirt.driver [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance destroyed successfully.#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.494 254096 DEBUG nova.objects.instance [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.498 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.509 254096 DEBUG nova.virt.libvirt.vif [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742985816',display_name='tempest-TestNetworkBasicOps-server-1742985816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742985816',id=129,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00WPYJfplltH8QXqH5VBLBDZhQ795ogiRVA83nhFmClvp98XVxMKrxCuE4fsMnEOta0H4tqzcEHlYCGCoNe9LzeAmLZ6Pp7hI8JV+Hz5Z8Dy6OeDo6E9caHpgF+YsYXg==',key_name='tempest-TestNetworkBasicOps-1671052',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-v513b262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:01Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.510 254096 DEBUG nova.network.os_vif_util [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.511 254096 DEBUG nova.network.os_vif_util [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.512 254096 DEBUG os_vif [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.515 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4bb581-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b-userdata-shm.mount: Deactivated successfully.
Nov 25 12:08:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-df8974f3be415033b929b46e3c2caddf48c1e9be811e4c30ce085d7210f3d05c-merged.mount: Deactivated successfully.
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.522 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.528 254096 INFO os_vif [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e')#033[00m
Nov 25 12:08:26 np0005535469 podman[395544]: 2025-11-25 17:08:26.536039299 +0000 UTC m=+0.101244388 container cleanup 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:08:26 np0005535469 systemd[1]: libpod-conmon-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b.scope: Deactivated successfully.
Nov 25 12:08:26 np0005535469 podman[395591]: 2025-11-25 17:08:26.604570759 +0000 UTC m=+0.047550885 container remove 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.614 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42e614af-68ed-4e96-9e7f-d6a99802d922]: (4, ('Tue Nov 25 05:08:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 (134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b)\n134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b\nTue Nov 25 05:08:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 (134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b)\n134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.617 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2d283a-8116-4dc9-8595-243450db05ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap829d1d91-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:26 np0005535469 kernel: tap829d1d91-40: left promiscuous mode
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.663 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aedc2578-878f-462c-aa88-347c89abde6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.686 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57938066-b65a-4861-9c13-b7737d902366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.687 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37a21ed2-fac1-48f5-a68d-bc5c81c569e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1a3236-98f1-492c-bbde-ff2bcf86820b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693822, 'reachable_time': 18567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395612, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.710 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.710 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6128ef-021f-4ae4-836d-fcc0bc32e2c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.712 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 unbound from our chassis#033[00m
Nov 25 12:08:26 np0005535469 systemd[1]: run-netns-ovnmeta\x2d829d1d91\x2d46ff\x2d47ac\x2d8fc7\x2d645e2a94fa93.mount: Deactivated successfully.
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.714 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.715 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f039633-d3ba-4ee2-9495-37aac5decd88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.716 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 unbound from our chassis#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.717 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:08:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77a3a9b7-a83b-4843-b7f9-6acb54a27c8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.838 254096 DEBUG nova.compute.manager [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-unplugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG oslo_concurrency.lockutils [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG oslo_concurrency.lockutils [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG oslo_concurrency.lockutils [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG nova.compute.manager [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-unplugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.840 254096 DEBUG nova.compute.manager [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-unplugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.973 254096 INFO nova.virt.libvirt.driver [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deleting instance files /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_del#033[00m
Nov 25 12:08:26 np0005535469 nova_compute[254092]: 2025-11-25 17:08:26.974 254096 INFO nova.virt.libvirt.driver [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deletion of /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_del complete#033[00m
Nov 25 12:08:27 np0005535469 nova_compute[254092]: 2025-11-25 17:08:27.493 254096 INFO nova.compute.manager [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:08:27 np0005535469 nova_compute[254092]: 2025-11-25 17:08:27.494 254096 DEBUG oslo.service.loopingcall [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:08:27 np0005535469 nova_compute[254092]: 2025-11-25 17:08:27.495 254096 DEBUG nova.compute.manager [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:08:27 np0005535469 nova_compute[254092]: 2025-11-25 17:08:27.495 254096 DEBUG nova.network.neutron [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.191 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.191 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.213 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.290 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.291 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.300 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.300 254096 INFO nova.compute.claims [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.454 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.589 254096 DEBUG nova.network.neutron [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updated VIF entry in instance network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.591 254096 DEBUG nova.network.neutron [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.609 254096 DEBUG oslo_concurrency.lockutils [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.618 254096 DEBUG nova.network.neutron [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.631 254096 INFO nova.compute.manager [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.669 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898967013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.902 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.967 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.967 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.969 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.969 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.969 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.973 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.973 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 12:08:28 np0005535469 nova_compute[254092]: 2025-11-25 17:08:28.973 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-deleted-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.589 254096 DEBUG nova.compute.provider_tree [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.600 254096 DEBUG nova.scheduler.client.report [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.622 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.623 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.625 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.670 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.671 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.689 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.692 254096 DEBUG oslo_concurrency.processutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.752 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.841 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.842 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.842 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Creating image(s)
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.864 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.891 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.920 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.925 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:08:29 np0005535469 nova_compute[254092]: 2025-11-25 17:08:29.980 254096 DEBUG nova.policy [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c10f7baf67a8412caf2428d3200f851d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.033 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.034 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.035 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.035 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.056 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.060 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:08:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779679148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.136 254096 DEBUG oslo_concurrency.processutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.144 254096 DEBUG nova.compute.provider_tree [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.160 254096 DEBUG nova.scheduler.client.report [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.177 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.204 254096 INFO nova.scheduler.client.report [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.293 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.320 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.388 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] resizing rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:08:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 103 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 14 op/s
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.481 254096 DEBUG nova.objects.instance [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'migration_context' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.497 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.498 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Ensure instance console log exists: /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.498 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.499 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.499 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:30 np0005535469 nova_compute[254092]: 2025-11-25 17:08:30.747 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Successfully created port: cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 12:08:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:30.869 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.440 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Successfully updated port: cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.461 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.462 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.462 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.512 254096 DEBUG nova.compute.manager [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-changed-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.512 254096 DEBUG nova.compute.manager [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Refreshing instance network info cache due to event network-changed-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.513 254096 DEBUG oslo_concurrency.lockutils [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:08:31 np0005535469 nova_compute[254092]: 2025-11-25 17:08:31.631 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:08:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 76 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.749 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.768 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.768 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance network_info: |[{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.769 254096 DEBUG oslo_concurrency.lockutils [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.769 254096 DEBUG nova.network.neutron [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Refreshing network info cache for port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.772 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start _get_guest_xml network_info=[{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.778 254096 WARNING nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.783 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.783 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.787 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.790 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.790 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:08:33 np0005535469 nova_compute[254092]: 2025-11-25 17:08:33.792 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:08:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635124925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.282 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.321 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.328 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 76 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:08:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4187159391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.800 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.801 254096 DEBUG nova.virt.libvirt.vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedO
ps-343246187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:29Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.802 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.803 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.804 254096 DEBUG nova.objects.instance [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.837 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <uuid>8dfb4b8d-30c5-414d-98f6-1f788dbae8c0</uuid>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <name>instance-00000082</name>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestServerAdvancedOps-server-1287608653</nova:name>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:08:33</nova:creationTime>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:user uuid="c10f7baf67a8412caf2428d3200f851d">tempest-TestServerAdvancedOps-343246187-project-member</nova:user>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:project uuid="6d8d30c9803a4a3fa7d9179a85cf828e">tempest-TestServerAdvancedOps-343246187</nova:project>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <nova:port uuid="cd0a47c2-5422-4bcf-a714-1ecdc3db9e87">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <entry name="serial">8dfb4b8d-30c5-414d-98f6-1f788dbae8c0</entry>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <entry name="uuid">8dfb4b8d-30c5-414d-98f6-1f788dbae8c0</entry>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:37:ef:7f"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <target dev="tapcd0a47c2-54"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/console.log" append="off"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:08:34 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:08:34 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:08:34 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:08:34 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.838 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Preparing to wait for external event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.838 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.839 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.839 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.840 254096 DEBUG nova.virt.libvirt.vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServe
rAdvancedOps-343246187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:29Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.840 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.841 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.841 254096 DEBUG os_vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.842 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.842 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd0a47c2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd0a47c2-54, col_values=(('external_ids', {'iface-id': 'cd0a47c2-5422-4bcf-a714-1ecdc3db9e87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ef:7f', 'vm-uuid': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:34 np0005535469 NetworkManager[48891]: <info>  [1764090514.8477] manager: (tapcd0a47c2-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.855 254096 INFO os_vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.965 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.966 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.966 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] No VIF found with MAC fa:16:3e:37:ef:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.967 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Using config drive#033[00m
Nov 25 12:08:34 np0005535469 nova_compute[254092]: 2025-11-25 17:08:34.996 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.736 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Creating config drive at /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.742 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7g_mmbmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.774 254096 DEBUG nova.network.neutron [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updated VIF entry in instance network info cache for port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.775 254096 DEBUG nova.network.neutron [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.814 254096 DEBUG oslo_concurrency.lockutils [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.882 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7g_mmbmj" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.923 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:35 np0005535469 nova_compute[254092]: 2025-11-25 17:08:35.928 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.100 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.101 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deleting local config drive /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config because it was imported into RBD.#033[00m
Nov 25 12:08:36 np0005535469 kernel: tapcd0a47c2-54: entered promiscuous mode
Nov 25 12:08:36 np0005535469 NetworkManager[48891]: <info>  [1764090516.1967] manager: (tapcd0a47c2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Nov 25 12:08:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:36Z|01344|binding|INFO|Claiming lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for this chassis.
Nov 25 12:08:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:36Z|01345|binding|INFO|cd0a47c2-5422-4bcf-a714-1ecdc3db9e87: Claiming fa:16:3e:37:ef:7f 10.100.0.3
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.248 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.250 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b bound to our chassis#033[00m
Nov 25 12:08:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.250 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 12:08:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.251 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee4a058-f2fa-48fa-a95f-15e945c98f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:36 np0005535469 systemd-udevd[395959]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:08:36 np0005535469 systemd-machined[216343]: New machine qemu-162-instance-00000082.
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:36 np0005535469 NetworkManager[48891]: <info>  [1764090516.2844] device (tapcd0a47c2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:08:36 np0005535469 NetworkManager[48891]: <info>  [1764090516.2855] device (tapcd0a47c2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:08:36 np0005535469 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Nov 25 12:08:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:36Z|01346|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 ovn-installed in OVS
Nov 25 12:08:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:36Z|01347|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 up in Southbound
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.902 254096 DEBUG nova.compute.manager [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.904 254096 DEBUG oslo_concurrency.lockutils [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.904 254096 DEBUG oslo_concurrency.lockutils [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.905 254096 DEBUG oslo_concurrency.lockutils [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.905 254096 DEBUG nova.compute.manager [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Processing event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.983 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.984 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090516.9821105, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.984 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Started (Lifecycle Event)#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.988 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.991 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance spawned successfully.#033[00m
Nov 25 12:08:36 np0005535469 nova_compute[254092]: 2025-11-25 17:08:36.992 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.015 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.022 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.025 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.026 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.026 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.027 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.027 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.027 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.047 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.048 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090516.9824736, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.048 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.070 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.073 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090516.9875927, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.074 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.087 254096 INFO nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 7.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.087 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.091 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.097 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.127 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.159 254096 INFO nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 8.90 seconds to build instance.#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.186 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:37 np0005535469 nova_compute[254092]: 2025-11-25 17:08:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:08:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.992 254096 DEBUG nova.compute.manager [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG oslo_concurrency.lockutils [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG oslo_concurrency.lockutils [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG oslo_concurrency.lockutils [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG nova.compute.manager [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:38 np0005535469 nova_compute[254092]: 2025-11-25 17:08:38.994 254096 WARNING nova.compute.manager [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.885 254096 DEBUG nova.objects.instance [None req-1f531883-117b-40b3-8c65-f71c375a05c7 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.901 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090519.9017365, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.902 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.933 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.938 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:39 np0005535469 nova_compute[254092]: 2025-11-25 17:08:39.963 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:08:40
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control']
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:08:40 np0005535469 kernel: tapcd0a47c2-54 (unregistering): left promiscuous mode
Nov 25 12:08:40 np0005535469 NetworkManager[48891]: <info>  [1764090520.4096] device (tapcd0a47c2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:08:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:40Z|01348|binding|INFO|Releasing lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 from this chassis (sb_readonly=0)
Nov 25 12:08:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:40Z|01349|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 down in Southbound
Nov 25 12:08:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:40Z|01350|binding|INFO|Removing iface tapcd0a47c2-54 ovn-installed in OVS
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.428 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.430 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b unbound from our chassis#033[00m
Nov 25 12:08:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.430 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 12:08:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.432 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9796f1-8e18-41a5-9479-6379daae8445]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:40 np0005535469 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 12:08:40 np0005535469 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 3.819s CPU time.
Nov 25 12:08:40 np0005535469 systemd-machined[216343]: Machine qemu-162-instance-00000082 terminated.
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:40 np0005535469 nova_compute[254092]: 2025-11-25 17:08:40.585 254096 DEBUG nova.compute.manager [None req-1f531883-117b-40b3-8c65-f71c375a05c7 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:08:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63606502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.037 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.083 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.083 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 WARNING nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.085 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.085 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.085 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.086 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.086 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.086 254096 WARNING nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.259 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.260 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3643MB free_disk=59.9675178527832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.261 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.261 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.348 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.488 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090506.4877155, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.489 254096 INFO nova.compute.manager [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.505 254096 DEBUG nova.compute.manager [None req-e7ca1aef-eee9-4dc3-ac74-3d6a3d90e8c2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761545435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.762 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.767 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.791 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.809 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:08:41 np0005535469 nova_compute[254092]: 2025-11-25 17:08:41.810 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.464 254096 INFO nova.compute.manager [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Resuming#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.466 254096 DEBUG nova.objects.instance [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'flavor' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.507 254096 DEBUG oslo_concurrency.lockutils [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.508 254096 DEBUG oslo_concurrency.lockutils [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.508 254096 DEBUG nova.network.neutron [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:08:42 np0005535469 nova_compute[254092]: 2025-11-25 17:08:42.529 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.748 254096 DEBUG nova.network.neutron [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.762 254096 DEBUG oslo_concurrency.lockutils [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.770 254096 DEBUG nova.virt.libvirt.vif [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:40Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.771 254096 DEBUG nova.network.os_vif_util [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.772 254096 DEBUG nova.network.os_vif_util [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.773 254096 DEBUG os_vif [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.775 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.776 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.781 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd0a47c2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.782 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd0a47c2-54, col_values=(('external_ids', {'iface-id': 'cd0a47c2-5422-4bcf-a714-1ecdc3db9e87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ef:7f', 'vm-uuid': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.783 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.783 254096 INFO os_vif [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.810 254096 DEBUG nova.objects.instance [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'numa_topology' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:43 np0005535469 kernel: tapcd0a47c2-54: entered promiscuous mode
Nov 25 12:08:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:43Z|01351|binding|INFO|Claiming lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for this chassis.
Nov 25 12:08:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:43Z|01352|binding|INFO|cd0a47c2-5422-4bcf-a714-1ecdc3db9e87: Claiming fa:16:3e:37:ef:7f 10.100.0.3
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:43 np0005535469 NetworkManager[48891]: <info>  [1764090523.9109] manager: (tapcd0a47c2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/555)
Nov 25 12:08:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.920 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.922 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b bound to our chassis#033[00m
Nov 25 12:08:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.923 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 12:08:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f249bd9-1ecd-4e3d-ba4a-a785f5d23106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:43Z|01353|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 up in Southbound
Nov 25 12:08:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:43Z|01354|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 ovn-installed in OVS
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:43 np0005535469 nova_compute[254092]: 2025-11-25 17:08:43.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:43 np0005535469 systemd-udevd[396090]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:08:43 np0005535469 systemd-machined[216343]: New machine qemu-163-instance-00000082.
Nov 25 12:08:43 np0005535469 NetworkManager[48891]: <info>  [1764090523.9690] device (tapcd0a47c2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:08:43 np0005535469 NetworkManager[48891]: <info>  [1764090523.9706] device (tapcd0a47c2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:08:43 np0005535469 systemd[1]: Started Virtual Machine qemu-163-instance-00000082.
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.139 254096 DEBUG nova.compute.manager [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.140 254096 DEBUG oslo_concurrency.lockutils [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.140 254096 DEBUG oslo_concurrency.lockutils [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.141 254096 DEBUG oslo_concurrency.lockutils [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.141 254096 DEBUG nova.compute.manager [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.141 254096 WARNING nova.compute.manager [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 25 12:08:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 87 op/s
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.717 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.718 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090524.7168858, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.718 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Started (Lifecycle Event)#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.745 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.753 254096 DEBUG nova.compute.manager [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.753 254096 DEBUG nova.objects.instance [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.757 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.787 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.788 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090524.7222576, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.788 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.796 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance running successfully.#033[00m
Nov 25 12:08:44 np0005535469 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.799 254096 DEBUG nova.virt.libvirt.guest [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.800 254096 DEBUG nova.compute.manager [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.808 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.814 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.841 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 12:08:44 np0005535469 nova_compute[254092]: 2025-11-25 17:08:44.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:45 np0005535469 nova_compute[254092]: 2025-11-25 17:08:45.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.277 254096 DEBUG nova.compute.manager [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.278 254096 DEBUG oslo_concurrency.lockutils [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.279 254096 DEBUG oslo_concurrency.lockutils [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.279 254096 DEBUG oslo_concurrency.lockutils [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.280 254096 DEBUG nova.compute.manager [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.280 254096 WARNING nova.compute.manager [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.421 254096 DEBUG nova.objects.instance [None req-d4791176-05a4-47a3-ad14-30dd70df3b03 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.451 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090526.4507444, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.452 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.471 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.476 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 92 op/s
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.493 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 25 12:08:46 np0005535469 kernel: tapcd0a47c2-54 (unregistering): left promiscuous mode
Nov 25 12:08:46 np0005535469 NetworkManager[48891]: <info>  [1764090526.9682] device (tapcd0a47c2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:08:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:46Z|01355|binding|INFO|Releasing lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 from this chassis (sb_readonly=0)
Nov 25 12:08:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:46Z|01356|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 down in Southbound
Nov 25 12:08:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:46Z|01357|binding|INFO|Removing iface tapcd0a47c2-54 ovn-installed in OVS
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.981 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.984 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b unbound from our chassis#033[00m
Nov 25 12:08:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.985 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 12:08:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.987 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f4c96d-e227-43a5-8dc3-d4d1d6d075d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:46 np0005535469 nova_compute[254092]: 2025-11-25 17:08:46.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:47 np0005535469 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 12:08:47 np0005535469 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Consumed 2.535s CPU time.
Nov 25 12:08:47 np0005535469 systemd-machined[216343]: Machine qemu-163-instance-00000082 terminated.
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.154 254096 DEBUG nova.compute.manager [None req-d4791176-05a4-47a3-ad14-30dd70df3b03 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.856 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.857 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.872 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.959 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.960 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.968 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:08:47 np0005535469 nova_compute[254092]: 2025-11-25 17:08:47.968 254096 INFO nova.compute.claims [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.100 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.357 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.359 254096 WARNING nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.359 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.359 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 WARNING nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.406 254096 INFO nova.compute.manager [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Resuming#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.407 254096 DEBUG nova.objects.instance [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'flavor' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.446 254096 DEBUG oslo_concurrency.lockutils [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.446 254096 DEBUG oslo_concurrency.lockutils [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.447 254096 DEBUG nova.network.neutron [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:08:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567139493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.579 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.584 254096 DEBUG nova.compute.provider_tree [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.596 254096 DEBUG nova.scheduler.client.report [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.614 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.616 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.662 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.663 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.681 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.700 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.789 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.791 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.792 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Creating image(s)#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.829 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.865 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.895 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.900 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.903196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528903242, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 251, "total_data_size": 1409745, "memory_usage": 1431664, "flush_reason": "Manual Compaction"}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528910262, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 856788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52525, "largest_seqno": 53553, "table_properties": {"data_size": 852821, "index_size": 1555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10739, "raw_average_key_size": 20, "raw_value_size": 844254, "raw_average_value_size": 1632, "num_data_blocks": 70, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090434, "oldest_key_time": 1764090434, "file_creation_time": 1764090528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 7118 microseconds, and 3852 cpu microseconds.
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.910313) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 856788 bytes OK
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.910334) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912000) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912018) EVENT_LOG_v1 {"time_micros": 1764090528912013, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1404883, prev total WAL file size 1404883, number of live WAL files 2.
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323631' seq:0, type:0; will stop at (end)
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(836KB)], [119(10MB)]
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528912919, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11562378, "oldest_snapshot_seqno": -1}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7366 keys, 8786319 bytes, temperature: kUnknown
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528968880, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 8786319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8739899, "index_size": 26916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 192652, "raw_average_key_size": 26, "raw_value_size": 8610923, "raw_average_value_size": 1169, "num_data_blocks": 1048, "num_entries": 7366, "num_filter_entries": 7366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.969056) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8786319 bytes
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.969976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.4 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.2 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(23.7) write-amplify(10.3) OK, records in: 7839, records dropped: 473 output_compression: NoCompression
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.969991) EVENT_LOG_v1 {"time_micros": 1764090528969984, "job": 72, "event": "compaction_finished", "compaction_time_micros": 56006, "compaction_time_cpu_micros": 22800, "output_level": 6, "num_output_files": 1, "total_output_size": 8786319, "num_input_records": 7839, "num_output_records": 7366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528970201, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528971669, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:08:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.985 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.987 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.988 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:48 np0005535469 nova_compute[254092]: 2025-11-25 17:08:48.988 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.016 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.019 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.054 254096 DEBUG nova.policy [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.282 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.344 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.417 254096 DEBUG nova.objects.instance [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.432 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.433 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Ensure instance console log exists: /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.433 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.433 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.434 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.657 254096 DEBUG nova.network.neutron [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.668 254096 DEBUG oslo_concurrency.lockutils [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.674 254096 DEBUG nova.virt.libvirt.vif [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:47Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.675 254096 DEBUG nova.network.os_vif_util [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.675 254096 DEBUG nova.network.os_vif_util [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.676 254096 DEBUG os_vif [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.677 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.677 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.680 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd0a47c2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.680 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd0a47c2-54, col_values=(('external_ids', {'iface-id': 'cd0a47c2-5422-4bcf-a714-1ecdc3db9e87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ef:7f', 'vm-uuid': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.681 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.681 254096 INFO os_vif [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.699 254096 DEBUG nova.objects.instance [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'numa_topology' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.717 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Successfully created port: 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:08:49 np0005535469 kernel: tapcd0a47c2-54: entered promiscuous mode
Nov 25 12:08:49 np0005535469 systemd-udevd[396148]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:08:49 np0005535469 NetworkManager[48891]: <info>  [1764090529.7754] manager: (tapcd0a47c2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/556)
Nov 25 12:08:49 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:49Z|01358|binding|INFO|Claiming lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for this chassis.
Nov 25 12:08:49 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:49Z|01359|binding|INFO|cd0a47c2-5422-4bcf-a714-1ecdc3db9e87: Claiming fa:16:3e:37:ef:7f 10.100.0.3
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.783 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.786 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b bound to our chassis#033[00m
Nov 25 12:08:49 np0005535469 NetworkManager[48891]: <info>  [1764090529.7881] device (tapcd0a47c2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:08:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.787 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 12:08:49 np0005535469 NetworkManager[48891]: <info>  [1764090529.7892] device (tapcd0a47c2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:08:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.788 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c12031d9-9919-4b4d-b91b-d634b6594c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:49 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:49Z|01360|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 ovn-installed in OVS
Nov 25 12:08:49 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:49Z|01361|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 up in Southbound
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.808 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:49 np0005535469 systemd-machined[216343]: New machine qemu-164-instance-00000082.
Nov 25 12:08:49 np0005535469 systemd[1]: Started Virtual Machine qemu-164-instance-00000082.
Nov 25 12:08:49 np0005535469 nova_compute[254092]: 2025-11-25 17:08:49.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.354 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.354 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090530.3538384, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.355 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Started (Lifecycle Event)#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.369 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.375 254096 DEBUG nova.compute.manager [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.376 254096 DEBUG nova.objects.instance [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.379 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.409 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance running successfully.#033[00m
Nov 25 12:08:50 np0005535469 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.410 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.411 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090530.3584712, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.411 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.412 254096 DEBUG nova.virt.libvirt.guest [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.412 254096 DEBUG nova.compute.manager [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.431 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.461 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 25 12:08:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 103 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 365 KiB/s wr, 90 op/s
Nov 25 12:08:50 np0005535469 nova_compute[254092]: 2025-11-25 17:08:50.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00041556527560721496 of space, bias 1.0, pg target 0.12466958268216449 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:08:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:08:51 np0005535469 podman[396418]: 2025-11-25 17:08:51.649143436 +0000 UTC m=+0.060882621 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:08:51 np0005535469 podman[396417]: 2025-11-25 17:08:51.649386772 +0000 UTC m=+0.060913162 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:08:51 np0005535469 podman[396419]: 2025-11-25 17:08:51.685187555 +0000 UTC m=+0.079894833 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:08:51 np0005535469 nova_compute[254092]: 2025-11-25 17:08:51.865 254096 DEBUG nova.compute.manager [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:51 np0005535469 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG oslo_concurrency.lockutils [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:51 np0005535469 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG oslo_concurrency.lockutils [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:51 np0005535469 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG oslo_concurrency.lockutils [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:51 np0005535469 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG nova.compute.manager [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:51 np0005535469 nova_compute[254092]: 2025-11-25 17:08:51.867 254096 WARNING nova.compute.manager [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.375 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Successfully updated port: 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.434 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.435 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.435 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.447 254096 DEBUG nova.compute.manager [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.448 254096 DEBUG nova.compute.manager [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.448 254096 DEBUG oslo_concurrency.lockutils [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:08:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 134 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 25 12:08:52 np0005535469 nova_compute[254092]: 2025-11-25 17:08:52.573 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:08:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 134 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.689 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.732 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.733 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance network_info: |[{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.733 254096 DEBUG oslo_concurrency.lockutils [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.733 254096 DEBUG nova.network.neutron [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.736 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start _get_guest_xml network_info=[{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.743 254096 DEBUG nova.compute.manager [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.743 254096 DEBUG oslo_concurrency.lockutils [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.743 254096 DEBUG oslo_concurrency.lockutils [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.744 254096 DEBUG oslo_concurrency.lockutils [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.744 254096 DEBUG nova.compute.manager [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.744 254096 WARNING nova.compute.manager [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.745 254096 WARNING nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.749 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.749 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.752 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.753 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.753 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.753 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.754 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.754 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.756 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.756 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.756 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.757 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.760 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.842 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.845 254096 INFO nova.compute.manager [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Terminating instance#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.846 254096 DEBUG nova.compute.manager [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:08:54 np0005535469 kernel: tapcd0a47c2-54 (unregistering): left promiscuous mode
Nov 25 12:08:54 np0005535469 NetworkManager[48891]: <info>  [1764090534.8820] device (tapcd0a47c2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:08:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:54Z|01362|binding|INFO|Releasing lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 from this chassis (sb_readonly=0)
Nov 25 12:08:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:54Z|01363|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 down in Southbound
Nov 25 12:08:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:54Z|01364|binding|INFO|Removing iface tapcd0a47c2-54 ovn-installed in OVS
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.898 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.899 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b unbound from our chassis#033[00m
Nov 25 12:08:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.900 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 25 12:08:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.901 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d26f984-3719-460e-8220-a0abf31c0933]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:54 np0005535469 nova_compute[254092]: 2025-11-25 17:08:54.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:54 np0005535469 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 12:08:54 np0005535469 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Consumed 4.941s CPU time.
Nov 25 12:08:54 np0005535469 systemd-machined[216343]: Machine qemu-164-instance-00000082 terminated.
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.089 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance destroyed successfully.#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.090 254096 DEBUG nova.objects.instance [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'resources' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.104 254096 DEBUG nova.virt.libvirt.vif [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:50Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.105 254096 DEBUG nova.network.os_vif_util [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.106 254096 DEBUG nova.network.os_vif_util [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.106 254096 DEBUG os_vif [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.110 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd0a47c2-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.119 254096 INFO os_vif [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')#033[00m
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410214177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.213 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.235 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.240 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4262876514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4262876514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.466 254096 INFO nova.virt.libvirt.driver [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deleting instance files /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_del#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.467 254096 INFO nova.virt.libvirt.driver [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deletion of /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_del complete#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.545 254096 INFO nova.compute.manager [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.546 254096 DEBUG oslo.service.loopingcall [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.547 254096 DEBUG nova.compute.manager [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.547 254096 DEBUG nova.network.neutron [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:08:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041196014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.701 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.705 254096 DEBUG nova.virt.libvirt.vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-367110792',display_name='tempest-TestNetworkBasicOps-server-367110792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-367110792',id=131,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDN+GWJbAZWOIdymw+cz2oLcyndkUPhl6vtYaMAR6OngHd8OlebpGiETQxsMoSybRqEA+rFizH3rxBjAI6Jko4gUoKJ0EE0bXq9XY/gGruR3mEMNu5mTsv7YmUDww+bvsg==',key_name='tempest-TestNetworkBasicOps-2139833866',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-3zuncmiv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:48Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e9a105a6-90a1-4e21-9296-61a55e2ceec3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.705 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.707 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.709 254096 DEBUG nova.objects.instance [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.724 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <uuid>e9a105a6-90a1-4e21-9296-61a55e2ceec3</uuid>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <name>instance-00000083</name>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-367110792</nova:name>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:08:54</nova:creationTime>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <nova:port uuid="9b06b5b4-bc07-48ef-b51a-ecf0abc558ab">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <entry name="serial">e9a105a6-90a1-4e21-9296-61a55e2ceec3</entry>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <entry name="uuid">e9a105a6-90a1-4e21-9296-61a55e2ceec3</entry>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e4:f0:6f"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <target dev="tap9b06b5b4-bc"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/console.log" append="off"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:08:55 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:08:55 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:08:55 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:08:55 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.726 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Preparing to wait for external event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.727 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.727 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.727 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.728 254096 DEBUG nova.virt.libvirt.vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-367110792',display_name='tempest-TestNetworkBasicOps-server-367110792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-367110792',id=131,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDN+GWJbAZWOIdymw+cz2oLcyndkUPhl6vtYaMAR6OngHd8OlebpGiETQxsMoSybRqEA+rFizH3rxBjAI6Jko4gUoKJ0EE0bXq9XY/gGruR3mEMNu5mTsv7YmUDww+bvsg==',key_name='tempest-TestNetworkBasicOps-2139833866',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-3zuncmiv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:48Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e9a105a6-90a1-4e21-9296-61a55e2ceec3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.729 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.730 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.730 254096 DEBUG os_vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.731 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.732 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.736 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b06b5b4-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.737 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b06b5b4-bc, col_values=(('external_ids', {'iface-id': '9b06b5b4-bc07-48ef-b51a-ecf0abc558ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:f0:6f', 'vm-uuid': 'e9a105a6-90a1-4e21-9296-61a55e2ceec3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 NetworkManager[48891]: <info>  [1764090535.7399] manager: (tap9b06b5b4-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.749 254096 INFO os_vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc')#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.812 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.813 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.814 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:e4:f0:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.815 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Using config drive#033[00m
Nov 25 12:08:55 np0005535469 nova_compute[254092]: 2025-11-25 17:08:55.845 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.442 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Creating config drive at /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.452 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmr0db8u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 99 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.519 254096 DEBUG nova.network.neutron [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.533 254096 INFO nova.compute.manager [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 0.99 seconds to deallocate network for instance.#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.577 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.578 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.611 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmr0db8u" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.642 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.646 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.683 254096 DEBUG nova.network.neutron [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.684 254096 DEBUG nova.network.neutron [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.705 254096 DEBUG oslo_concurrency.lockutils [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.716 254096 DEBUG oslo_concurrency.processutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.822 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.823 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deleting local config drive /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config because it was imported into RBD.#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.830 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.830 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.830 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.831 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.831 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.831 254096 WARNING nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.832 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.833 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.833 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 WARNING nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-deleted-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:08:56 np0005535469 kernel: tap9b06b5b4-bc: entered promiscuous mode
Nov 25 12:08:56 np0005535469 NetworkManager[48891]: <info>  [1764090536.8749] manager: (tap9b06b5b4-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Nov 25 12:08:56 np0005535469 systemd-udevd[396502]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:08:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:56Z|01365|binding|INFO|Claiming lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for this chassis.
Nov 25 12:08:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:56Z|01366|binding|INFO|9b06b5b4-bc07-48ef-b51a-ecf0abc558ab: Claiming fa:16:3e:e4:f0:6f 10.100.0.3
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:56 np0005535469 NetworkManager[48891]: <info>  [1764090536.8867] device (tap9b06b5b4-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:08:56 np0005535469 NetworkManager[48891]: <info>  [1764090536.8879] device (tap9b06b5b4-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.893 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:f0:6f 10.100.0.3'], port_security=['fa:16:3e:e4:f0:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e9a105a6-90a1-4e21-9296-61a55e2ceec3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3c9a6b1-fb73-4f95-84e0-2b0bae619305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.894 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 bound to our chassis#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.895 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 877a4e79-06f0-432b-a5f9-1a0277ccd412#033[00m
Nov 25 12:08:56 np0005535469 systemd-machined[216343]: New machine qemu-165-instance-00000083.
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.911 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e31ec9e7-00f0-4653-9a37-7c4445f14143]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.912 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap877a4e79-01 in ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.913 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap877a4e79-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.913 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[649a08bf-2bd1-4167-b83d-45b2f445b1f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe1fba2-04a7-4ecb-a03e-9d1c7d7ff73b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.929 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1857684b-3e13-413a-a344-344f81173a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:56 np0005535469 systemd[1]: Started Virtual Machine qemu-165-instance-00000083.
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:56Z|01367|binding|INFO|Setting lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab ovn-installed in OVS
Nov 25 12:08:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:56Z|01368|binding|INFO|Setting lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab up in Southbound
Nov 25 12:08:56 np0005535469 nova_compute[254092]: 2025-11-25 17:08:56.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.954 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[112cbc39-ea70-4b18-80aa-4271f161c328]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f99b5aad-9a8e-41eb-8036-0fa5195c83e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:56 np0005535469 NetworkManager[48891]: <info>  [1764090536.9894] manager: (tap877a4e79-00): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Nov 25 12:08:56 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.990 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[75504731-3a97-4a41-944e-5c92d8b08802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.030 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[20a7c52b-d1a4-44a0-ae62-818e8ac9c53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.034 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a3641001-5364-4a91-87aa-97d13befb1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:57 np0005535469 NetworkManager[48891]: <info>  [1764090537.0574] device (tap877a4e79-00): carrier: link connected
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.062 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2143caef-8caa-4b5f-b435-c97fead5b830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.078 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b65772d7-ad85-45c4-b06e-0b096bf50209]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396700, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37c71797-4720-401c-aa1a-fd517930675e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:1639'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699464, 'tstamp': 699464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396701, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50106787-fd44-474e-8b18-729fee308bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396702, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.153 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb50713-abb1-4678-8757-3af5fb5f1579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:08:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:08:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996582494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[faa84eda-7825-469e-9b0a-6d54cf26d2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.213 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.213 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap877a4e79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:08:57 np0005535469 NetworkManager[48891]: <info>  [1764090537.2212] manager: (tap877a4e79-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Nov 25 12:08:57 np0005535469 kernel: tap877a4e79-00: entered promiscuous mode
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.227 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap877a4e79-00, col_values=(('external_ids', {'iface-id': '62f37f6b-9026-4c42-8bb8-f4b3e0610e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.227 254096 DEBUG oslo_concurrency.processutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:08:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:57Z|01369|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.229 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.237 254096 DEBUG nova.compute.provider_tree [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.256 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/877a4e79-06f0-432b-a5f9-1a0277ccd412.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/877a4e79-06f0-432b-a5f9-1a0277ccd412.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.256 254096 DEBUG nova.scheduler.client.report [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52287e79-17d3-4618-a5f9-ff047d183305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.258 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/877a4e79-06f0-432b-a5f9-1a0277ccd412.pid.haproxy
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 12:08:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.259 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'env', 'PROCESS_TAG=haproxy-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/877a4e79-06f0-432b-a5f9-1a0277ccd412.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.277 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.291 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090537.2908912, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.291 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Started (Lifecycle Event)
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.310 254096 DEBUG nova.compute.manager [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG oslo_concurrency.lockutils [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG oslo_concurrency.lockutils [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG oslo_concurrency.lockutils [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG nova.compute.manager [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Processing event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.313 254096 INFO nova.scheduler.client.report [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Deleted allocations for instance 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.314 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.315 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.320 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.321 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.329 254096 INFO nova.virt.libvirt.driver [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance spawned successfully.
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.330 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.376 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.376 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090537.29106, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.376 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Paused (Lifecycle Event)
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.387 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.387 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.415 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.419 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090537.3183403, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.419 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Resumed (Lifecycle Event)
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.429 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.458 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.461 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.465 254096 INFO nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 8.68 seconds to spawn the instance on the hypervisor.
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.465 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.493 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.527 254096 INFO nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 9.60 seconds to build instance.
Nov 25 12:08:57 np0005535469 nova_compute[254092]: 2025-11-25 17:08:57.545 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:57 np0005535469 podman[396777]: 2025-11-25 17:08:57.62710224 +0000 UTC m=+0.050942398 container create 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:08:57 np0005535469 systemd[1]: Started libpod-conmon-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652.scope.
Nov 25 12:08:57 np0005535469 podman[396777]: 2025-11-25 17:08:57.600374506 +0000 UTC m=+0.024214684 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:08:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:08:57 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0d9216ee6b30cc22072c4c0c5dbb641729836999db290bc3d2789f7b7629e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:08:57 np0005535469 podman[396777]: 2025-11-25 17:08:57.718153517 +0000 UTC m=+0.141993695 container init 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:08:57 np0005535469 podman[396777]: 2025-11-25 17:08:57.725614222 +0000 UTC m=+0.149454380 container start 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:08:57 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : New worker (396796) forked
Nov 25 12:08:57 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : Loading success.
Nov 25 12:08:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 99 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 25 12:08:58 np0005535469 nova_compute[254092]: 2025-11-25 17:08:58.535 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:08:58 np0005535469 nova_compute[254092]: 2025-11-25 17:08:58.536 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 12:08:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.409 254096 DEBUG nova.compute.manager [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.409 254096 DEBUG oslo_concurrency.lockutils [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.409 254096 DEBUG oslo_concurrency.lockutils [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.410 254096 DEBUG oslo_concurrency.lockutils [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.410 254096 DEBUG nova.compute.manager [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.410 254096 WARNING nova.compute.manager [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.
Nov 25 12:08:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:08:59Z|01370|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 12:08:59 np0005535469 nova_compute[254092]: 2025-11-25 17:08:59.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 88 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 25 12:09:00 np0005535469 nova_compute[254092]: 2025-11-25 17:09:00.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:00 np0005535469 nova_compute[254092]: 2025-11-25 17:09:00.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:01Z|01371|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 12:09:01 np0005535469 NetworkManager[48891]: <info>  [1764090541.1943] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Nov 25 12:09:01 np0005535469 NetworkManager[48891]: <info>  [1764090541.1950] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:01 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:01Z|01372|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.837 254096 DEBUG nova.compute.manager [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.837 254096 DEBUG nova.compute.manager [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.838 254096 DEBUG oslo_concurrency.lockutils [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.838 254096 DEBUG oslo_concurrency.lockutils [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:09:01 np0005535469 nova_compute[254092]: 2025-11-25 17:09:01.838 254096 DEBUG nova.network.neutron [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:09:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Nov 25 12:09:03 np0005535469 nova_compute[254092]: 2025-11-25 17:09:03.467 254096 DEBUG nova.network.neutron [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:09:03 np0005535469 nova_compute[254092]: 2025-11-25 17:09:03.467 254096 DEBUG nova.network.neutron [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:09:03 np0005535469 nova_compute[254092]: 2025-11-25 17:09:03.547 254096 DEBUG oslo_concurrency.lockutils [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:09:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Nov 25 12:09:05 np0005535469 podman[396982]: 2025-11-25 17:09:05.203601599 +0000 UTC m=+0.126190692 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:09:05 np0005535469 podman[396982]: 2025-11-25 17:09:05.31335983 +0000 UTC m=+0.235948893 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:09:05 np0005535469 nova_compute[254092]: 2025-11-25 17:09:05.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:05 np0005535469 nova_compute[254092]: 2025-11-25 17:09:05.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:09:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:09:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b1c94ba0-b95f-4f70-8667-b9b7df4293f1 does not exist
Nov 25 12:09:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1d06334d-8e40-4181-98bf-a38f57a27919 does not exist
Nov 25 12:09:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 586ee341-b3a9-4635-94b7-7b80c47d919a does not exist
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.584 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.585 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.604 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:09:07 np0005535469 podman[397415]: 2025-11-25 17:09:07.717064049 +0000 UTC m=+0.097088844 container create ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:09:07 np0005535469 podman[397415]: 2025-11-25 17:09:07.641135895 +0000 UTC m=+0.021160700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.754 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.754 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.762 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.763 254096 INFO nova.compute.claims [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:09:07 np0005535469 systemd[1]: Started libpod-conmon-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope.
Nov 25 12:09:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:07 np0005535469 nova_compute[254092]: 2025-11-25 17:09:07.935 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:09:07 np0005535469 podman[397415]: 2025-11-25 17:09:07.987842405 +0000 UTC m=+0.367867240 container init ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:09:07 np0005535469 podman[397415]: 2025-11-25 17:09:07.995893576 +0000 UTC m=+0.375918361 container start ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:09:08 np0005535469 affectionate_lamarr[397431]: 167 167
Nov 25 12:09:08 np0005535469 systemd[1]: libpod-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope: Deactivated successfully.
Nov 25 12:09:08 np0005535469 conmon[397431]: conmon ff4bddfac6f409a5fba0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope/container/memory.events
Nov 25 12:09:08 np0005535469 podman[397415]: 2025-11-25 17:09:08.039207065 +0000 UTC m=+0.419231870 container attach ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:09:08 np0005535469 podman[397415]: 2025-11-25 17:09:08.040180301 +0000 UTC m=+0.420205086 container died ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e9dd10fc9d48e9664e9ef00d6bdcb95be3460ca169357ff9532c4620d629c6bf-merged.mount: Deactivated successfully.
Nov 25 12:09:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:09:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253505592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.367 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.373 254096 DEBUG nova.compute.provider_tree [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.385 254096 DEBUG nova.scheduler.client.report [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.430 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.431 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:09:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.629 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.629 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.672 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.734 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:09:08 np0005535469 podman[397415]: 2025-11-25 17:09:08.762106522 +0000 UTC m=+1.142131327 container remove ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:09:08 np0005535469 systemd[1]: libpod-conmon-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope: Deactivated successfully.
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.877 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.878 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.879 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Creating image(s)
Nov 25 12:09:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.914 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.946 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.965 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:08 np0005535469 nova_compute[254092]: 2025-11-25 17:09:08.969 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.009 254096 DEBUG nova.policy [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:09:09 np0005535469 podman[397479]: 2025-11-25 17:09:08.928941588 +0000 UTC m=+0.038419745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.043 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.043 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.067 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:09 np0005535469 nova_compute[254092]: 2025-11-25 17:09:09.071 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:09 np0005535469 podman[397479]: 2025-11-25 17:09:09.164454377 +0000 UTC m=+0.273932534 container create dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:09:09 np0005535469 systemd[1]: Started libpod-conmon-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope.
Nov 25 12:09:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:09 np0005535469 podman[397479]: 2025-11-25 17:09:09.620534567 +0000 UTC m=+0.730012734 container init dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:09:09 np0005535469 podman[397479]: 2025-11-25 17:09:09.628631149 +0000 UTC m=+0.738109296 container start dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:09:09 np0005535469 podman[397479]: 2025-11-25 17:09:09.639115527 +0000 UTC m=+0.748593704 container attach dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:09:09 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.080 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090535.0785992, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.082 254096 INFO nova.compute.manager [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.108 254096 DEBUG nova.compute.manager [None req-2c275085-f0e9-441c-a688-e890800f1705 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.442 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Successfully created port: 8f6c27c2-adf3-4556-9c61-69f27de29c0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:09:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 97 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 964 KiB/s wr, 93 op/s
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.508 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.666 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:10 np0005535469 gallant_jang[397583]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:09:10 np0005535469 gallant_jang[397583]: --> relative data size: 1.0
Nov 25 12:09:10 np0005535469 gallant_jang[397583]: --> All data devices are unavailable
Nov 25 12:09:10 np0005535469 systemd[1]: libpod-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope: Deactivated successfully.
Nov 25 12:09:10 np0005535469 systemd[1]: libpod-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope: Consumed 1.027s CPU time.
Nov 25 12:09:10 np0005535469 podman[397479]: 2025-11-25 17:09:10.730523872 +0000 UTC m=+1.840002039 container died dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:10 np0005535469 nova_compute[254092]: 2025-11-25 17:09:10.755 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:09:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf-merged.mount: Deactivated successfully.
Nov 25 12:09:11 np0005535469 podman[397479]: 2025-11-25 17:09:11.251494101 +0000 UTC m=+2.360972248 container remove dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:09:11 np0005535469 systemd[1]: libpod-conmon-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope: Deactivated successfully.
Nov 25 12:09:11 np0005535469 nova_compute[254092]: 2025-11-25 17:09:11.406 254096 DEBUG nova.objects.instance [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid e5fb0d68-c20f-4118-96eb-4e6de1db03a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:09:11 np0005535469 nova_compute[254092]: 2025-11-25 17:09:11.426 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:09:11 np0005535469 nova_compute[254092]: 2025-11-25 17:09:11.426 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Ensure instance console log exists: /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:09:11 np0005535469 nova_compute[254092]: 2025-11-25 17:09:11.427 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:11 np0005535469 nova_compute[254092]: 2025-11-25 17:09:11.427 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:11 np0005535469 nova_compute[254092]: 2025-11-25 17:09:11.427 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:11 np0005535469 podman[397843]: 2025-11-25 17:09:11.854083799 +0000 UTC m=+0.044698117 container create 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:09:11 np0005535469 systemd[1]: Started libpod-conmon-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope.
Nov 25 12:09:11 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:11Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:f0:6f 10.100.0.3
Nov 25 12:09:11 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:11Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:f0:6f 10.100.0.3
Nov 25 12:09:11 np0005535469 podman[397843]: 2025-11-25 17:09:11.836527858 +0000 UTC m=+0.027142206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:09:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:11 np0005535469 podman[397843]: 2025-11-25 17:09:11.953236399 +0000 UTC m=+0.143850747 container init 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:09:11 np0005535469 podman[397843]: 2025-11-25 17:09:11.960490527 +0000 UTC m=+0.151104845 container start 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:09:11 np0005535469 podman[397843]: 2025-11-25 17:09:11.964409945 +0000 UTC m=+0.155024293 container attach 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:09:11 np0005535469 dazzling_hodgkin[397859]: 167 167
Nov 25 12:09:11 np0005535469 systemd[1]: libpod-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope: Deactivated successfully.
Nov 25 12:09:11 np0005535469 conmon[397859]: conmon 8d46efaf428a446a86c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope/container/memory.events
Nov 25 12:09:11 np0005535469 podman[397843]: 2025-11-25 17:09:11.967898631 +0000 UTC m=+0.158512949 container died 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:09:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5089f48ff53013247aede612232e6dd61a870b5b2acc4754a60c308532f2a12c-merged.mount: Deactivated successfully.
Nov 25 12:09:12 np0005535469 podman[397843]: 2025-11-25 17:09:12.010657113 +0000 UTC m=+0.201271431 container remove 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:09:12 np0005535469 systemd[1]: libpod-conmon-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope: Deactivated successfully.
Nov 25 12:09:12 np0005535469 podman[397883]: 2025-11-25 17:09:12.246976426 +0000 UTC m=+0.066522506 container create bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:09:12 np0005535469 systemd[1]: Started libpod-conmon-bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547.scope.
Nov 25 12:09:12 np0005535469 podman[397883]: 2025-11-25 17:09:12.215489931 +0000 UTC m=+0.035036031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:09:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:12 np0005535469 podman[397883]: 2025-11-25 17:09:12.36710737 +0000 UTC m=+0.186653500 container init bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 12:09:12 np0005535469 podman[397883]: 2025-11-25 17:09:12.375059978 +0000 UTC m=+0.194606058 container start bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:09:12 np0005535469 podman[397883]: 2025-11-25 17:09:12.382982916 +0000 UTC m=+0.202529016 container attach bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 147 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.532 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Successfully updated port: 8f6c27c2-adf3-4556-9c61-69f27de29c0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.558 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.558 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.558 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.690 254096 DEBUG nova.compute.manager [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.690 254096 DEBUG nova.compute.manager [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing instance network info cache due to event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.691 254096 DEBUG oslo_concurrency.lockutils [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:12 np0005535469 nova_compute[254092]: 2025-11-25 17:09:12.751 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]: {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:    "0": [
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:        {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "devices": [
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "/dev/loop3"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            ],
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_name": "ceph_lv0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_size": "21470642176",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "name": "ceph_lv0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "tags": {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cluster_name": "ceph",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.crush_device_class": "",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.encrypted": "0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osd_id": "0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.type": "block",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.vdo": "0"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            },
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "type": "block",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "vg_name": "ceph_vg0"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:        }
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:    ],
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:    "1": [
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:        {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "devices": [
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "/dev/loop4"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            ],
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_name": "ceph_lv1",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_size": "21470642176",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "name": "ceph_lv1",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "tags": {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cluster_name": "ceph",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.crush_device_class": "",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.encrypted": "0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osd_id": "1",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.type": "block",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.vdo": "0"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            },
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "type": "block",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "vg_name": "ceph_vg1"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:        }
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:    ],
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:    "2": [
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:        {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "devices": [
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "/dev/loop5"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            ],
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_name": "ceph_lv2",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_size": "21470642176",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "name": "ceph_lv2",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "tags": {
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.cluster_name": "ceph",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.crush_device_class": "",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.encrypted": "0",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osd_id": "2",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.type": "block",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:                "ceph.vdo": "0"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            },
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "type": "block",
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:            "vg_name": "ceph_vg2"
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:        }
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]:    ]
Nov 25 12:09:13 np0005535469 mystifying_shamir[397901]: }
Nov 25 12:09:13 np0005535469 systemd[1]: libpod-bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547.scope: Deactivated successfully.
Nov 25 12:09:13 np0005535469 podman[397883]: 2025-11-25 17:09:13.177390215 +0000 UTC m=+0.996936295 container died bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:09:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658-merged.mount: Deactivated successfully.
Nov 25 12:09:13 np0005535469 podman[397883]: 2025-11-25 17:09:13.236478326 +0000 UTC m=+1.056024406 container remove bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 12:09:13 np0005535469 systemd[1]: libpod-conmon-bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547.scope: Deactivated successfully.
Nov 25 12:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:13.647 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:13.647 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.749 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.770 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.771 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance network_info: |[{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.773 254096 DEBUG oslo_concurrency.lockutils [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.774 254096 DEBUG nova.network.neutron [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.777 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start _get_guest_xml network_info=[{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.784 254096 WARNING nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.792 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.793 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.796 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.797 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.798 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.798 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.799 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.799 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.800 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.800 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.800 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.802 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:09:13 np0005535469 nova_compute[254092]: 2025-11-25 17:09:13.805 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:13 np0005535469 podman[398065]: 2025-11-25 17:09:13.84586618 +0000 UTC m=+0.063533573 container create bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:09:13 np0005535469 systemd[1]: Started libpod-conmon-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope.
Nov 25 12:09:13 np0005535469 podman[398065]: 2025-11-25 17:09:13.813331958 +0000 UTC m=+0.030999411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:09:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:13 np0005535469 podman[398065]: 2025-11-25 17:09:13.946853469 +0000 UTC m=+0.164520872 container init bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:09:13 np0005535469 podman[398065]: 2025-11-25 17:09:13.955399324 +0000 UTC m=+0.173066677 container start bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 12:09:13 np0005535469 podman[398065]: 2025-11-25 17:09:13.959332462 +0000 UTC m=+0.176999855 container attach bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:09:13 np0005535469 vibrant_mccarthy[398081]: 167 167
Nov 25 12:09:13 np0005535469 systemd[1]: libpod-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope: Deactivated successfully.
Nov 25 12:09:13 np0005535469 conmon[398081]: conmon bb7330cd53dd86ad0f4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope/container/memory.events
Nov 25 12:09:13 np0005535469 podman[398065]: 2025-11-25 17:09:13.963686141 +0000 UTC m=+0.181353504 container died bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:09:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9bcdbf64cb0632f76c236d9f283ea777b8ea63b75aac0f2fa946c47821073f09-merged.mount: Deactivated successfully.
Nov 25 12:09:14 np0005535469 podman[398065]: 2025-11-25 17:09:14.017708334 +0000 UTC m=+0.235375687 container remove bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:09:14 np0005535469 systemd[1]: libpod-conmon-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope: Deactivated successfully.
Nov 25 12:09:14 np0005535469 podman[398124]: 2025-11-25 17:09:14.238396226 +0000 UTC m=+0.056279045 container create 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:09:14 np0005535469 systemd[1]: Started libpod-conmon-63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82.scope.
Nov 25 12:09:14 np0005535469 podman[398124]: 2025-11-25 17:09:14.211322904 +0000 UTC m=+0.029205743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:09:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:09:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351559669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:09:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:14 np0005535469 podman[398124]: 2025-11-25 17:09:14.327618893 +0000 UTC m=+0.145501732 container init 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:09:14 np0005535469 podman[398124]: 2025-11-25 17:09:14.338621325 +0000 UTC m=+0.156504144 container start 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.338 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:14 np0005535469 podman[398124]: 2025-11-25 17:09:14.342272586 +0000 UTC m=+0.160155405 container attach 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.362 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.366 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 147 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 3.5 MiB/s wr, 56 op/s
Nov 25 12:09:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:09:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201384133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.833 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.835 254096 DEBUG nova.virt.libvirt.vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:09:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1686027048',display_name='tempest-TestNetworkBasicOps-server-1686027048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1686027048',id=132,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNdl5to0NzjjAFgFNhpsdh8Jfq3ScLPPqyXuaui5ecMeBqyO36Oalgk9S8OnNSAFKWqaym/gRqT0RKa9nF633E5JOkmAjnn0MFwmhHcgXwoFxcxFcA2kbyimTGOoXBEcA==',key_name='tempest-TestNetworkBasicOps-1297437986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-dsazanwp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:08Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e5fb0d68-c20f-4118-96eb-4e6de1db03a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.835 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.836 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.837 254096 DEBUG nova.objects.instance [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5fb0d68-c20f-4118-96eb-4e6de1db03a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.850 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <uuid>e5fb0d68-c20f-4118-96eb-4e6de1db03a1</uuid>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <name>instance-00000084</name>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1686027048</nova:name>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:09:13</nova:creationTime>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <nova:port uuid="8f6c27c2-adf3-4556-9c61-69f27de29c0a">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <entry name="serial">e5fb0d68-c20f-4118-96eb-4e6de1db03a1</entry>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <entry name="uuid">e5fb0d68-c20f-4118-96eb-4e6de1db03a1</entry>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:66:a8:a6"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <target dev="tap8f6c27c2-ad"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/console.log" append="off"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:09:14 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:09:14 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:09:14 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:09:14 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.850 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Preparing to wait for external event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG nova.virt.libvirt.vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:09:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1686027048',display_name='tempest-TestNetworkBasicOps-server-1686027048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1686027048',id=132,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNdl5to0NzjjAFgFNhpsdh8Jfq3ScLPPqyXuaui5ecMeBqyO36Oalgk9S8OnNSAFKWqaym/gRqT0RKa9nF633E5JOkmAjnn0MFwmhHcgXwoFxcxFcA2kbyimTGOoXBEcA==',key_name='tempest-TestNetworkBasicOps-1297437986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-dsazanwp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:08Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e5fb0d68-c20f-4118-96eb-4e6de1db03a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.852 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.852 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.852 254096 DEBUG os_vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.853 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.854 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.855 254096 DEBUG nova.network.neutron [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updated VIF entry in instance network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.855 254096 DEBUG nova.network.neutron [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.858 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f6c27c2-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.859 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f6c27c2-ad, col_values=(('external_ids', {'iface-id': '8f6c27c2-adf3-4556-9c61-69f27de29c0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:a8:a6', 'vm-uuid': 'e5fb0d68-c20f-4118-96eb-4e6de1db03a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:14 np0005535469 NetworkManager[48891]: <info>  [1764090554.8621] manager: (tap8f6c27c2-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.864 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.868 254096 DEBUG oslo_concurrency.lockutils [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.870 254096 INFO os_vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad')#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.927 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.928 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.928 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:66:a8:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.929 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Using config drive#033[00m
Nov 25 12:09:14 np0005535469 nova_compute[254092]: 2025-11-25 17:09:14.948 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]: {
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "osd_id": 1,
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "type": "bluestore"
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:    },
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "osd_id": 2,
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "type": "bluestore"
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:    },
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "osd_id": 0,
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:        "type": "bluestore"
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]:    }
Nov 25 12:09:15 np0005535469 sleepy_newton[398141]: }
Nov 25 12:09:15 np0005535469 systemd[1]: libpod-63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82.scope: Deactivated successfully.
Nov 25 12:09:15 np0005535469 podman[398124]: 2025-11-25 17:09:15.334560502 +0000 UTC m=+1.152443321 container died 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:09:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d-merged.mount: Deactivated successfully.
Nov 25 12:09:15 np0005535469 podman[398124]: 2025-11-25 17:09:15.424779677 +0000 UTC m=+1.242662496 container remove 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:09:15 np0005535469 systemd[1]: libpod-conmon-63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82.scope: Deactivated successfully.
Nov 25 12:09:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:09:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:09:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ca6cdad2-82c9-4deb-8975-2af298c71564 does not exist
Nov 25 12:09:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 897da2fc-4144-4cfd-91f9-10347b5c32c2 does not exist
Nov 25 12:09:15 np0005535469 nova_compute[254092]: 2025-11-25 17:09:15.511 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:15 np0005535469 nova_compute[254092]: 2025-11-25 17:09:15.834 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Creating config drive at /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config#033[00m
Nov 25 12:09:15 np0005535469 nova_compute[254092]: 2025-11-25 17:09:15.841 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvfl44r_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:15 np0005535469 nova_compute[254092]: 2025-11-25 17:09:15.981 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvfl44r_h" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.010 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.014 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.186 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.187 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deleting local config drive /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config because it was imported into RBD.#033[00m
Nov 25 12:09:16 np0005535469 kernel: tap8f6c27c2-ad: entered promiscuous mode
Nov 25 12:09:16 np0005535469 NetworkManager[48891]: <info>  [1764090556.2509] manager: (tap8f6c27c2-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Nov 25 12:09:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:16Z|01373|binding|INFO|Claiming lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a for this chassis.
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:16Z|01374|binding|INFO|8f6c27c2-adf3-4556-9c61-69f27de29c0a: Claiming fa:16:3e:66:a8:a6 10.100.0.4
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.260 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:a8:a6 10.100.0.4'], port_security=['fa:16:3e:66:a8:a6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e5fb0d68-c20f-4118-96eb-4e6de1db03a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16916ada-fafd-4ccd-ac83-b8a7fafd2092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8f6c27c2-adf3-4556-9c61-69f27de29c0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.263 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8f6c27c2-adf3-4556-9c61-69f27de29c0a in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 bound to our chassis#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.265 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 877a4e79-06f0-432b-a5f9-1a0277ccd412#033[00m
Nov 25 12:09:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:16Z|01375|binding|INFO|Setting lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a ovn-installed in OVS
Nov 25 12:09:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:16Z|01376|binding|INFO|Setting lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a up in Southbound
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.280 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1da6e84-17d2-4256-8645-fa7a33e0e761]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:16 np0005535469 systemd-machined[216343]: New machine qemu-166-instance-00000084.
Nov 25 12:09:16 np0005535469 systemd[1]: Started Virtual Machine qemu-166-instance-00000084.
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac44a671-b4ae-48b0-8917-4806177196aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.312 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[70b3b034-829f-4e8b-81b8-2042174e536b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:16 np0005535469 systemd-udevd[398357]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:09:16 np0005535469 NetworkManager[48891]: <info>  [1764090556.3270] device (tap8f6c27c2-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:09:16 np0005535469 NetworkManager[48891]: <info>  [1764090556.3280] device (tap8f6c27c2-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.340 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a852738f-fae7-47f8-ad57-0f60e506ecf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.355 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad5e965-892a-40f8-b1f4-bf40e3942772]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398367, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.373 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aef77d5e-3771-4d5a-87fb-6c502c02e5f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699476, 'tstamp': 699476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398368, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699479, 'tstamp': 699479}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398368, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.377 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap877a4e79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.378 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.378 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap877a4e79-00, col_values=(('external_ids', {'iface-id': '62f37f6b-9026-4c42-8bb8-f4b3e0610e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.378 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.451 254096 DEBUG nova.compute.manager [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.451 254096 DEBUG oslo_concurrency.lockutils [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.451 254096 DEBUG oslo_concurrency.lockutils [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.452 254096 DEBUG oslo_concurrency.lockutils [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.452 254096 DEBUG nova.compute.manager [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Processing event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:09:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.744 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.745 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090556.7433374, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.745 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Started (Lifecycle Event)#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.751 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.756 254096 INFO nova.virt.libvirt.driver [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance spawned successfully.#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.756 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.768 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.774 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.778 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.779 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.779 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.779 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.780 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.780 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.802 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.802 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090556.7445712, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.802 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.825 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.828 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090556.7502584, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.842 254096 INFO nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 7.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.842 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.843 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.850 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.877 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.904 254096 INFO nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 9.18 seconds to build instance.#033[00m
Nov 25 12:09:16 np0005535469 nova_compute[254092]: 2025-11-25 17:09:16.917 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:17 np0005535469 nova_compute[254092]: 2025-11-25 17:09:17.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 12:09:18 np0005535469 nova_compute[254092]: 2025-11-25 17:09:18.585 254096 DEBUG nova.compute.manager [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:18 np0005535469 nova_compute[254092]: 2025-11-25 17:09:18.585 254096 DEBUG oslo_concurrency.lockutils [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:18 np0005535469 nova_compute[254092]: 2025-11-25 17:09:18.585 254096 DEBUG oslo_concurrency.lockutils [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:18 np0005535469 nova_compute[254092]: 2025-11-25 17:09:18.586 254096 DEBUG oslo_concurrency.lockutils [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:18 np0005535469 nova_compute[254092]: 2025-11-25 17:09:18.586 254096 DEBUG nova.compute.manager [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] No waiting events found dispatching network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:18 np0005535469 nova_compute[254092]: 2025-11-25 17:09:18.586 254096 WARNING nova.compute.manager [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received unexpected event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a for instance with vm_state active and task_state None.#033[00m
Nov 25 12:09:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:19 np0005535469 nova_compute[254092]: 2025-11-25 17:09:19.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 25 12:09:20 np0005535469 nova_compute[254092]: 2025-11-25 17:09:20.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 161 op/s
Nov 25 12:09:22 np0005535469 podman[398412]: 2025-11-25 17:09:22.676186768 +0000 UTC m=+0.094221324 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 12:09:22 np0005535469 podman[398414]: 2025-11-25 17:09:22.676187788 +0000 UTC m=+0.090027910 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:09:22 np0005535469 podman[398413]: 2025-11-25 17:09:22.680734954 +0000 UTC m=+0.090143344 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:09:23 np0005535469 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG nova.compute.manager [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:23 np0005535469 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG nova.compute.manager [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing instance network info cache due to event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:23 np0005535469 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG oslo_concurrency.lockutils [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:23 np0005535469 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG oslo_concurrency.lockutils [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:23 np0005535469 nova_compute[254092]: 2025-11-25 17:09:23.254 254096 DEBUG nova.network.neutron [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:24 np0005535469 nova_compute[254092]: 2025-11-25 17:09:24.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 419 KiB/s wr, 113 op/s
Nov 25 12:09:24 np0005535469 nova_compute[254092]: 2025-11-25 17:09:24.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:24 np0005535469 nova_compute[254092]: 2025-11-25 17:09:24.971 254096 DEBUG nova.network.neutron [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updated VIF entry in instance network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:09:24 np0005535469 nova_compute[254092]: 2025-11-25 17:09:24.971 254096 DEBUG nova.network.neutron [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:09:25 np0005535469 nova_compute[254092]: 2025-11-25 17:09:24.999 254096 DEBUG oslo_concurrency.lockutils [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:09:25 np0005535469 nova_compute[254092]: 2025-11-25 17:09:25.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:25.812 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:09:25 np0005535469 nova_compute[254092]: 2025-11-25 17:09:25.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:25.813 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 12:09:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 423 KiB/s wr, 113 op/s
Nov 25 12:09:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Nov 25 12:09:28 np0005535469 nova_compute[254092]: 2025-11-25 17:09:28.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:09:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:29 np0005535469 nova_compute[254092]: 2025-11-25 17:09:29.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 190 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Nov 25 12:09:30 np0005535469 nova_compute[254092]: 2025-11-25 17:09:30.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:30Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:a8:a6 10.100.0.4
Nov 25 12:09:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:30Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:a8:a6 10.100.0.4
Nov 25 12:09:31 np0005535469 nova_compute[254092]: 2025-11-25 17:09:31.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:09:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Nov 25 12:09:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.801 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.802 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.816 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.890 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.891 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.899 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.899 254096 INFO nova.compute.claims [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:09:34 np0005535469 nova_compute[254092]: 2025-11-25 17:09:34.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.193 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.519 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:09:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:09:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4099504700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.730 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.739 254096 DEBUG nova.compute.provider_tree [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.755 254096 DEBUG nova.scheduler.client.report [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.779 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.781 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:09:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:35.815 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.847 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.848 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.871 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.889 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.985 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.987 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:09:35 np0005535469 nova_compute[254092]: 2025-11-25 17:09:35.988 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Creating image(s)
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.025 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.056 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.081 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.085 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.122 254096 DEBUG nova.policy [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24306f395dd542b6a11b3bd0faadd4ad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '49df21ca46894c8fb4040c7e9eccaef4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.164 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.165 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.166 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.166 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.189 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.193 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:09:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.538 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.614 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] resizing rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.731 254096 DEBUG nova.objects.instance [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a4ffa69-afb1-46b7-9109-8edeb9481103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.745 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.746 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Ensure instance console log exists: /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.746 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.747 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.748 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:09:36 np0005535469 nova_compute[254092]: 2025-11-25 17:09:36.847 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Successfully created port: 507a4b35-dd4f-4777-a88c-c40597fe827b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.509 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Successfully updated port: 507a4b35-dd4f-4777-a88c-c40597fe827b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.528 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.529 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquired lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.529 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.614 254096 DEBUG nova.compute.manager [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.614 254096 DEBUG nova.compute.manager [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing instance network info cache due to event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:09:37 np0005535469 nova_compute[254092]: 2025-11-25 17:09:37.615 254096 DEBUG oslo_concurrency.lockutils [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:09:38 np0005535469 nova_compute[254092]: 2025-11-25 17:09:38.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:09:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:09:38 np0005535469 nova_compute[254092]: 2025-11-25 17:09:38.748 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:09:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.161 254096 INFO nova.compute.manager [None req-1da6962b-6e80-454f-85be-4d52380b7ed1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Get console output
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.182 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.600 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.618 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Releasing lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.619 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance network_info: |[{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.620 254096 DEBUG oslo_concurrency.lockutils [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.621 254096 DEBUG nova.network.neutron [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.628 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start _get_guest_xml network_info=[{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.634 254096 WARNING nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.642 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.644 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.654 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.656 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.657 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.657 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.659 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.659 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.659 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.660 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.660 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.661 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.661 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.662 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.662 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.663 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.669 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:39 np0005535469 nova_compute[254092]: 2025-11-25 17:09:39.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.029 254096 DEBUG nova.compute.manager [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.029 254096 DEBUG nova.compute.manager [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.030 254096 DEBUG oslo_concurrency.lockutils [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.030 254096 DEBUG oslo_concurrency.lockutils [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.030 254096 DEBUG nova.network.neutron [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:09:40
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups']
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:09:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:09:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826761267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.211 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.236 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.243 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 232 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.2 MiB/s wr, 80 op/s
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.529 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.696 254096 DEBUG nova.network.neutron [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updated VIF entry in instance network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.698 254096 DEBUG nova.network.neutron [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.714 254096 DEBUG oslo_concurrency.lockutils [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:09:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765816984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.797 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.800 254096 DEBUG nova.virt.libvirt.vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-955582536',display_name='tempest-TestServerBasicOps-server-955582536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-955582536',id=133,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDdtyaxmMVL8CAc7S++Y8gbOFbk1LCqSryz68UQakJZFbiX886AD33j7kjiNk5kHLWkm6AP4LpqGE2BaOqlADwTjW6lF33LqPINwV3iZ2r2irHV9h1AdPWEego3dYIkYew==',key_name='tempest-TestServerBasicOps-2071633386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49df21ca46894c8fb4040c7e9eccaef4',ramdisk_id='',reservation_id='r-w0qyy7pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1965980686',owner_user_name='tempest-TestServerBasicOps-1965980686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24306f395dd542b6a11b3bd0faadd4ad',uuid=6a4ffa69-afb1-46b7-9109-8edeb9481103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.801 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converting VIF {"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.802 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.804 254096 DEBUG nova.objects.instance [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a4ffa69-afb1-46b7-9109-8edeb9481103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.824 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <uuid>6a4ffa69-afb1-46b7-9109-8edeb9481103</uuid>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <name>instance-00000085</name>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestServerBasicOps-server-955582536</nova:name>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:09:39</nova:creationTime>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:user uuid="24306f395dd542b6a11b3bd0faadd4ad">tempest-TestServerBasicOps-1965980686-project-member</nova:user>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:project uuid="49df21ca46894c8fb4040c7e9eccaef4">tempest-TestServerBasicOps-1965980686</nova:project>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <nova:port uuid="507a4b35-dd4f-4777-a88c-c40597fe827b">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <entry name="serial">6a4ffa69-afb1-46b7-9109-8edeb9481103</entry>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <entry name="uuid">6a4ffa69-afb1-46b7-9109-8edeb9481103</entry>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6a4ffa69-afb1-46b7-9109-8edeb9481103_disk">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:d4:22:50"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <target dev="tap507a4b35-dd"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/console.log" append="off"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:09:40 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:09:40 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:09:40 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:09:40 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.833 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Preparing to wait for external event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.833 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.834 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.834 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.835 254096 DEBUG nova.virt.libvirt.vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-955582536',display_name='tempest-TestServerBasicOps-server-955582536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-955582536',id=133,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDdtyaxmMVL8CAc7S++Y8gbOFbk1LCqSryz68UQakJZFbiX886AD33j7kjiNk5kHLWkm6AP4LpqGE2BaOqlADwTjW6lF33LqPINwV3iZ2r2irHV9h1AdPWEego3dYIkYew==',key_name='tempest-TestServerBasicOps-2071633386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49df21ca46894c8fb4040c7e9eccaef4',ramdisk_id='',reservation_id='r-w0qyy7pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1965980686',owner_user_name='tempest-TestServerBasicOps-1965980686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24306f395dd542b6a11b3bd0faadd4ad',uuid=6a4ffa69-afb1-46b7-9109-8edeb9481103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.836 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converting VIF {"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.837 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.838 254096 DEBUG os_vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.839 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.840 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.844 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap507a4b35-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap507a4b35-dd, col_values=(('external_ids', {'iface-id': '507a4b35-dd4f-4777-a88c-c40597fe827b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:22:50', 'vm-uuid': '6a4ffa69-afb1-46b7-9109-8edeb9481103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:40 np0005535469 NetworkManager[48891]: <info>  [1764090580.8483] manager: (tap507a4b35-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:40 np0005535469 nova_compute[254092]: 2025-11-25 17:09:40.856 254096 INFO os_vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd')#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.028 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.029 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.030 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] No VIF found with MAC fa:16:3e:d4:22:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.031 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Using config drive#033[00m
Nov 25 12:09:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:09:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808388358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.121 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.135 254096 INFO nova.compute.manager [None req-0c87e0ed-a00b-48e8-8457-c887aa2f1ca7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Get console output#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.137 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.145 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.233 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.234 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.238 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.238 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.242 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.243 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.418 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.420 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3304MB free_disk=59.89732360839844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.420 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.420 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e9a105a6-90a1-4e21-9296-61a55e2ceec3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e5fb0d68-c20f-4118-96eb-4e6de1db03a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6a4ffa69-afb1-46b7-9109-8edeb9481103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.617 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.924 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Creating config drive at /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config#033[00m
Nov 25 12:09:41 np0005535469 nova_compute[254092]: 2025-11-25 17:09:41.938 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnok5dwcm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:09:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538877808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.084 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.093 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.099 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnok5dwcm" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.129 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.133 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.167 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.173 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.173 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.173 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 WARNING nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 WARNING nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.199 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.199 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.275 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.276 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deleting local config drive /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config because it was imported into RBD.#033[00m
Nov 25 12:09:42 np0005535469 kernel: tap507a4b35-dd: entered promiscuous mode
Nov 25 12:09:42 np0005535469 NetworkManager[48891]: <info>  [1764090582.3247] manager: (tap507a4b35-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Nov 25 12:09:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:42Z|01377|binding|INFO|Claiming lport 507a4b35-dd4f-4777-a88c-c40597fe827b for this chassis.
Nov 25 12:09:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:42Z|01378|binding|INFO|507a4b35-dd4f-4777-a88c-c40597fe827b: Claiming fa:16:3e:d4:22:50 10.100.0.9
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.325 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.336 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:22:50 10.100.0.9'], port_security=['fa:16:3e:d4:22:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a4ffa69-afb1-46b7-9109-8edeb9481103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e024dc03-b986-42e0-ad9c-68e6318af670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49df21ca46894c8fb4040c7e9eccaef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dda0fb5-abf0-44ce-9142-5535344390ea 94b8f488-8d50-467c-9417-5b43dfa0fc8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edf8fd3-0892-465d-8830-58affb8f0bec, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=507a4b35-dd4f-4777-a88c-c40597fe827b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.338 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 507a4b35-dd4f-4777-a88c-c40597fe827b in datapath e024dc03-b986-42e0-ad9c-68e6318af670 bound to our chassis#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.340 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e024dc03-b986-42e0-ad9c-68e6318af670#033[00m
Nov 25 12:09:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:42Z|01379|binding|INFO|Setting lport 507a4b35-dd4f-4777-a88c-c40597fe827b ovn-installed in OVS
Nov 25 12:09:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:42Z|01380|binding|INFO|Setting lport 507a4b35-dd4f-4777-a88c-c40597fe827b up in Southbound
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.354 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d87fb36-143c-4872-828f-685d6749c365]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.356 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape024dc03-b1 in ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.357 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape024dc03-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.357 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[519c2b4c-63db-423e-bd6f-3df0d93384a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.359 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25d1be8d-ce0f-491b-8094-518cfa3a5550]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 systemd-udevd[398846]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:09:42 np0005535469 systemd-machined[216343]: New machine qemu-167-instance-00000085.
Nov 25 12:09:42 np0005535469 NetworkManager[48891]: <info>  [1764090582.3746] device (tap507a4b35-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:09:42 np0005535469 NetworkManager[48891]: <info>  [1764090582.3757] device (tap507a4b35-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:09:42 np0005535469 systemd[1]: Started Virtual Machine qemu-167-instance-00000085.
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.383 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f8793675-336f-4d51-ab3f-bb7defdb1d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.408 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35091f8f-cb3f-4bc5-bb96-e5d6b89425a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.436 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f8b487-14b4-40e5-84ca-2ee5f6b30088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10e0db70-4839-458c-89b3-f52c32ff2557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 NetworkManager[48891]: <info>  [1764090582.4428] manager: (tape024dc03-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/567)
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.476 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed4a95e-9249-477e-9030-437fc973ca03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.479 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee28ebd-5e37-43e6-bd8c-0a2aa2570344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Nov 25 12:09:42 np0005535469 NetworkManager[48891]: <info>  [1764090582.5156] device (tape024dc03-b0): carrier: link connected
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.524 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41a9c97f-8c9a-417f-a499-157cebc00488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.543 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45cc44df-0f50-4152-bd15-da1912ab05a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape024dc03-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:af:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704010, 'reachable_time': 34527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398878, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.560 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[520dd638-372a-4317-a9e5-9487c1d1e8de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef0:af8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704010, 'tstamp': 704010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398879, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4602c558-b161-4d31-9283-8b299fc800fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape024dc03-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:af:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704010, 'reachable_time': 34527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398880, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.630 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[faadb7bc-c266-4f64-83db-fa8a64f668c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.722 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4d902918-3d7d-45a9-98f8-4a1613041074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape024dc03-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape024dc03-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:42 np0005535469 kernel: tape024dc03-b0: entered promiscuous mode
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:42 np0005535469 NetworkManager[48891]: <info>  [1764090582.7291] manager: (tape024dc03-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape024dc03-b0, col_values=(('external_ids', {'iface-id': '40effa5c-1023-4055-bf6e-f3fb50577f32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:42Z|01381|binding|INFO|Releasing lport 40effa5c-1023-4055-bf6e-f3fb50577f32 from this chassis (sb_readonly=0)
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.746 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e024dc03-b986-42e0-ad9c-68e6318af670.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e024dc03-b986-42e0-ad9c-68e6318af670.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.748 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4078301-9564-404e-904b-ea251371b826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.750 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-e024dc03-b986-42e0-ad9c-68e6318af670
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/e024dc03-b986-42e0-ad9c-68e6318af670.pid.haproxy
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID e024dc03-b986-42e0-ad9c-68e6318af670
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:09:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.751 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'env', 'PROCESS_TAG=haproxy-e024dc03-b986-42e0-ad9c-68e6318af670', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e024dc03-b986-42e0-ad9c-68e6318af670.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.783 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090582.782857, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.784 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Started (Lifecycle Event)#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.807 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.813 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090582.7832856, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.813 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.831 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.836 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:09:42 np0005535469 nova_compute[254092]: 2025-11-25 17:09:42.854 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:09:43 np0005535469 podman[398954]: 2025-11-25 17:09:43.133130504 +0000 UTC m=+0.048458530 container create 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:09:43 np0005535469 systemd[1]: Started libpod-conmon-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b.scope.
Nov 25 12:09:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:09:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a09b45283fd3fcfbf225fb1458ae704eb97d1c98df77d8854db8ed15a4adda7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:09:43 np0005535469 podman[398954]: 2025-11-25 17:09:43.106888015 +0000 UTC m=+0.022216011 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:09:43 np0005535469 podman[398954]: 2025-11-25 17:09:43.215043551 +0000 UTC m=+0.130371537 container init 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 12:09:43 np0005535469 podman[398954]: 2025-11-25 17:09:43.220366587 +0000 UTC m=+0.135694573 container start 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:09:43 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : New worker (398975) forked
Nov 25 12:09:43 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : Loading success.
Nov 25 12:09:43 np0005535469 nova_compute[254092]: 2025-11-25 17:09:43.767 254096 DEBUG nova.network.neutron [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:09:43 np0005535469 nova_compute[254092]: 2025-11-25 17:09:43.768 254096 DEBUG nova.network.neutron [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:43 np0005535469 nova_compute[254092]: 2025-11-25 17:09:43.781 254096 DEBUG oslo_concurrency.lockutils [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.055 254096 DEBUG nova.compute.manager [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.055 254096 DEBUG nova.compute.manager [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.056 254096 DEBUG oslo_concurrency.lockutils [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.056 254096 DEBUG oslo_concurrency.lockutils [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.056 254096 DEBUG nova.network.neutron [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.222 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.222 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.223 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.223 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.224 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Processing event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.224 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.225 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.225 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.226 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.226 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] No waiting events found dispatching network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.227 254096 WARNING nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received unexpected event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.228 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.234 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090584.2345781, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.235 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.238 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.244 254096 INFO nova.virt.libvirt.driver [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance spawned successfully.#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.245 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.260 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.269 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.276 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.277 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.277 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.278 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.279 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.280 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.320 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.363 254096 INFO nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 8.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.364 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.431 254096 INFO nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 9.57 seconds to build instance.#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.449 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.553 254096 INFO nova.compute.manager [None req-2ee2c256-307f-4235-b5c2-7652d2305870 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Get console output#033[00m
Nov 25 12:09:44 np0005535469 nova_compute[254092]: 2025-11-25 17:09:44.559 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:09:45 np0005535469 nova_compute[254092]: 2025-11-25 17:09:45.200 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:09:45 np0005535469 nova_compute[254092]: 2025-11-25 17:09:45.200 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:09:45 np0005535469 nova_compute[254092]: 2025-11-25 17:09:45.201 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:09:45 np0005535469 nova_compute[254092]: 2025-11-25 17:09:45.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:45 np0005535469 nova_compute[254092]: 2025-11-25 17:09:45.745 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:45 np0005535469 nova_compute[254092]: 2025-11-25 17:09:45.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.259 254096 DEBUG nova.network.neutron [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.260 254096 DEBUG nova.network.neutron [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.284 254096 DEBUG oslo_concurrency.lockutils [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.286 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.287 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.288 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.305 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.306 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.306 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.307 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.307 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.307 254096 WARNING nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.308 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.308 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.309 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.309 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.309 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.310 254096 WARNING nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.#033[00m
Nov 25 12:09:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.517 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.517 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.518 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.518 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.518 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.520 254096 INFO nova.compute.manager [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Terminating instance#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.522 254096 DEBUG nova.compute.manager [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:09:46 np0005535469 kernel: tap8f6c27c2-ad (unregistering): left promiscuous mode
Nov 25 12:09:46 np0005535469 NetworkManager[48891]: <info>  [1764090586.5928] device (tap8f6c27c2-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:09:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:46Z|01382|binding|INFO|Releasing lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a from this chassis (sb_readonly=0)
Nov 25 12:09:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:46Z|01383|binding|INFO|Setting lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a down in Southbound
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:46Z|01384|binding|INFO|Removing iface tap8f6c27c2-ad ovn-installed in OVS
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.617 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:a8:a6 10.100.0.4'], port_security=['fa:16:3e:66:a8:a6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e5fb0d68-c20f-4118-96eb-4e6de1db03a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16916ada-fafd-4ccd-ac83-b8a7fafd2092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8f6c27c2-adf3-4556-9c61-69f27de29c0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.618 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8f6c27c2-adf3-4556-9c61-69f27de29c0a in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 unbound from our chassis#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.620 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 877a4e79-06f0-432b-a5f9-1a0277ccd412#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.641 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4118e919-e657-4b49-8ca8-57c24b05ca56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:46 np0005535469 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 25 12:09:46 np0005535469 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Consumed 13.474s CPU time.
Nov 25 12:09:46 np0005535469 systemd-machined[216343]: Machine qemu-166-instance-00000084 terminated.
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.677 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[88fd7b03-8831-4ffe-92cc-15ddb2547d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.681 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[70c14873-e609-4692-b142-80de272e3630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.718 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3ed1ec-ae79-4198-aabe-0a340bd228e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.739 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58ad68a4-499c-4cd9-b69d-53d3f159367d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398995, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfb0e1b-0f78-4c8b-afe9-6b280d9c1980]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699476, 'tstamp': 699476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398998, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699479, 'tstamp': 699479}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398998, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.772 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.773 254096 INFO nova.virt.libvirt.driver [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance destroyed successfully.#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.774 254096 DEBUG nova.objects.instance [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid e5fb0d68-c20f-4118-96eb-4e6de1db03a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.777 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap877a4e79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.777 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap877a4e79-00, col_values=(('external_ids', {'iface-id': '62f37f6b-9026-4c42-8bb8-f4b3e0610e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.785 254096 DEBUG nova.virt.libvirt.vif [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1686027048',display_name='tempest-TestNetworkBasicOps-server-1686027048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1686027048',id=132,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNdl5to0NzjjAFgFNhpsdh8Jfq3ScLPPqyXuaui5ecMeBqyO36Oalgk9S8OnNSAFKWqaym/gRqT0RKa9nF633E5JOkmAjnn0MFwmhHcgXwoFxcxFcA2kbyimTGOoXBEcA==',key_name='tempest-TestNetworkBasicOps-1297437986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-dsazanwp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:09:16Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e5fb0d68-c20f-4118-96eb-4e6de1db03a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.785 254096 DEBUG nova.network.os_vif_util [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.786 254096 DEBUG nova.network.os_vif_util [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.787 254096 DEBUG os_vif [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.789 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f6c27c2-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.797 254096 INFO os_vif [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad')#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.938 254096 DEBUG nova.compute.manager [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-unplugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.939 254096 DEBUG oslo_concurrency.lockutils [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.939 254096 DEBUG oslo_concurrency.lockutils [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.939 254096 DEBUG oslo_concurrency.lockutils [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.940 254096 DEBUG nova.compute.manager [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] No waiting events found dispatching network-vif-unplugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:46 np0005535469 nova_compute[254092]: 2025-11-25 17:09:46.940 254096 DEBUG nova.compute.manager [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-unplugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.157 254096 INFO nova.virt.libvirt.driver [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deleting instance files /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_del#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.158 254096 INFO nova.virt.libvirt.driver [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deletion of /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_del complete#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.224 254096 INFO nova.compute.manager [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.226 254096 DEBUG oslo.service.loopingcall [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.226 254096 DEBUG nova.compute.manager [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.227 254096 DEBUG nova.network.neutron [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.809 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.838 254096 DEBUG nova.network.neutron [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.841 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.842 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.842 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.855 254096 INFO nova.compute.manager [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 0.63 seconds to deallocate network for instance.#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.905 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.906 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:47 np0005535469 nova_compute[254092]: 2025-11-25 17:09:47.997 254096 DEBUG oslo_concurrency.processutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.418 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.418 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing instance network info cache due to event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.419 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.419 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.419 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:09:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543606504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.439 254096 DEBUG oslo_concurrency.processutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.446 254096 DEBUG nova.compute.provider_tree [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.459 254096 DEBUG nova.scheduler.client.report [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.479 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.508 254096 INFO nova.scheduler.client.report [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance e5fb0d68-c20f-4118-96eb-4e6de1db03a1#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.569 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.587 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.842 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.858 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.859 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.859 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing instance network info cache due to event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.859 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.860 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:48 np0005535469 nova_compute[254092]: 2025-11-25 17:09:48.860 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.028 254096 DEBUG nova.compute.manager [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.028 254096 DEBUG oslo_concurrency.lockutils [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.028 254096 DEBUG oslo_concurrency.lockutils [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.029 254096 DEBUG oslo_concurrency.lockutils [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.029 254096 DEBUG nova.compute.manager [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] No waiting events found dispatching network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.029 254096 WARNING nova.compute.manager [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received unexpected event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:09:49 np0005535469 nova_compute[254092]: 2025-11-25 17:09:49.132 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.111 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updated VIF entry in instance network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.112 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.136 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.137 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-deleted-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 190 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.904 254096 DEBUG nova.compute.manager [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.905 254096 DEBUG nova.compute.manager [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.905 254096 DEBUG oslo_concurrency.lockutils [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.905 254096 DEBUG oslo_concurrency.lockutils [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.906 254096 DEBUG nova.network.neutron [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.980 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.981 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.981 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.982 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.982 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.984 254096 INFO nova.compute.manager [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Terminating instance#033[00m
Nov 25 12:09:50 np0005535469 nova_compute[254092]: 2025-11-25 17:09:50.986 254096 DEBUG nova.compute.manager [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:09:51 np0005535469 kernel: tap9b06b5b4-bc (unregistering): left promiscuous mode
Nov 25 12:09:51 np0005535469 NetworkManager[48891]: <info>  [1764090591.0326] device (tap9b06b5b4-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:09:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:51Z|01385|binding|INFO|Releasing lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab from this chassis (sb_readonly=0)
Nov 25 12:09:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:51Z|01386|binding|INFO|Setting lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab down in Southbound
Nov 25 12:09:51 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:51Z|01387|binding|INFO|Removing iface tap9b06b5b4-bc ovn-installed in OVS
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.052 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:f0:6f 10.100.0.3'], port_security=['fa:16:3e:e4:f0:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e9a105a6-90a1-4e21-9296-61a55e2ceec3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f3c9a6b1-fb73-4f95-84e0-2b0bae619305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.054 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 unbound from our chassis#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.057 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 877a4e79-06f0-432b-a5f9-1a0277ccd412, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.058 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e308de-523a-4e5f-a0a4-40c3bafde057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.059 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 namespace which is not needed anymore#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 25 12:09:51 np0005535469 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Consumed 14.504s CPU time.
Nov 25 12:09:51 np0005535469 systemd-machined[216343]: Machine qemu-165-instance-00000083 terminated.
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : haproxy version is 2.8.14-c23fe91
Nov 25 12:09:51 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : path to executable is /usr/sbin/haproxy
Nov 25 12:09:51 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [WARNING]  (396794) : Exiting Master process...
Nov 25 12:09:51 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [ALERT]    (396794) : Current worker (396796) exited with code 143 (Terminated)
Nov 25 12:09:51 np0005535469 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [WARNING]  (396794) : All workers exited. Exiting... (0)
Nov 25 12:09:51 np0005535469 systemd[1]: libpod-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652.scope: Deactivated successfully.
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.227 254096 INFO nova.virt.libvirt.driver [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance destroyed successfully.#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.227 254096 DEBUG nova.objects.instance [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:09:51 np0005535469 podman[399073]: 2025-11-25 17:09:51.230004176 +0000 UTC m=+0.051945195 container died 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.237 254096 DEBUG nova.virt.libvirt.vif [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-367110792',display_name='tempest-TestNetworkBasicOps-server-367110792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-367110792',id=131,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDN+GWJbAZWOIdymw+cz2oLcyndkUPhl6vtYaMAR6OngHd8OlebpGiETQxsMoSybRqEA+rFizH3rxBjAI6Jko4gUoKJ0EE0bXq9XY/gGruR3mEMNu5mTsv7YmUDww+bvsg==',key_name='tempest-TestNetworkBasicOps-2139833866',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-3zuncmiv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:57Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e9a105a6-90a1-4e21-9296-61a55e2ceec3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.238 254096 DEBUG nova.network.os_vif_util [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.238 254096 DEBUG nova.network.os_vif_util [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.238 254096 DEBUG os_vif [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.241 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b06b5b4-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.247 254096 INFO os_vif [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc')#033[00m
Nov 25 12:09:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652-userdata-shm.mount: Deactivated successfully.
Nov 25 12:09:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a0e0d9216ee6b30cc22072c4c0c5dbb641729836999db290bc3d2789f7b7629e-merged.mount: Deactivated successfully.
Nov 25 12:09:51 np0005535469 podman[399073]: 2025-11-25 17:09:51.283620986 +0000 UTC m=+0.105562005 container cleanup 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:09:51 np0005535469 systemd[1]: libpod-conmon-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652.scope: Deactivated successfully.
Nov 25 12:09:51 np0005535469 podman[399133]: 2025-11-25 17:09:51.364856655 +0000 UTC m=+0.054577498 container remove 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.372 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b228835-359f-4a85-935a-416bfe0bd39d]: (4, ('Tue Nov 25 05:09:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 (36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652)\n36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652\nTue Nov 25 05:09:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 (36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652)\n36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.374 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f79e3c2e-f69b-4b2f-a6e4-240d3afc5339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 kernel: tap877a4e79-00: left promiscuous mode
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.393 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.396 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57f5e0bb-10ae-4f21-aa3d-5c3f9043aab3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d264846c-3b00-44a5-a09b-e91f721dcbdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.417 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f592bd0-3e68-4e79-892d-3c9b46ae8a5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.437 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c02a40-9ff7-4775-9688-2b0efc11fb8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699456, 'reachable_time': 37393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399148, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 systemd[1]: run-netns-ovnmeta\x2d877a4e79\x2d06f0\x2d432b\x2da5f9\x2d1a0277ccd412.mount: Deactivated successfully.
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.442 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:09:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.442 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8efe8411-feb5-458c-8bd5-25f9b15c1e0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013727962004297714 of space, bias 1.0, pg target 0.41183886012893145 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:09:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.696 254096 INFO nova.virt.libvirt.driver [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deleting instance files /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3_del#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.698 254096 INFO nova.virt.libvirt.driver [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deletion of /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3_del complete#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.751 254096 INFO nova.compute.manager [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.752 254096 DEBUG oslo.service.loopingcall [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.752 254096 DEBUG nova.compute.manager [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:09:51 np0005535469 nova_compute[254092]: 2025-11-25 17:09:51.753 254096 DEBUG nova.network.neutron [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.087 254096 DEBUG nova.network.neutron [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.088 254096 DEBUG nova.network.neutron [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.109 254096 DEBUG oslo_concurrency.lockutils [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.228 254096 DEBUG nova.network.neutron [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.242 254096 INFO nova.compute.manager [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 0.49 seconds to deallocate network for instance.#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.283 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.283 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.341 254096 DEBUG oslo_concurrency.processutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:09:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 167 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 802 KiB/s wr, 114 op/s
Nov 25 12:09:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:09:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358946284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.829 254096 DEBUG oslo_concurrency.processutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.838 254096 DEBUG nova.compute.provider_tree [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.855 254096 DEBUG nova.scheduler.client.report [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.878 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.916 254096 INFO nova.scheduler.client.report [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance e9a105a6-90a1-4e21-9296-61a55e2ceec3#033[00m
Nov 25 12:09:52 np0005535469 nova_compute[254092]: 2025-11-25 17:09:52.982 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.004 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.005 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.006 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.006 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.007 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.008 254096 WARNING nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.008 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.009 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.010 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.011 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.011 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.012 254096 WARNING nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:09:53 np0005535469 nova_compute[254092]: 2025-11-25 17:09:53.013 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-deleted-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:09:53 np0005535469 podman[399173]: 2025-11-25 17:09:53.655676158 +0000 UTC m=+0.062196697 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:09:53 np0005535469 podman[399172]: 2025-11-25 17:09:53.662914597 +0000 UTC m=+0.075650806 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:09:53 np0005535469 podman[399174]: 2025-11-25 17:09:53.704450785 +0000 UTC m=+0.104996960 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 12:09:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:09:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 167 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Nov 25 12:09:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:09:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2378382817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:09:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:09:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2378382817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:09:55 np0005535469 nova_compute[254092]: 2025-11-25 17:09:55.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:56 np0005535469 nova_compute[254092]: 2025-11-25 17:09:56.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 130 op/s
Nov 25 12:09:58 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:58Z|01388|binding|INFO|Releasing lport 40effa5c-1023-4055-bf6e-f3fb50577f32 from this chassis (sb_readonly=0)
Nov 25 12:09:58 np0005535469 nova_compute[254092]: 2025-11-25 17:09:58.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:09:58 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:58Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:22:50 10.100.0.9
Nov 25 12:09:58 np0005535469 ovn_controller[153477]: 2025-11-25T17:09:58Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:22:50 10.100.0.9
Nov 25 12:09:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.4 KiB/s wr, 88 op/s
Nov 25 12:09:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 107 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Nov 25 12:10:00 np0005535469 nova_compute[254092]: 2025-11-25 17:10:00.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:01 np0005535469 nova_compute[254092]: 2025-11-25 17:10:01.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:01 np0005535469 nova_compute[254092]: 2025-11-25 17:10:01.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:01 np0005535469 nova_compute[254092]: 2025-11-25 17:10:01.767 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090586.766356, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:10:01 np0005535469 nova_compute[254092]: 2025-11-25 17:10:01.767 254096 INFO nova.compute.manager [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:10:01 np0005535469 nova_compute[254092]: 2025-11-25 17:10:01.788 254096 DEBUG nova.compute.manager [None req-1bbf74ad-1ee8-49a3-89f0-5dcfa4077cf5 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:10:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 25 12:10:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 12:10:05 np0005535469 nova_compute[254092]: 2025-11-25 17:10:05.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:05 np0005535469 nova_compute[254092]: 2025-11-25 17:10:05.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:06 np0005535469 nova_compute[254092]: 2025-11-25 17:10:06.225 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090591.2246091, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:10:06 np0005535469 nova_compute[254092]: 2025-11-25 17:10:06.226 254096 INFO nova.compute.manager [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:10:06 np0005535469 nova_compute[254092]: 2025-11-25 17:10:06.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:06 np0005535469 nova_compute[254092]: 2025-11-25 17:10:06.249 254096 DEBUG nova.compute.manager [None req-8ec635fa-1a86-44d1-9714-d79f4954cf72 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:10:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 12:10:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 12:10:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:10:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 12:10:10 np0005535469 nova_compute[254092]: 2025-11-25 17:10:10.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.204 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.205 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.226 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.339 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.340 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.351 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.352 254096 INFO nova.compute.claims [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.472 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:10:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713015053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.897 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.904 254096 DEBUG nova.compute.provider_tree [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.922 254096 DEBUG nova.scheduler.client.report [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.949 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.950 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.998 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:10:11 np0005535469 nova_compute[254092]: 2025-11-25 17:10:11.998 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.014 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.037 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.128 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.130 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.131 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Creating image(s)#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.170 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.210 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.241 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.246 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.353 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.355 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.356 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.357 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.398 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.403 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 678bebc8-318d-4332-b89f-f86ac5f187c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 673 KiB/s wr, 29 op/s
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.747 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 678bebc8-318d-4332-b89f-f86ac5f187c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.801 254096 DEBUG nova.policy [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.841 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.958 254096 DEBUG nova.objects.instance [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 678bebc8-318d-4332-b89f-f86ac5f187c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.971 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.972 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Ensure instance console log exists: /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.973 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.973 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:12 np0005535469 nova_compute[254092]: 2025-11-25 17:10:12.974 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:13.648 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:13.648 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:13.649 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:13 np0005535469 nova_compute[254092]: 2025-11-25 17:10:13.898 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Successfully created port: 3af510d7-9800-4ba2-9c9f-f8ded924314f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:10:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 12:10:14 np0005535469 nova_compute[254092]: 2025-11-25 17:10:14.983 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Successfully updated port: 3af510d7-9800-4ba2-9c9f-f8ded924314f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:10:14 np0005535469 nova_compute[254092]: 2025-11-25 17:10:14.997 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:10:14 np0005535469 nova_compute[254092]: 2025-11-25 17:10:14.998 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:10:14 np0005535469 nova_compute[254092]: 2025-11-25 17:10:14.998 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.078 254096 DEBUG nova.compute.manager [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.078 254096 DEBUG nova.compute.manager [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing instance network info cache due to event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.079 254096 DEBUG oslo_concurrency.lockutils [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.128 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.912 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.931 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.932 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance network_info: |[{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.933 254096 DEBUG oslo_concurrency.lockutils [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.933 254096 DEBUG nova.network.neutron [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.938 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start _get_guest_xml network_info=[{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.944 254096 WARNING nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.949 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.950 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.953 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.954 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.954 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.955 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.955 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.955 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.956 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.956 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.956 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.957 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.957 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.957 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.958 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.958 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:10:15 np0005535469 nova_compute[254092]: 2025-11-25 17:10:15.962 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664822610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.411 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.467 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.471 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:10:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c4ee6f04-d0a4-4564-b4fc-bc42d32c4f1e does not exist
Nov 25 12:10:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev df6765ca-fa8a-44a7-b90d-19745477817f does not exist
Nov 25 12:10:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b85cc2d4-1e1b-4520-8759-1afd4c8ee327 does not exist
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:10:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214018862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.910 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.912 254096 DEBUG nova.virt.libvirt.vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1329077363',display_name='tempest-TestNetworkBasicOps-server-1329077363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1329077363',id=134,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFAq/FVSBvI83QmbRYXeQD5oPtImBiS/J8S4tENTC3HiqmpnLQgZSgQo5Q3eg9KB495TzA7SZganOs6ca4kdDUFsCruPnoCgpH1Af7eq+g3pVeBR/yIGYrZrs0yR9s3UA==',key_name='tempest-TestNetworkBasicOps-801135771',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-k9efxcoe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:10:12Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=678bebc8-318d-4332-b89f-f86ac5f187c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.912 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.913 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.915 254096 DEBUG nova.objects.instance [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 678bebc8-318d-4332-b89f-f86ac5f187c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.929 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <uuid>678bebc8-318d-4332-b89f-f86ac5f187c4</uuid>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <name>instance-00000086</name>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestNetworkBasicOps-server-1329077363</nova:name>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:10:15</nova:creationTime>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <nova:port uuid="3af510d7-9800-4ba2-9c9f-f8ded924314f">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <entry name="serial">678bebc8-318d-4332-b89f-f86ac5f187c4</entry>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <entry name="uuid">678bebc8-318d-4332-b89f-f86ac5f187c4</entry>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/678bebc8-318d-4332-b89f-f86ac5f187c4_disk">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b7:9a:99"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <target dev="tap3af510d7-98"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/console.log" append="off"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:10:16 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:10:16 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:10:16 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:10:16 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.931 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Preparing to wait for external event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.931 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.932 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.932 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.933 254096 DEBUG nova.virt.libvirt.vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1329077363',display_name='tempest-TestNetworkBasicOps-server-1329077363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1329077363',id=134,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFAq/FVSBvI83QmbRYXeQD5oPtImBiS/J8S4tENTC3HiqmpnLQgZSgQo5Q3eg9KB495TzA7SZganOs6ca4kdDUFsCruPnoCgpH1Af7eq+g3pVeBR/yIGYrZrs0yR9s3UA==',key_name='tempest-TestNetworkBasicOps-801135771',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-k9efxcoe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:10:12Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=678bebc8-318d-4332-b89f-f86ac5f187c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.933 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.934 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.934 254096 DEBUG os_vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.936 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.936 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.941 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3af510d7-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.942 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3af510d7-98, col_values=(('external_ids', {'iface-id': '3af510d7-9800-4ba2-9c9f-f8ded924314f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:9a:99', 'vm-uuid': '678bebc8-318d-4332-b89f-f86ac5f187c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:16 np0005535469 NetworkManager[48891]: <info>  [1764090616.9445] manager: (tap3af510d7-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:16 np0005535469 nova_compute[254092]: 2025-11-25 17:10:16.952 254096 INFO os_vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98')#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.106 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.106 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.106 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:b7:9a:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.107 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Using config drive#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.129 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.110673664 +0000 UTC m=+0.032850942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.408845392 +0000 UTC m=+0.331022580 container create 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:10:17 np0005535469 systemd[1]: Started libpod-conmon-8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57.scope.
Nov 25 12:10:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.565 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Creating config drive at /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.573 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphufrjpdg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.723 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphufrjpdg" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.726694) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617726825, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1007, "num_deletes": 251, "total_data_size": 1375693, "memory_usage": 1393936, "flush_reason": "Manual Compaction"}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.728706095 +0000 UTC m=+0.650883363 container init 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.740163109 +0000 UTC m=+0.662340317 container start 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617744179, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1351050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53554, "largest_seqno": 54560, "table_properties": {"data_size": 1346140, "index_size": 2434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10943, "raw_average_key_size": 19, "raw_value_size": 1336226, "raw_average_value_size": 2420, "num_data_blocks": 109, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090528, "oldest_key_time": 1764090528, "file_creation_time": 1764090617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 17542 microseconds, and 10290 cpu microseconds.
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.745778113 +0000 UTC m=+0.667955301 container attach 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.744261) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1351050 bytes OK
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.744292) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.746237) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.746267) EVENT_LOG_v1 {"time_micros": 1764090617746257, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.746292) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1370889, prev total WAL file size 1397763, number of live WAL files 2.
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.747553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1319KB)], [122(8580KB)]
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617747621, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 10137369, "oldest_snapshot_seqno": -1}
Nov 25 12:10:17 np0005535469 zen_yonath[399797]: 167 167
Nov 25 12:10:17 np0005535469 systemd[1]: libpod-8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57.scope: Deactivated successfully.
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.752457397 +0000 UTC m=+0.674634615 container died 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.763 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.773 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7404 keys, 8409268 bytes, temperature: kUnknown
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617810398, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8409268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8362928, "index_size": 26708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 194168, "raw_average_key_size": 26, "raw_value_size": 8233604, "raw_average_value_size": 1112, "num_data_blocks": 1034, "num_entries": 7404, "num_filter_entries": 7404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.810734) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8409268 bytes
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.812125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 133.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.7) write-amplify(6.2) OK, records in: 7918, records dropped: 514 output_compression: NoCompression
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.812144) EVENT_LOG_v1 {"time_micros": 1764090617812134, "job": 74, "event": "compaction_finished", "compaction_time_micros": 62800, "compaction_time_cpu_micros": 25989, "output_level": 6, "num_output_files": 1, "total_output_size": 8409268, "num_input_records": 7918, "num_output_records": 7404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617812549, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617815292, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.747243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:10:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:10:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-495dcd64fe0b8475991de64555be3d3d2f1c2adabb93c1a00967205bb8046ccd-merged.mount: Deactivated successfully.
Nov 25 12:10:17 np0005535469 podman[399762]: 2025-11-25 17:10:17.847056052 +0000 UTC m=+0.769233240 container remove 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:10:17 np0005535469 systemd[1]: libpod-conmon-8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57.scope: Deactivated successfully.
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.966 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:17 np0005535469 nova_compute[254092]: 2025-11-25 17:10:17.968 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deleting local config drive /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config because it was imported into RBD.#033[00m
Nov 25 12:10:18 np0005535469 kernel: tap3af510d7-98: entered promiscuous mode
Nov 25 12:10:18 np0005535469 NetworkManager[48891]: <info>  [1764090618.0202] manager: (tap3af510d7-98): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Nov 25 12:10:18 np0005535469 podman[399862]: 2025-11-25 17:10:18.06250027 +0000 UTC m=+0.082795211 container create 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:10:18 np0005535469 systemd-udevd[399884]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:10:18 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:18Z|01389|binding|INFO|Claiming lport 3af510d7-9800-4ba2-9c9f-f8ded924314f for this chassis.
Nov 25 12:10:18 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:18Z|01390|binding|INFO|3af510d7-9800-4ba2-9c9f-f8ded924314f: Claiming fa:16:3e:b7:9a:99 10.100.0.4
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.072 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:9a:99 10.100.0.4'], port_security=['fa:16:3e:b7:9a:99 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678bebc8-318d-4332-b89f-f86ac5f187c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58ae50d2-6994-45e4-b2e6-36301d8443e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=702c597c-9457-430d-aa7a-2d35c3cf306f, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3af510d7-9800-4ba2-9c9f-f8ded924314f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.073 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3af510d7-9800-4ba2-9c9f-f8ded924314f in datapath 169b0886-fc13-49ff-b4f6-0f14f908ad1c bound to our chassis#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.075 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 169b0886-fc13-49ff-b4f6-0f14f908ad1c#033[00m
Nov 25 12:10:18 np0005535469 NetworkManager[48891]: <info>  [1764090618.0792] device (tap3af510d7-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:10:18 np0005535469 NetworkManager[48891]: <info>  [1764090618.0801] device (tap3af510d7-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:10:18 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:18Z|01391|binding|INFO|Setting lport 3af510d7-9800-4ba2-9c9f-f8ded924314f ovn-installed in OVS
Nov 25 12:10:18 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:18Z|01392|binding|INFO|Setting lport 3af510d7-9800-4ba2-9c9f-f8ded924314f up in Southbound
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.083 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8f3677-a6d5-4f1f-ac72-39b5731b5180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.093 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap169b0886-f1 in ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.094 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap169b0886-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.095 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[405b23d8-b556-42f7-81b6-e2b2e878731d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c216c58c-b5b6-41de-8a25-0e5486e80e8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 systemd-machined[216343]: New machine qemu-168-instance-00000086.
Nov 25 12:10:18 np0005535469 podman[399862]: 2025-11-25 17:10:18.003808691 +0000 UTC m=+0.024103662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.108 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[20b32be6-81bb-40a6-bef9-53779502d7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 systemd[1]: Started libpod-conmon-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope.
Nov 25 12:10:18 np0005535469 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[49f10e76-11d1-4549-88bc-c92d3ee2ee28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:18 np0005535469 podman[399862]: 2025-11-25 17:10:18.158494784 +0000 UTC m=+0.178789745 container init 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:10:18 np0005535469 podman[399862]: 2025-11-25 17:10:18.168059866 +0000 UTC m=+0.188354807 container start 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:10:18 np0005535469 podman[399862]: 2025-11-25 17:10:18.171089709 +0000 UTC m=+0.191384660 container attach 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.170 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f37e4aba-e206-419d-bc00-9029675bbf80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 NetworkManager[48891]: <info>  [1764090618.1819] manager: (tap169b0886-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.180 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[677feadb-8e49-4f3f-bc9b-29220171f9a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.218 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7db7069d-9915-4547-9356-ba90538f2360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.222 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba65125-3325-45d0-bc9c-7e780c3380f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 NetworkManager[48891]: <info>  [1764090618.2473] device (tap169b0886-f0): carrier: link connected
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.255 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bd4585-9e5c-4597-bfc5-192a216d68ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.272 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f21508d-8d18-4fa5-a1ea-bd89162c6dfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap169b0886-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:0e:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707583, 'reachable_time': 40120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399932, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e94c2a-6a93-4a67-a0fd-85a661d43ca0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:e7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707583, 'tstamp': 707583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399933, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.318 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04c30405-c570-472a-8662-c9969fd1edd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap169b0886-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:0e:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707583, 'reachable_time': 40120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399949, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3fff40-89ea-4366-9deb-615befda64fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.406 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5af6afa-4f94-44a8-a189-3f85b98db01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.407 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap169b0886-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.407 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.408 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap169b0886-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:18 np0005535469 NetworkManager[48891]: <info>  [1764090618.4100] manager: (tap169b0886-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:18 np0005535469 kernel: tap169b0886-f0: entered promiscuous mode
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.413 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap169b0886-f0, col_values=(('external_ids', {'iface-id': 'c8371c10-cfa3-4f58-a300-b742148bc7d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:18 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:18Z|01393|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.429 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/169b0886-fc13-49ff-b4f6-0f14f908ad1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/169b0886-fc13-49ff-b4f6-0f14f908ad1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.430 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5539389a-ed76-43d2-a3e2-a63e58ca5127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.431 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-169b0886-fc13-49ff-b4f6-0f14f908ad1c
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/169b0886-fc13-49ff-b4f6-0f14f908ad1c.pid.haproxy
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 169b0886-fc13-49ff-b4f6-0f14f908ad1c
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 12:10:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.431 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'env', 'PROCESS_TAG=haproxy-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/169b0886-fc13-49ff-b4f6-0f14f908ad1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.469 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090618.4683244, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.469 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Started (Lifecycle Event)
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.503 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.507 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090618.46845, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.507 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Paused (Lifecycle Event)
Nov 25 12:10:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.531 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.535 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.556 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.806 254096 DEBUG nova.network.neutron [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updated VIF entry in instance network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.806 254096 DEBUG nova.network.neutron [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.823 254096 DEBUG oslo_concurrency.lockutils [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:10:18 np0005535469 podman[400008]: 2025-11-25 17:10:18.831558114 +0000 UTC m=+0.081738162 container create 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 12:10:18 np0005535469 podman[400008]: 2025-11-25 17:10:18.790789556 +0000 UTC m=+0.040969604 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:10:18 np0005535469 systemd[1]: Started libpod-conmon-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1.scope.
Nov 25 12:10:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26536350f4636af02cd64a2d8c5eb49f4c32108a7d2a493c1f2754a932dc988a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:18 np0005535469 podman[400008]: 2025-11-25 17:10:18.939981409 +0000 UTC m=+0.190161417 container init 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.939 254096 DEBUG nova.compute.manager [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.940 254096 DEBUG oslo_concurrency.lockutils [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.940 254096 DEBUG oslo_concurrency.lockutils [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.940 254096 DEBUG oslo_concurrency.lockutils [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.941 254096 DEBUG nova.compute.manager [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Processing event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.941 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.945 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090618.9454968, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.946 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Resumed (Lifecycle Event)
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.948 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 12:10:18 np0005535469 podman[400008]: 2025-11-25 17:10:18.9491516 +0000 UTC m=+0.199331628 container start 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.953 254096 INFO nova.virt.libvirt.driver [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance spawned successfully.
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.954 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.968 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:10:18 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : New worker (400037) forked
Nov 25 12:10:18 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : Loading success.
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.976 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.983 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.983 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.983 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.984 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.984 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:10:18 np0005535469 nova_compute[254092]: 2025-11-25 17:10:18.984 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:10:19 np0005535469 nova_compute[254092]: 2025-11-25 17:10:19.010 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:10:19 np0005535469 nova_compute[254092]: 2025-11-25 17:10:19.044 254096 INFO nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 6.92 seconds to spawn the instance on the hypervisor.
Nov 25 12:10:19 np0005535469 nova_compute[254092]: 2025-11-25 17:10:19.044 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:10:19 np0005535469 nova_compute[254092]: 2025-11-25 17:10:19.100 254096 INFO nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 7.81 seconds to build instance.
Nov 25 12:10:19 np0005535469 nova_compute[254092]: 2025-11-25 17:10:19.115 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:10:19 np0005535469 unruffled_swirles[399898]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:10:19 np0005535469 unruffled_swirles[399898]: --> relative data size: 1.0
Nov 25 12:10:19 np0005535469 unruffled_swirles[399898]: --> All data devices are unavailable
Nov 25 12:10:19 np0005535469 systemd[1]: libpod-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope: Deactivated successfully.
Nov 25 12:10:19 np0005535469 systemd[1]: libpod-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope: Consumed 1.000s CPU time.
Nov 25 12:10:19 np0005535469 conmon[399898]: conmon 9949995ab30923063940 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope/container/memory.events
Nov 25 12:10:19 np0005535469 podman[399862]: 2025-11-25 17:10:19.240731657 +0000 UTC m=+1.261026598 container died 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:10:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5-merged.mount: Deactivated successfully.
Nov 25 12:10:19 np0005535469 podman[399862]: 2025-11-25 17:10:19.321161763 +0000 UTC m=+1.341456704 container remove 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:10:19 np0005535469 systemd[1]: libpod-conmon-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope: Deactivated successfully.
Nov 25 12:10:19 np0005535469 podman[400214]: 2025-11-25 17:10:19.927976507 +0000 UTC m=+0.034952450 container create 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:10:19 np0005535469 systemd[1]: Started libpod-conmon-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope.
Nov 25 12:10:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:20 np0005535469 podman[400214]: 2025-11-25 17:10:19.912883773 +0000 UTC m=+0.019859746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:10:20 np0005535469 podman[400214]: 2025-11-25 17:10:20.010980964 +0000 UTC m=+0.117956937 container init 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:10:20 np0005535469 podman[400214]: 2025-11-25 17:10:20.017952455 +0000 UTC m=+0.124928398 container start 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:10:20 np0005535469 podman[400214]: 2025-11-25 17:10:20.021928564 +0000 UTC m=+0.128904497 container attach 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:10:20 np0005535469 sharp_mendeleev[400230]: 167 167
Nov 25 12:10:20 np0005535469 systemd[1]: libpod-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope: Deactivated successfully.
Nov 25 12:10:20 np0005535469 conmon[400230]: conmon 36208602e69236f1db8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope/container/memory.events
Nov 25 12:10:20 np0005535469 podman[400214]: 2025-11-25 17:10:20.024090124 +0000 UTC m=+0.131066067 container died 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:10:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-96e3c21acd7c2541299f77909fcb526fb1351245feef185e30ee7c2301940b60-merged.mount: Deactivated successfully.
Nov 25 12:10:20 np0005535469 podman[400214]: 2025-11-25 17:10:20.059256968 +0000 UTC m=+0.166232911 container remove 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:10:20 np0005535469 systemd[1]: libpod-conmon-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope: Deactivated successfully.
Nov 25 12:10:20 np0005535469 podman[400255]: 2025-11-25 17:10:20.273311779 +0000 UTC m=+0.052433019 container create b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:10:20 np0005535469 systemd[1]: Started libpod-conmon-b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2.scope.
Nov 25 12:10:20 np0005535469 podman[400255]: 2025-11-25 17:10:20.250248486 +0000 UTC m=+0.029369766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:10:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:20 np0005535469 podman[400255]: 2025-11-25 17:10:20.400310392 +0000 UTC m=+0.179431722 container init b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:10:20 np0005535469 podman[400255]: 2025-11-25 17:10:20.41043772 +0000 UTC m=+0.189558990 container start b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:10:20 np0005535469 podman[400255]: 2025-11-25 17:10:20.414531732 +0000 UTC m=+0.193653002 container attach b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:10:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 25 12:10:20 np0005535469 nova_compute[254092]: 2025-11-25 17:10:20.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.011 254096 DEBUG nova.compute.manager [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.011 254096 DEBUG oslo_concurrency.lockutils [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 DEBUG oslo_concurrency.lockutils [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 DEBUG oslo_concurrency.lockutils [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 DEBUG nova.compute.manager [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] No waiting events found dispatching network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 WARNING nova.compute.manager [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received unexpected event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f for instance with vm_state active and task_state None.#033[00m
Nov 25 12:10:21 np0005535469 festive_bell[400272]: {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:    "0": [
Nov 25 12:10:21 np0005535469 festive_bell[400272]:        {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "devices": [
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "/dev/loop3"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            ],
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_name": "ceph_lv0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_size": "21470642176",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "name": "ceph_lv0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "tags": {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cluster_name": "ceph",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.crush_device_class": "",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.encrypted": "0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osd_id": "0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.type": "block",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.vdo": "0"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            },
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "type": "block",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "vg_name": "ceph_vg0"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:        }
Nov 25 12:10:21 np0005535469 festive_bell[400272]:    ],
Nov 25 12:10:21 np0005535469 festive_bell[400272]:    "1": [
Nov 25 12:10:21 np0005535469 festive_bell[400272]:        {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "devices": [
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "/dev/loop4"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            ],
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_name": "ceph_lv1",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_size": "21470642176",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "name": "ceph_lv1",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "tags": {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cluster_name": "ceph",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.crush_device_class": "",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.encrypted": "0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osd_id": "1",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.type": "block",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.vdo": "0"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            },
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "type": "block",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "vg_name": "ceph_vg1"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:        }
Nov 25 12:10:21 np0005535469 festive_bell[400272]:    ],
Nov 25 12:10:21 np0005535469 festive_bell[400272]:    "2": [
Nov 25 12:10:21 np0005535469 festive_bell[400272]:        {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "devices": [
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "/dev/loop5"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            ],
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_name": "ceph_lv2",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_size": "21470642176",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "name": "ceph_lv2",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "tags": {
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.cluster_name": "ceph",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.crush_device_class": "",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.encrypted": "0",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osd_id": "2",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.type": "block",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:                "ceph.vdo": "0"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            },
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "type": "block",
Nov 25 12:10:21 np0005535469 festive_bell[400272]:            "vg_name": "ceph_vg2"
Nov 25 12:10:21 np0005535469 festive_bell[400272]:        }
Nov 25 12:10:21 np0005535469 festive_bell[400272]:    ]
Nov 25 12:10:21 np0005535469 festive_bell[400272]: }
Nov 25 12:10:21 np0005535469 systemd[1]: libpod-b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2.scope: Deactivated successfully.
Nov 25 12:10:21 np0005535469 podman[400255]: 2025-11-25 17:10:21.196935232 +0000 UTC m=+0.976056522 container died b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:10:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913-merged.mount: Deactivated successfully.
Nov 25 12:10:21 np0005535469 podman[400255]: 2025-11-25 17:10:21.265687668 +0000 UTC m=+1.044808908 container remove b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:10:21 np0005535469 systemd[1]: libpod-conmon-b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2.scope: Deactivated successfully.
Nov 25 12:10:21 np0005535469 podman[400435]: 2025-11-25 17:10:21.877293823 +0000 UTC m=+0.039527215 container create d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:10:21 np0005535469 systemd[1]: Started libpod-conmon-d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943.scope.
Nov 25 12:10:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:21 np0005535469 nova_compute[254092]: 2025-11-25 17:10:21.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:21 np0005535469 podman[400435]: 2025-11-25 17:10:21.859169276 +0000 UTC m=+0.021402668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:10:21 np0005535469 podman[400435]: 2025-11-25 17:10:21.958883291 +0000 UTC m=+0.121116683 container init d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 12:10:21 np0005535469 podman[400435]: 2025-11-25 17:10:21.966468799 +0000 UTC m=+0.128702171 container start d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:21.970 163448 DEBUG eventlet.wsgi.server [-] (163448) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 25 12:10:21 np0005535469 naughty_mestorf[400452]: 167 167
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:21.972 163448 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: Accept: */*#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: Connection: close#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: Content-Type: text/plain#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: Host: 169.254.169.254#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: User-Agent: curl/7.84.0#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: X-Forwarded-For: 10.100.0.9#015
Nov 25 12:10:21 np0005535469 ovn_metadata_agent[163333]: X-Ovn-Network-Id: e024dc03-b986-42e0-ad9c-68e6318af670 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 25 12:10:21 np0005535469 podman[400435]: 2025-11-25 17:10:21.969665916 +0000 UTC m=+0.131899318 container attach d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:10:21 np0005535469 systemd[1]: libpod-d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943.scope: Deactivated successfully.
Nov 25 12:10:22 np0005535469 podman[400457]: 2025-11-25 17:10:22.020041448 +0000 UTC m=+0.027461924 container died d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:10:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ca8f1378b616ccfb0483ed532c65a889285f5d736447ceeb3a41f19a7af31566-merged.mount: Deactivated successfully.
Nov 25 12:10:22 np0005535469 podman[400457]: 2025-11-25 17:10:22.058548765 +0000 UTC m=+0.065969231 container remove d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:10:22 np0005535469 systemd[1]: libpod-conmon-d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943.scope: Deactivated successfully.
Nov 25 12:10:22 np0005535469 podman[400479]: 2025-11-25 17:10:22.294115226 +0000 UTC m=+0.048601034 container create 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 12:10:22 np0005535469 systemd[1]: Started libpod-conmon-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope.
Nov 25 12:10:22 np0005535469 podman[400479]: 2025-11-25 17:10:22.270304193 +0000 UTC m=+0.024790031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:10:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:10:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:10:22 np0005535469 podman[400479]: 2025-11-25 17:10:22.409583073 +0000 UTC m=+0.164068891 container init 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:10:22 np0005535469 podman[400479]: 2025-11-25 17:10:22.423098123 +0000 UTC m=+0.177583951 container start 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:10:22 np0005535469 podman[400479]: 2025-11-25 17:10:22.426280641 +0000 UTC m=+0.180766439 container attach 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:10:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]: {
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "osd_id": 1,
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "type": "bluestore"
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:    },
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "osd_id": 2,
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "type": "bluestore"
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:    },
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "osd_id": 0,
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:        "type": "bluestore"
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]:    }
Nov 25 12:10:23 np0005535469 zen_mahavira[400496]: }
Nov 25 12:10:23 np0005535469 podman[400479]: 2025-11-25 17:10:23.501552163 +0000 UTC m=+1.256037981 container died 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:10:23 np0005535469 systemd[1]: libpod-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope: Deactivated successfully.
Nov 25 12:10:23 np0005535469 systemd[1]: libpod-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope: Consumed 1.083s CPU time.
Nov 25 12:10:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b-merged.mount: Deactivated successfully.
Nov 25 12:10:23 np0005535469 podman[400479]: 2025-11-25 17:10:23.570971518 +0000 UTC m=+1.325457316 container remove 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:10:23 np0005535469 systemd[1]: libpod-conmon-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope: Deactivated successfully.
Nov 25 12:10:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:10:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:10:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:10:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:10:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9dc6e192-2828-4d26-a60a-59ef743e7d2b does not exist
Nov 25 12:10:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 519f2a67-9458-49dc-b4c9-ade0c78b9c9d does not exist
Nov 25 12:10:23 np0005535469 podman[400566]: 2025-11-25 17:10:23.792608647 +0000 UTC m=+0.074170056 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:10:23 np0005535469 podman[400565]: 2025-11-25 17:10:23.808543754 +0000 UTC m=+0.087271656 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 12:10:23 np0005535469 podman[400576]: 2025-11-25 17:10:23.854018871 +0000 UTC m=+0.117174285 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 12:10:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:23.940 163448 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 25 12:10:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:23.942 163448 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.9699941#033[00m
Nov 25 12:10:23 np0005535469 haproxy-metadata-proxy-e024dc03-b986-42e0-ad9c-68e6318af670[398975]: 10.100.0.9:33136 [25/Nov/2025:17:10:21.968] listener listener/metadata 0/0/0/1973/1973 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.009 163448 DEBUG eventlet.wsgi.server [-] (163448) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.011 163448 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: Accept: */*#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: Connection: close#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: Content-Length: 100#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: Content-Type: application/x-www-form-urlencoded#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: Host: 169.254.169.254#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: User-Agent: curl/7.84.0#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: X-Forwarded-For: 10.100.0.9#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: X-Ovn-Network-Id: e024dc03-b986-42e0-ad9c-68e6318af670#015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: #015
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 25 12:10:24 np0005535469 haproxy-metadata-proxy-e024dc03-b986-42e0-ad9c-68e6318af670[398975]: 10.100.0.9:33150 [25/Nov/2025:17:10:24.008] listener listener/metadata 0/0/0/323/323 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.331 163448 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 25 12:10:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.332 163448 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3211155#033[00m
Nov 25 12:10:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 12:10:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:10:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:10:25 np0005535469 nova_compute[254092]: 2025-11-25 17:10:25.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:25 np0005535469 nova_compute[254092]: 2025-11-25 17:10:25.910 254096 DEBUG nova.compute.manager [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:25 np0005535469 nova_compute[254092]: 2025-11-25 17:10:25.912 254096 DEBUG nova.compute.manager [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing instance network info cache due to event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:10:25 np0005535469 nova_compute[254092]: 2025-11-25 17:10:25.913 254096 DEBUG oslo_concurrency.lockutils [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:10:25 np0005535469 nova_compute[254092]: 2025-11-25 17:10:25.914 254096 DEBUG oslo_concurrency.lockutils [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:10:25 np0005535469 nova_compute[254092]: 2025-11-25 17:10:25.914 254096 DEBUG nova.network.neutron [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.163 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.164 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.164 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.165 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.165 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.166 254096 INFO nova.compute.manager [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Terminating instance#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.167 254096 DEBUG nova.compute.manager [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:10:26 np0005535469 kernel: tap507a4b35-dd (unregistering): left promiscuous mode
Nov 25 12:10:26 np0005535469 NetworkManager[48891]: <info>  [1764090626.2563] device (tap507a4b35-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:10:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:26Z|01394|binding|INFO|Releasing lport 507a4b35-dd4f-4777-a88c-c40597fe827b from this chassis (sb_readonly=0)
Nov 25 12:10:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:26Z|01395|binding|INFO|Setting lport 507a4b35-dd4f-4777-a88c-c40597fe827b down in Southbound
Nov 25 12:10:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:26Z|01396|binding|INFO|Removing iface tap507a4b35-dd ovn-installed in OVS
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.277 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:22:50 10.100.0.9'], port_security=['fa:16:3e:d4:22:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a4ffa69-afb1-46b7-9109-8edeb9481103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e024dc03-b986-42e0-ad9c-68e6318af670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49df21ca46894c8fb4040c7e9eccaef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dda0fb5-abf0-44ce-9142-5535344390ea 94b8f488-8d50-467c-9417-5b43dfa0fc8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edf8fd3-0892-465d-8830-58affb8f0bec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=507a4b35-dd4f-4777-a88c-c40597fe827b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.278 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 507a4b35-dd4f-4777-a88c-c40597fe827b in datapath e024dc03-b986-42e0-ad9c-68e6318af670 unbound from our chassis#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.279 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e024dc03-b986-42e0-ad9c-68e6318af670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7c1da6-cd1e-4996-bbfe-19986d1cec7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.281 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 namespace which is not needed anymore#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 25 12:10:26 np0005535469 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Consumed 16.129s CPU time.
Nov 25 12:10:26 np0005535469 systemd-machined[216343]: Machine qemu-167-instance-00000085 terminated.
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.407 254096 INFO nova.virt.libvirt.driver [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance destroyed successfully.#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.410 254096 DEBUG nova.objects.instance [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lazy-loading 'resources' on Instance uuid 6a4ffa69-afb1-46b7-9109-8edeb9481103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.423 254096 DEBUG nova.virt.libvirt.vif [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-955582536',display_name='tempest-TestServerBasicOps-server-955582536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-955582536',id=133,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDdtyaxmMVL8CAc7S++Y8gbOFbk1LCqSryz68UQakJZFbiX886AD33j7kjiNk5kHLWkm6AP4LpqGE2BaOqlADwTjW6lF33LqPINwV3iZ2r2irHV9h1AdPWEego3dYIkYew==',key_name='tempest-TestServerBasicOps-2071633386',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:09:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='49df21ca46894c8fb4040c7e9eccaef4',ramdisk_id='',reservation_id='r-w0qyy7pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1965980686',owner_user_name='tempest-TestServerBasicOps-1965980686-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:10:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24306f395dd542b6a11b3bd0faadd4ad',uuid=6a4ffa69-afb1-46b7-9109-8edeb9481103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": 
"fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.424 254096 DEBUG nova.network.os_vif_util [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converting VIF {"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.425 254096 DEBUG nova.network.os_vif_util [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.426 254096 DEBUG os_vif [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap507a4b35-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.436 254096 INFO os_vif [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd')#033[00m
Nov 25 12:10:26 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : haproxy version is 2.8.14-c23fe91
Nov 25 12:10:26 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : path to executable is /usr/sbin/haproxy
Nov 25 12:10:26 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [WARNING]  (398973) : Exiting Master process...
Nov 25 12:10:26 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [ALERT]    (398973) : Current worker (398975) exited with code 143 (Terminated)
Nov 25 12:10:26 np0005535469 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [WARNING]  (398973) : All workers exited. Exiting... (0)
Nov 25 12:10:26 np0005535469 systemd[1]: libpod-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b.scope: Deactivated successfully.
Nov 25 12:10:26 np0005535469 podman[400677]: 2025-11-25 17:10:26.454020664 +0000 UTC m=+0.061384675 container died 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:10:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b-userdata-shm.mount: Deactivated successfully.
Nov 25 12:10:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3a09b45283fd3fcfbf225fb1458ae704eb97d1c98df77d8854db8ed15a4adda7-merged.mount: Deactivated successfully.
Nov 25 12:10:26 np0005535469 podman[400677]: 2025-11-25 17:10:26.497735543 +0000 UTC m=+0.105099534 container cleanup 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:10:26 np0005535469 systemd[1]: libpod-conmon-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b.scope: Deactivated successfully.
Nov 25 12:10:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 12:10:26 np0005535469 podman[400736]: 2025-11-25 17:10:26.565773389 +0000 UTC m=+0.047833023 container remove 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.575 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8192767-ffee-425f-bbca-ac7d9d189577]: (4, ('Tue Nov 25 05:10:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 (484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b)\n484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b\nTue Nov 25 05:10:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 (484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b)\n484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.577 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94a8a112-b150-4f4e-a5dc-c210b76579a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.578 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape024dc03-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 kernel: tape024dc03-b0: left promiscuous mode
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.600 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e63515b1-b79c-4289-8dd9-692fd9160421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.614 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae50bb73-e2c2-46e5-b132-7dda748217b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91d45b3d-563d-477b-9d50-ebacc6999257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d50d654c-3801-4dde-a0b0-2b2e72620971]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704002, 'reachable_time': 16802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400752, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 systemd[1]: run-netns-ovnmeta\x2de024dc03\x2db986\x2d42e0\x2dad9c\x2d68e6318af670.mount: Deactivated successfully.
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.636 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.636 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[20e3d3f8-3e05-441c-a5ed-fa16b48d1759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.777 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:10:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.777 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.878 254096 INFO nova.virt.libvirt.driver [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deleting instance files /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103_del#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.879 254096 INFO nova.virt.libvirt.driver [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deletion of /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103_del complete#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.965 254096 INFO nova.compute.manager [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.966 254096 DEBUG oslo.service.loopingcall [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.966 254096 DEBUG nova.compute.manager [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:10:26 np0005535469 nova_compute[254092]: 2025-11-25 17:10:26.966 254096 DEBUG nova.network.neutron [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:10:27 np0005535469 nova_compute[254092]: 2025-11-25 17:10:27.860 254096 DEBUG nova.network.neutron [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updated VIF entry in instance network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:10:27 np0005535469 nova_compute[254092]: 2025-11-25 17:10:27.861 254096 DEBUG nova.network.neutron [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:10:27 np0005535469 nova_compute[254092]: 2025-11-25 17:10:27.882 254096 DEBUG oslo_concurrency.lockutils [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.007 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-unplugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.008 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.008 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.009 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.009 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] No waiting events found dispatching network-vif-unplugged-507a4b35-dd4f-4777-a88c-c40597fe827b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.009 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-unplugged-507a4b35-dd4f-4777-a88c-c40597fe827b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.011 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] No waiting events found dispatching network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.011 254096 WARNING nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received unexpected event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:10:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.873 254096 DEBUG nova.network.neutron [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:10:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:28 np0005535469 nova_compute[254092]: 2025-11-25 17:10:28.941 254096 INFO nova.compute.manager [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 1.97 seconds to deallocate network for instance.#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.128 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.129 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.315 254096 DEBUG oslo_concurrency.processutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:10:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175135485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.780 254096 DEBUG oslo_concurrency.processutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.787 254096 DEBUG nova.compute.provider_tree [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.812 254096 DEBUG nova.scheduler.client.report [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.834 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.861 254096 INFO nova.scheduler.client.report [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Deleted allocations for instance 6a4ffa69-afb1-46b7-9109-8edeb9481103#033[00m
Nov 25 12:10:29 np0005535469 nova_compute[254092]: 2025-11-25 17:10:29.918 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:30 np0005535469 nova_compute[254092]: 2025-11-25 17:10:30.115 254096 DEBUG nova.compute.manager [req-bac99123-275a-431e-bbd5-a6c8cf408717 req-b0c01ca8-8998-4137-8c54-9b42908083b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-deleted-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:30 np0005535469 nova_compute[254092]: 2025-11-25 17:10:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:30 np0005535469 nova_compute[254092]: 2025-11-25 17:10:30.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 116 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 96 op/s
Nov 25 12:10:31 np0005535469 nova_compute[254092]: 2025-11-25 17:10:31.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:31.779 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:32 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:32Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:9a:99 10.100.0.4
Nov 25 12:10:32 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:32Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:9a:99 10.100.0.4
Nov 25 12:10:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 88 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 170 KiB/s wr, 69 op/s
Nov 25 12:10:33 np0005535469 nova_compute[254092]: 2025-11-25 17:10:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 88 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 980 KiB/s rd, 170 KiB/s wr, 63 op/s
Nov 25 12:10:35 np0005535469 nova_compute[254092]: 2025-11-25 17:10:35.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:36Z|01397|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 12:10:36 np0005535469 nova_compute[254092]: 2025-11-25 17:10:36.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:36 np0005535469 nova_compute[254092]: 2025-11-25 17:10:36.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 12:10:37 np0005535469 nova_compute[254092]: 2025-11-25 17:10:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:37 np0005535469 nova_compute[254092]: 2025-11-25 17:10:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:10:38 np0005535469 nova_compute[254092]: 2025-11-25 17:10:38.487 254096 INFO nova.compute.manager [None req-0625a721-3427-40f3-b1d8-3a3ae55ba477 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Get console output#033[00m
Nov 25 12:10:38 np0005535469 nova_compute[254092]: 2025-11-25 17:10:38.493 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:10:38 np0005535469 nova_compute[254092]: 2025-11-25 17:10:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 12:10:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:39Z|01398|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 12:10:39 np0005535469 nova_compute[254092]: 2025-11-25 17:10:39.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:39Z|01399|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 12:10:39 np0005535469 nova_compute[254092]: 2025-11-25 17:10:39.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:10:40
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.469 254096 INFO nova.compute.manager [None req-4e854509-5189-424c-bd55-0ef957736341 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Get console output#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.473 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:40 np0005535469 nova_compute[254092]: 2025-11-25 17:10:40.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:10:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 25 12:10:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:10:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910119553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.008 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:41 np0005535469 NetworkManager[48891]: <info>  [1764090641.0690] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Nov 25 12:10:41 np0005535469 NetworkManager[48891]: <info>  [1764090641.0700] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.080 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.080 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:10:41 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:41Z|01400|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.224 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.225 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3507MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.225 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.225 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.305 254096 INFO nova.compute.manager [None req-5d4179ee-086e-4bb9-9905-93683c2c4b0f e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Get console output#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.309 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.404 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090626.402938, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.405 254096 INFO nova.compute.manager [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.421 254096 DEBUG nova.compute.manager [None req-e98c1e4d-db01-45c1-8923-5db874b2e8aa - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.642 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 678bebc8-318d-4332-b89f-f86ac5f187c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.642 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:10:41 np0005535469 nova_compute[254092]: 2025-11-25 17:10:41.643 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.120 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.275 254096 DEBUG nova.compute.manager [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.275 254096 DEBUG nova.compute.manager [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing instance network info cache due to event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.276 254096 DEBUG oslo_concurrency.lockutils [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.276 254096 DEBUG oslo_concurrency.lockutils [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.276 254096 DEBUG nova.network.neutron [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.492 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.493 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.493 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.494 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.494 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.496 254096 INFO nova.compute.manager [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Terminating instance#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.498 254096 DEBUG nova.compute.manager [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:10:42 np0005535469 kernel: tap3af510d7-98 (unregistering): left promiscuous mode
Nov 25 12:10:42 np0005535469 NetworkManager[48891]: <info>  [1764090642.5560] device (tap3af510d7-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:10:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:42Z|01401|binding|INFO|Releasing lport 3af510d7-9800-4ba2-9c9f-f8ded924314f from this chassis (sb_readonly=0)
Nov 25 12:10:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:42Z|01402|binding|INFO|Setting lport 3af510d7-9800-4ba2-9c9f-f8ded924314f down in Southbound
Nov 25 12:10:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:10:42Z|01403|binding|INFO|Removing iface tap3af510d7-98 ovn-installed in OVS
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.572 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:9a:99 10.100.0.4'], port_security=['fa:16:3e:b7:9a:99 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678bebc8-318d-4332-b89f-f86ac5f187c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58ae50d2-6994-45e4-b2e6-36301d8443e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=702c597c-9457-430d-aa7a-2d35c3cf306f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3af510d7-9800-4ba2-9c9f-f8ded924314f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:10:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.573 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3af510d7-9800-4ba2-9c9f-f8ded924314f in datapath 169b0886-fc13-49ff-b4f6-0f14f908ad1c unbound from our chassis#033[00m
Nov 25 12:10:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.574 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 169b0886-fc13-49ff-b4f6-0f14f908ad1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:10:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.576 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb2c76a-ff43-40b0-a2c0-5417f7742588]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.576 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c namespace which is not needed anymore#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.597 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.597 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:10:42 np0005535469 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 25 12:10:42 np0005535469 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 13.280s CPU time.
Nov 25 12:10:42 np0005535469 systemd-machined[216343]: Machine qemu-168-instance-00000086 terminated.
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.626 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.656 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:10:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.696 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.753 254096 INFO nova.virt.libvirt.driver [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance destroyed successfully.#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.755 254096 DEBUG nova.objects.instance [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 678bebc8-318d-4332-b89f-f86ac5f187c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:10:42 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : haproxy version is 2.8.14-c23fe91
Nov 25 12:10:42 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : path to executable is /usr/sbin/haproxy
Nov 25 12:10:42 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [WARNING]  (400035) : Exiting Master process...
Nov 25 12:10:42 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [ALERT]    (400035) : Current worker (400037) exited with code 143 (Terminated)
Nov 25 12:10:42 np0005535469 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [WARNING]  (400035) : All workers exited. Exiting... (0)
Nov 25 12:10:42 np0005535469 systemd[1]: libpod-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1.scope: Deactivated successfully.
Nov 25 12:10:42 np0005535469 podman[400826]: 2025-11-25 17:10:42.772303402 +0000 UTC m=+0.064276914 container died 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.780 254096 DEBUG nova.virt.libvirt.vif [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1329077363',display_name='tempest-TestNetworkBasicOps-server-1329077363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1329077363',id=134,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFAq/FVSBvI83QmbRYXeQD5oPtImBiS/J8S4tENTC3HiqmpnLQgZSgQo5Q3eg9KB495TzA7SZganOs6ca4kdDUFsCruPnoCgpH1Af7eq+g3pVeBR/yIGYrZrs0yR9s3UA==',key_name='tempest-TestNetworkBasicOps-801135771',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:10:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-k9efxcoe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:10:19Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=678bebc8-318d-4332-b89f-f86ac5f187c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.784 254096 DEBUG nova.network.os_vif_util [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.786 254096 DEBUG nova.network.os_vif_util [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.787 254096 DEBUG os_vif [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.794 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3af510d7-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1-userdata-shm.mount: Deactivated successfully.
Nov 25 12:10:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-26536350f4636af02cd64a2d8c5eb49f4c32108a7d2a493c1f2754a932dc988a-merged.mount: Deactivated successfully.
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.894 254096 DEBUG nova.compute.manager [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-unplugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.894 254096 DEBUG oslo_concurrency.lockutils [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG oslo_concurrency.lockutils [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG oslo_concurrency.lockutils [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG nova.compute.manager [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] No waiting events found dispatching network-vif-unplugged-3af510d7-9800-4ba2-9c9f-f8ded924314f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG nova.compute.manager [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-unplugged-3af510d7-9800-4ba2-9c9f-f8ded924314f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:10:42 np0005535469 nova_compute[254092]: 2025-11-25 17:10:42.899 254096 INFO os_vif [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98')#033[00m
Nov 25 12:10:42 np0005535469 podman[400826]: 2025-11-25 17:10:42.987974697 +0000 UTC m=+0.279948209 container cleanup 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 12:10:43 np0005535469 systemd[1]: libpod-conmon-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1.scope: Deactivated successfully.
Nov 25 12:10:43 np0005535469 podman[400904]: 2025-11-25 17:10:43.071975321 +0000 UTC m=+0.059791800 container remove 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.080 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5d2631-45db-4c12-af44-423d47723aab]: (4, ('Tue Nov 25 05:10:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c (2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1)\n2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1\nTue Nov 25 05:10:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c (2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1)\n2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06181a79-c47d-4ddf-bb1b-4d9d370e8fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.082 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap169b0886-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:10:43 np0005535469 kernel: tap169b0886-f0: left promiscuous mode
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af5117b9-2e74-4223-beaf-934d742673b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecdc5e4-b03c-40a4-b9d4-6e1ff04a301d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26549f66-8e1b-4c5d-a119-23e833c4d34f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.150 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6e8dcd-70cf-4431-a80a-49c60c82ae23]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707575, 'reachable_time': 37880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400917, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.153 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:10:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.153 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[db2c3fd8-fc14-43c9-9516-4782b2567669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:10:43 np0005535469 systemd[1]: run-netns-ovnmeta\x2d169b0886\x2dfc13\x2d49ff\x2db4f6\x2d0f14f908ad1c.mount: Deactivated successfully.
Nov 25 12:10:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:10:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574049576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.264 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.272 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.289 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.320 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.321 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.376 254096 INFO nova.virt.libvirt.driver [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deleting instance files /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4_del#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.379 254096 INFO nova.virt.libvirt.driver [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deletion of /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4_del complete#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.432 254096 INFO nova.compute.manager [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.433 254096 DEBUG oslo.service.loopingcall [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.434 254096 DEBUG nova.compute.manager [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.434 254096 DEBUG nova.network.neutron [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:10:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.984 254096 DEBUG nova.network.neutron [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updated VIF entry in instance network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:10:43 np0005535469 nova_compute[254092]: 2025-11-25 17:10:43.985 254096 DEBUG nova.network.neutron [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.007 254096 DEBUG oslo_concurrency.lockutils [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.199 254096 DEBUG nova.network.neutron [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.214 254096 INFO nova.compute.manager [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 0.78 seconds to deallocate network for instance.#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.250 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.251 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.292 254096 DEBUG oslo_concurrency.processutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.362 254096 DEBUG nova.compute.manager [req-aeed8a7c-7771-4293-a3f5-1470efce5354 req-c3d1202e-e954-4b26-a021-98e3e9992416 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-deleted-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Nov 25 12:10:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:10:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973931364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.723 254096 DEBUG oslo_concurrency.processutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.730 254096 DEBUG nova.compute.provider_tree [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.752 254096 DEBUG nova.scheduler.client.report [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.776 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.817 254096 INFO nova.scheduler.client.report [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 678bebc8-318d-4332-b89f-f86ac5f187c4#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.869 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.967 254096 DEBUG nova.compute.manager [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG oslo_concurrency.lockutils [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG oslo_concurrency.lockutils [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG oslo_concurrency.lockutils [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG nova.compute.manager [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] No waiting events found dispatching network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:10:44 np0005535469 nova_compute[254092]: 2025-11-25 17:10:44.969 254096 WARNING nova.compute.manager [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received unexpected event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:10:45 np0005535469 nova_compute[254092]: 2025-11-25 17:10:45.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.0 MiB/s wr, 89 op/s
Nov 25 12:10:47 np0005535469 nova_compute[254092]: 2025-11-25 17:10:47.322 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:47 np0005535469 nova_compute[254092]: 2025-11-25 17:10:47.323 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:10:47 np0005535469 nova_compute[254092]: 2025-11-25 17:10:47.361 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:10:47 np0005535469 nova_compute[254092]: 2025-11-25 17:10:47.362 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:10:47 np0005535469 nova_compute[254092]: 2025-11-25 17:10:47.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 12:10:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:49 np0005535469 nova_compute[254092]: 2025-11-25 17:10:49.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:49 np0005535469 nova_compute[254092]: 2025-11-25 17:10:49.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:50 np0005535469 nova_compute[254092]: 2025-11-25 17:10:50.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:10:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:10:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:10:52 np0005535469 nova_compute[254092]: 2025-11-25 17:10:52.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:10:54 np0005535469 podman[400945]: 2025-11-25 17:10:54.638974041 +0000 UTC m=+0.048235914 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:10:54 np0005535469 podman[400944]: 2025-11-25 17:10:54.64657456 +0000 UTC m=+0.057509939 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:10:54 np0005535469 podman[400946]: 2025-11-25 17:10:54.686484264 +0000 UTC m=+0.090000799 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 25 12:10:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:10:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:10:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3181114344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:10:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:10:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3181114344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:10:55 np0005535469 nova_compute[254092]: 2025-11-25 17:10:55.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:10:57 np0005535469 nova_compute[254092]: 2025-11-25 17:10:57.750 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090642.7384002, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:10:57 np0005535469 nova_compute[254092]: 2025-11-25 17:10:57.751 254096 INFO nova.compute.manager [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:10:57 np0005535469 nova_compute[254092]: 2025-11-25 17:10:57.769 254096 DEBUG nova.compute.manager [None req-98fe419f-2440-4f83-bc24-9853e80c65fb - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:10:57 np0005535469 nova_compute[254092]: 2025-11-25 17:10:57.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:10:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:10:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:00 np0005535469 nova_compute[254092]: 2025-11-25 17:11:00.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:02 np0005535469 nova_compute[254092]: 2025-11-25 17:11:02.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:05 np0005535469 nova_compute[254092]: 2025-11-25 17:11:05.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.584 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:5c:a4 10.100.0.2 2001:db8::f816:3eff:fee8:5ca4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee8:5ca4/64', 'neutron:device_id': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6c3d14c5-0ace-4cc3-828c-1d4061ad3097) old=Port_Binding(mac=['fa:16:3e:e8:5c:a4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:11:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.585 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 updated#033[00m
Nov 25 12:11:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.586 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:11:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f782c710-103d-44e3-8fbb-d924e214427c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:07 np0005535469 nova_compute[254092]: 2025-11-25 17:11:07.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:11:10 np0005535469 nova_compute[254092]: 2025-11-25 17:11:10.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:12 np0005535469 nova_compute[254092]: 2025-11-25 17:11:12.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:13.649 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:13.650 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:13.650 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.390 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.390 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.411 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.492 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.493 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.504 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.505 254096 INFO nova.compute.claims [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.602 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:15 np0005535469 nova_compute[254092]: 2025-11-25 17:11:15.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:11:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190248708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.043 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.049 254096 DEBUG nova.compute.provider_tree [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.139 254096 DEBUG nova.scheduler.client.report [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.230 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.231 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.272 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.272 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.293 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.308 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.381 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.383 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.384 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Creating image(s)#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.415 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.436 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.460 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.465 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.511 254096 DEBUG nova.policy [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.563 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.564 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.565 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.565 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.594 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.600 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2f4d2580-5acd-4693-a158-926565a16fe9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:16 np0005535469 nova_compute[254092]: 2025-11-25 17:11:16.963 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2f4d2580-5acd-4693-a158-926565a16fe9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.025 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.117 254096 DEBUG nova.objects.instance [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.131 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.131 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Ensure instance console log exists: /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.132 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.132 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.133 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:17 np0005535469 nova_compute[254092]: 2025-11-25 17:11:17.754 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Successfully created port: 0d520bf2-3b97-484c-9046-b614ea281b88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.459 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Successfully updated port: 0d520bf2-3b97-484c-9046-b614ea281b88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.472 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.472 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.473 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.571 254096 DEBUG nova.compute.manager [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.572 254096 DEBUG nova.compute.manager [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing instance network info cache due to event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.573 254096 DEBUG oslo_concurrency.lockutils [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:11:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:18 np0005535469 nova_compute[254092]: 2025-11-25 17:11:18.759 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.760 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.783 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.783 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance network_info: |[{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.784 254096 DEBUG oslo_concurrency.lockutils [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.784 254096 DEBUG nova.network.neutron [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.788 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start _get_guest_xml network_info=[{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.792 254096 WARNING nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.796 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.797 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.800 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.800 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.800 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:11:19 np0005535469 nova_compute[254092]: 2025-11-25 17:11:19.805 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:11:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720976347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.251 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.277 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.281 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:11:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822169432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:11:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 75 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.722 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.725 254096 DEBUG nova.virt.libvirt.vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-318572700',display_name='tempest-TestGettingAddress-server-318572700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-318572700',id=135,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-6ty6vm7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:11:16Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=2f4d2580-5acd-4693-a158-926565a16fe9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.726 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.728 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.731 254096 DEBUG nova.objects.instance [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.752 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <uuid>2f4d2580-5acd-4693-a158-926565a16fe9</uuid>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <name>instance-00000087</name>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-318572700</nova:name>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:11:19</nova:creationTime>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <nova:port uuid="0d520bf2-3b97-484c-9046-b614ea281b88">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fefa:3cb2" ipVersion="6"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <entry name="serial">2f4d2580-5acd-4693-a158-926565a16fe9</entry>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <entry name="uuid">2f4d2580-5acd-4693-a158-926565a16fe9</entry>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2f4d2580-5acd-4693-a158-926565a16fe9_disk">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/2f4d2580-5acd-4693-a158-926565a16fe9_disk.config">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:fa:3c:b2"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <target dev="tap0d520bf2-3b"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/console.log" append="off"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:11:20 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:11:20 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:11:20 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:11:20 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Preparing to wait for external event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.756 254096 DEBUG nova.virt.libvirt.vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-318572700',display_name='tempest-TestGettingAddress-server-318572700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-318572700',id=135,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-6ty6vm7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:11:16Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=2f4d2580-5acd-4693-a158-926565a16fe9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.756 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.757 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.757 254096 DEBUG os_vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.758 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.758 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.762 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d520bf2-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.763 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d520bf2-3b, col_values=(('external_ids', {'iface-id': '0d520bf2-3b97-484c-9046-b614ea281b88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:3c:b2', 'vm-uuid': '2f4d2580-5acd-4693-a158-926565a16fe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:20 np0005535469 NetworkManager[48891]: <info>  [1764090680.7677] manager: (tap0d520bf2-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.772 254096 INFO os_vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b')#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.822 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.822 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.822 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:fa:3c:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.823 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Using config drive#033[00m
Nov 25 12:11:20 np0005535469 nova_compute[254092]: 2025-11-25 17:11:20.848 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.234 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Creating config drive at /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.240 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzl60rrde execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.280 254096 DEBUG nova.network.neutron [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated VIF entry in instance network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.281 254096 DEBUG nova.network.neutron [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.303 254096 DEBUG oslo_concurrency.lockutils [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.385 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzl60rrde" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.433 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:11:21 np0005535469 nova_compute[254092]: 2025-11-25 17:11:21.438 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.512 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.513 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deleting local config drive /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config because it was imported into RBD.#033[00m
Nov 25 12:11:22 np0005535469 kernel: tap0d520bf2-3b: entered promiscuous mode
Nov 25 12:11:22 np0005535469 NetworkManager[48891]: <info>  [1764090682.5690] manager: (tap0d520bf2-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:22Z|01404|binding|INFO|Claiming lport 0d520bf2-3b97-484c-9046-b614ea281b88 for this chassis.
Nov 25 12:11:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:22Z|01405|binding|INFO|0d520bf2-3b97-484c-9046-b614ea281b88: Claiming fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.584 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], port_security=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fefa:3cb2/64', 'neutron:device_id': '2f4d2580-5acd-4693-a158-926565a16fe9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d520bf2-3b97-484c-9046-b614ea281b88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.585 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d520bf2-3b97-484c-9046-b614ea281b88 in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 bound to our chassis#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9#033[00m
Nov 25 12:11:22 np0005535469 systemd-udevd[401332]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.600 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00a511e0-9dd3-46db-a91e-bafe5cd2e4b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.601 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0bd20b9f-21 in ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.603 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0bd20b9f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a91b4b51-7b39-497f-a9c6-944fbe1e8825]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fec58a-376e-47f8-86e5-c2dbb079b27d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 systemd-machined[216343]: New machine qemu-169-instance-00000087.
Nov 25 12:11:22 np0005535469 NetworkManager[48891]: <info>  [1764090682.6102] device (tap0d520bf2-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:11:22 np0005535469 NetworkManager[48891]: <info>  [1764090682.6113] device (tap0d520bf2-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.616 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[da13c04a-d04c-49ee-a2fa-76d202fc18bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 systemd[1]: Started Virtual Machine qemu-169-instance-00000087.
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.640 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c01c3018-cacf-4ae6-a149-c0290428b95c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:22Z|01406|binding|INFO|Setting lport 0d520bf2-3b97-484c-9046-b614ea281b88 ovn-installed in OVS
Nov 25 12:11:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:22Z|01407|binding|INFO|Setting lport 0d520bf2-3b97-484c-9046-b614ea281b88 up in Southbound
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.670 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9fee1a-a99a-4090-bc30-420678ac1651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 NetworkManager[48891]: <info>  [1764090682.6787] manager: (tap0bd20b9f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/577)
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.677 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf759df5-cfe7-484d-b982-1c5521de4d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.724 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2b3b22-7dff-4935-8e5a-89f3eaf8b839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.730 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7e3942-1b5c-4f80-9cbd-39c7fa3a05e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 NetworkManager[48891]: <info>  [1764090682.7683] device (tap0bd20b9f-20): carrier: link connected
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.780 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dec0e29f-6d65-4b76-8f0e-91b580f4909d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64958853-419c-446c-99e9-b25496cfaf0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 30796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401366, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e492a416-6418-4917-88ed-a88b34d7a2c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:5ca4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714035, 'tstamp': 714035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401367, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.852 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58b9313f-a5cf-4088-a880-094c56fc2fd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 30796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401368, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be80ea82-0345-47d8-8a59-7a7668c5550f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.966 254096 DEBUG nova.compute.manager [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.966 254096 DEBUG oslo_concurrency.lockutils [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.966 254096 DEBUG oslo_concurrency.lockutils [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.967 254096 DEBUG oslo_concurrency.lockutils [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.967 254096 DEBUG nova.compute.manager [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Processing event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.974 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d256a949-817c-4625-848f-0d3454493b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.977 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.977 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.978 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd20b9f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:22 np0005535469 NetworkManager[48891]: <info>  [1764090682.9812] manager: (tap0bd20b9f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Nov 25 12:11:22 np0005535469 kernel: tap0bd20b9f-20: entered promiscuous mode
Nov 25 12:11:22 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bd20b9f-20, col_values=(('external_ids', {'iface-id': '6c3d14c5-0ace-4cc3-828c-1d4061ad3097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:22 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:22Z|01408|binding|INFO|Releasing lport 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 from this chassis (sb_readonly=0)
Nov 25 12:11:22 np0005535469 nova_compute[254092]: 2025-11-25 17:11:22.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.000 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.001 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c4daa3-2a00-44bb-8c4a-5d915c7e010d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.003 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.pid.haproxy
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:11:23 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.003 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'env', 'PROCESS_TAG=haproxy-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090683.3952062, 2f4d2580-5acd-4693-a158-926565a16fe9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.396 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Started (Lifecycle Event)#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.398 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.401 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.405 254096 INFO nova.virt.libvirt.driver [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance spawned successfully.#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.405 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.416 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.423 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.431 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.432 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.433 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.434 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.434 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.435 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.442 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.443 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090683.3963647, 2f4d2580-5acd-4693-a158-926565a16fe9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.444 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:11:23 np0005535469 podman[401441]: 2025-11-25 17:11:23.357455287 +0000 UTC m=+0.023035163 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.463 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.466 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090683.4003797, 2f4d2580-5acd-4693-a158-926565a16fe9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.466 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.477 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.493 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.559 254096 INFO nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 7.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.559 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.624 254096 INFO nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 8.16 seconds to build instance.#033[00m
Nov 25 12:11:23 np0005535469 podman[401441]: 2025-11-25 17:11:23.624213854 +0000 UTC m=+0.289793700 container create 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:11:23 np0005535469 nova_compute[254092]: 2025-11-25 17:11:23.650 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:23 np0005535469 systemd[1]: Started libpod-conmon-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d.scope.
Nov 25 12:11:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7a71fe41d6784c5b2181a1c90baa9c83c45304d06a23693e4f44a74edde50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:23 np0005535469 podman[401441]: 2025-11-25 17:11:23.758823536 +0000 UTC m=+0.424403402 container init 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:11:23 np0005535469 podman[401441]: 2025-11-25 17:11:23.766305421 +0000 UTC m=+0.431885267 container start 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:11:23 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : New worker (401464) forked
Nov 25 12:11:23 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : Loading success.
Nov 25 12:11:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:11:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2921b2fd-cb0a-4993-bd4c-c4d8bd1e787f does not exist
Nov 25 12:11:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ab607b29-99d7-45cd-acfe-9b955c1ad6ab does not exist
Nov 25 12:11:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 21c45035-0c76-4cb3-a241-468701f90808 does not exist
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.101 254096 DEBUG nova.compute.manager [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.103 254096 DEBUG oslo_concurrency.lockutils [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.103 254096 DEBUG oslo_concurrency.lockutils [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.104 254096 DEBUG oslo_concurrency.lockutils [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.105 254096 DEBUG nova.compute.manager [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] No waiting events found dispatching network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.105 254096 WARNING nova.compute.manager [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received unexpected event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:11:25 np0005535469 podman[401629]: 2025-11-25 17:11:25.245472902 +0000 UTC m=+0.063082442 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 12:11:25 np0005535469 podman[401628]: 2025-11-25 17:11:25.281573961 +0000 UTC m=+0.098790190 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 12:11:25 np0005535469 podman[401630]: 2025-11-25 17:11:25.29023357 +0000 UTC m=+0.102366529 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:11:25 np0005535469 nova_compute[254092]: 2025-11-25 17:11:25.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:11:25 np0005535469 podman[401804]: 2025-11-25 17:11:25.796714221 +0000 UTC m=+0.062909276 container create fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:11:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:11:25 np0005535469 podman[401804]: 2025-11-25 17:11:25.755935223 +0000 UTC m=+0.022130298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:11:25 np0005535469 systemd[1]: Started libpod-conmon-fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381.scope.
Nov 25 12:11:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:25 np0005535469 podman[401804]: 2025-11-25 17:11:25.99539121 +0000 UTC m=+0.261586295 container init fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:11:26 np0005535469 podman[401804]: 2025-11-25 17:11:26.006106494 +0000 UTC m=+0.272301549 container start fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:11:26 np0005535469 gifted_napier[401820]: 167 167
Nov 25 12:11:26 np0005535469 systemd[1]: libpod-fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381.scope: Deactivated successfully.
Nov 25 12:11:26 np0005535469 podman[401804]: 2025-11-25 17:11:26.028288893 +0000 UTC m=+0.294483998 container attach fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:11:26 np0005535469 podman[401804]: 2025-11-25 17:11:26.02929448 +0000 UTC m=+0.295489555 container died fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:11:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-be0216e74757715ccf89ea27e1a32bdd00b715589609e850eb5f9f2f9e6d27d0-merged.mount: Deactivated successfully.
Nov 25 12:11:26 np0005535469 podman[401804]: 2025-11-25 17:11:26.219113267 +0000 UTC m=+0.485308332 container remove fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:11:26 np0005535469 systemd[1]: libpod-conmon-fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381.scope: Deactivated successfully.
Nov 25 12:11:26 np0005535469 podman[401846]: 2025-11-25 17:11:26.370412836 +0000 UTC m=+0.020856762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:11:26 np0005535469 podman[401846]: 2025-11-25 17:11:26.482502341 +0000 UTC m=+0.132946247 container create 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:11:26 np0005535469 systemd[1]: Started libpod-conmon-75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575.scope.
Nov 25 12:11:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:26 np0005535469 podman[401846]: 2025-11-25 17:11:26.593577118 +0000 UTC m=+0.244021054 container init 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:11:26 np0005535469 podman[401846]: 2025-11-25 17:11:26.601420473 +0000 UTC m=+0.251864379 container start 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:11:26 np0005535469 podman[401846]: 2025-11-25 17:11:26.612836666 +0000 UTC m=+0.263280592 container attach 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:11:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:11:26 np0005535469 NetworkManager[48891]: <info>  [1764090686.7659] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Nov 25 12:11:26 np0005535469 nova_compute[254092]: 2025-11-25 17:11:26.765 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:11:26 np0005535469 NetworkManager[48891]: <info>  [1764090686.7691] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Nov 25 12:11:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:26Z|01409|binding|INFO|Releasing lport 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 from this chassis (sb_readonly=0)
Nov 25 12:11:26 np0005535469 nova_compute[254092]: 2025-11-25 17:11:26.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:11:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:26Z|01410|binding|INFO|Releasing lport 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 from this chassis (sb_readonly=0)
Nov 25 12:11:26 np0005535469 nova_compute[254092]: 2025-11-25 17:11:26.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:11:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:27.122 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:11:27 np0005535469 nova_compute[254092]: 2025-11-25 17:11:27.123 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:11:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:27.124 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 12:11:27 np0005535469 nova_compute[254092]: 2025-11-25 17:11:27.327 254096 DEBUG nova.compute.manager [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:11:27 np0005535469 nova_compute[254092]: 2025-11-25 17:11:27.328 254096 DEBUG nova.compute.manager [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing instance network info cache due to event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:11:27 np0005535469 nova_compute[254092]: 2025-11-25 17:11:27.328 254096 DEBUG oslo_concurrency.lockutils [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:11:27 np0005535469 nova_compute[254092]: 2025-11-25 17:11:27.329 254096 DEBUG oslo_concurrency.lockutils [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:11:27 np0005535469 nova_compute[254092]: 2025-11-25 17:11:27.329 254096 DEBUG nova.network.neutron [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:11:27 np0005535469 sad_jackson[401863]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:11:27 np0005535469 sad_jackson[401863]: --> relative data size: 1.0
Nov 25 12:11:27 np0005535469 sad_jackson[401863]: --> All data devices are unavailable
Nov 25 12:11:27 np0005535469 systemd[1]: libpod-75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575.scope: Deactivated successfully.
Nov 25 12:11:27 np0005535469 podman[401846]: 2025-11-25 17:11:27.645064018 +0000 UTC m=+1.295507924 container died 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:11:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d-merged.mount: Deactivated successfully.
Nov 25 12:11:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:11:28 np0005535469 podman[401846]: 2025-11-25 17:11:28.787114642 +0000 UTC m=+2.437558538 container remove 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:11:28 np0005535469 systemd[1]: libpod-conmon-75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575.scope: Deactivated successfully.
Nov 25 12:11:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:29 np0005535469 nova_compute[254092]: 2025-11-25 17:11:29.041 254096 DEBUG nova.network.neutron [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated VIF entry in instance network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:11:29 np0005535469 nova_compute[254092]: 2025-11-25 17:11:29.043 254096 DEBUG nova.network.neutron [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:11:29 np0005535469 nova_compute[254092]: 2025-11-25 17:11:29.070 254096 DEBUG oslo_concurrency.lockutils [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.468688706 +0000 UTC m=+0.047531015 container create c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:11:29 np0005535469 systemd[1]: Started libpod-conmon-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope.
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.443209417 +0000 UTC m=+0.022051726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:11:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.586512277 +0000 UTC m=+0.165354596 container init c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.598164637 +0000 UTC m=+0.177006936 container start c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:11:29 np0005535469 great_proskuriakova[402063]: 167 167
Nov 25 12:11:29 np0005535469 systemd[1]: libpod-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope: Deactivated successfully.
Nov 25 12:11:29 np0005535469 conmon[402063]: conmon c61c298980aac0384919 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope/container/memory.events
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.696702949 +0000 UTC m=+0.275545248 container attach c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.698358785 +0000 UTC m=+0.277201074 container died c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:11:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d8e2ed412e7672d150f73c16f262fe6ec7d5ed81b81c0e13d4ca701a77caeea0-merged.mount: Deactivated successfully.
Nov 25 12:11:29 np0005535469 podman[402047]: 2025-11-25 17:11:29.98146087 +0000 UTC m=+0.560303179 container remove c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:11:30 np0005535469 systemd[1]: libpod-conmon-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope: Deactivated successfully.
Nov 25 12:11:30 np0005535469 podman[402087]: 2025-11-25 17:11:30.248309969 +0000 UTC m=+0.096695123 container create f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:11:30 np0005535469 podman[402087]: 2025-11-25 17:11:30.18126156 +0000 UTC m=+0.029646674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:11:30 np0005535469 systemd[1]: Started libpod-conmon-f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288.scope.
Nov 25 12:11:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:30 np0005535469 podman[402087]: 2025-11-25 17:11:30.446894306 +0000 UTC m=+0.295279440 container init f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:11:30 np0005535469 podman[402087]: 2025-11-25 17:11:30.453925979 +0000 UTC m=+0.302311093 container start f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:11:30 np0005535469 nova_compute[254092]: 2025-11-25 17:11:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:30 np0005535469 podman[402087]: 2025-11-25 17:11:30.509077751 +0000 UTC m=+0.357462895 container attach f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:11:30 np0005535469 nova_compute[254092]: 2025-11-25 17:11:30.622 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:11:30 np0005535469 nova_compute[254092]: 2025-11-25 17:11:30.768 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:31 np0005535469 busy_lamport[402104]: {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:    "0": [
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:        {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "devices": [
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "/dev/loop3"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            ],
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_name": "ceph_lv0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_size": "21470642176",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "name": "ceph_lv0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "tags": {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cluster_name": "ceph",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.crush_device_class": "",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.encrypted": "0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osd_id": "0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.type": "block",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.vdo": "0"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            },
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "type": "block",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "vg_name": "ceph_vg0"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:        }
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:    ],
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:    "1": [
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:        {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "devices": [
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "/dev/loop4"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            ],
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_name": "ceph_lv1",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_size": "21470642176",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "name": "ceph_lv1",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "tags": {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cluster_name": "ceph",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.crush_device_class": "",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.encrypted": "0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osd_id": "1",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.type": "block",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.vdo": "0"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            },
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "type": "block",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "vg_name": "ceph_vg1"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:        }
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:    ],
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:    "2": [
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:        {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "devices": [
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "/dev/loop5"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            ],
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_name": "ceph_lv2",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_size": "21470642176",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "name": "ceph_lv2",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "tags": {
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.cluster_name": "ceph",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.crush_device_class": "",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.encrypted": "0",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osd_id": "2",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.type": "block",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:                "ceph.vdo": "0"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            },
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "type": "block",
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:            "vg_name": "ceph_vg2"
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:        }
Nov 25 12:11:31 np0005535469 busy_lamport[402104]:    ]
Nov 25 12:11:31 np0005535469 busy_lamport[402104]: }
Nov 25 12:11:31 np0005535469 systemd[1]: libpod-f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288.scope: Deactivated successfully.
Nov 25 12:11:31 np0005535469 podman[402087]: 2025-11-25 17:11:31.198228293 +0000 UTC m=+1.046613407 container died f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:11:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6-merged.mount: Deactivated successfully.
Nov 25 12:11:31 np0005535469 podman[402087]: 2025-11-25 17:11:31.485304288 +0000 UTC m=+1.333689442 container remove f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:11:31 np0005535469 systemd[1]: libpod-conmon-f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288.scope: Deactivated successfully.
Nov 25 12:11:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:11:32.126 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.133782024 +0000 UTC m=+0.079094450 container create 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.074766215 +0000 UTC m=+0.020078661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:11:32 np0005535469 systemd[1]: Started libpod-conmon-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope.
Nov 25 12:11:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.296102976 +0000 UTC m=+0.241415442 container init 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.303569221 +0000 UTC m=+0.248881647 container start 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:11:32 np0005535469 naughty_dubinsky[402285]: 167 167
Nov 25 12:11:32 np0005535469 systemd[1]: libpod-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope: Deactivated successfully.
Nov 25 12:11:32 np0005535469 conmon[402285]: conmon 4827954ce971dbac5cc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope/container/memory.events
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.4632101 +0000 UTC m=+0.408522546 container attach 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.464529416 +0000 UTC m=+0.409841842 container died 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:11:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-782353a02f82c42a089144d0c6221e036e2d130c7d22e8a6068d76d656e2a789-merged.mount: Deactivated successfully.
Nov 25 12:11:32 np0005535469 podman[402268]: 2025-11-25 17:11:32.685843316 +0000 UTC m=+0.631155742 container remove 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:11:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 111 KiB/s wr, 74 op/s
Nov 25 12:11:32 np0005535469 systemd[1]: libpod-conmon-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope: Deactivated successfully.
Nov 25 12:11:32 np0005535469 podman[402309]: 2025-11-25 17:11:32.841819694 +0000 UTC m=+0.026533938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:11:32 np0005535469 podman[402309]: 2025-11-25 17:11:32.956059238 +0000 UTC m=+0.140773452 container create e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:11:33 np0005535469 systemd[1]: Started libpod-conmon-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope.
Nov 25 12:11:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:11:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:11:33 np0005535469 podman[402309]: 2025-11-25 17:11:33.110079062 +0000 UTC m=+0.294793276 container init e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:11:33 np0005535469 podman[402309]: 2025-11-25 17:11:33.120273632 +0000 UTC m=+0.304987856 container start e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:11:33 np0005535469 podman[402309]: 2025-11-25 17:11:33.135488029 +0000 UTC m=+0.320202263 container attach e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:11:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:34 np0005535469 epic_williams[402327]: {
Nov 25 12:11:34 np0005535469 epic_williams[402327]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "osd_id": 1,
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "type": "bluestore"
Nov 25 12:11:34 np0005535469 epic_williams[402327]:    },
Nov 25 12:11:34 np0005535469 epic_williams[402327]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "osd_id": 2,
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "type": "bluestore"
Nov 25 12:11:34 np0005535469 epic_williams[402327]:    },
Nov 25 12:11:34 np0005535469 epic_williams[402327]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "osd_id": 0,
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:11:34 np0005535469 epic_williams[402327]:        "type": "bluestore"
Nov 25 12:11:34 np0005535469 epic_williams[402327]:    }
Nov 25 12:11:34 np0005535469 epic_williams[402327]: }
Nov 25 12:11:34 np0005535469 systemd[1]: libpod-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope: Deactivated successfully.
Nov 25 12:11:34 np0005535469 podman[402309]: 2025-11-25 17:11:34.154936831 +0000 UTC m=+1.339651045 container died e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:11:34 np0005535469 systemd[1]: libpod-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope: Consumed 1.040s CPU time.
Nov 25 12:11:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5-merged.mount: Deactivated successfully.
Nov 25 12:11:34 np0005535469 podman[402309]: 2025-11-25 17:11:34.324676417 +0000 UTC m=+1.509390631 container remove e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 12:11:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:11:34 np0005535469 systemd[1]: libpod-conmon-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope: Deactivated successfully.
Nov 25 12:11:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:11:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:11:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:11:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev bf102b83-5738-44bb-abc9-4eb0cae743b9 does not exist
Nov 25 12:11:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev aa095296-2ee5-4337-97c4-843889be8525 does not exist
Nov 25 12:11:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:11:35 np0005535469 nova_compute[254092]: 2025-11-25 17:11:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:11:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:11:35 np0005535469 nova_compute[254092]: 2025-11-25 17:11:35.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:35 np0005535469 nova_compute[254092]: 2025-11-25 17:11:35.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 92 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 456 KiB/s wr, 84 op/s
Nov 25 12:11:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:11:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 11K writes, 55K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1360 writes, 6176 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.67 MB, 0.01 MB/s
Interval WAL: 1360 writes, 1360 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     29.9      2.15              0.21        37    0.058       0      0       0.0       0.0
  L6      1/0    8.02 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    110.5     92.6      3.12              0.82        36    0.087    220K    20K       0.0       0.0
 Sum      1/0    8.02 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5     65.5     67.0      5.27              1.03        73    0.072    220K    20K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6    120.8    121.2      0.43              0.15        10    0.043     39K   2537       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    110.5     92.6      3.12              0.82        36    0.087    220K    20K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.5      2.10              0.21        36    0.058       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.1 total, 600.0 interval
Flush(GB): cumulative 0.063, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.34 GB write, 0.07 MB/s write, 0.34 GB read, 0.07 MB/s read, 5.3 seconds
Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 40.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000381 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2663,39.11 MB,12.8668%) FilterBlock(74,620.42 KB,0.199303%) IndexBlock(74,1.00 MB,0.329073%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 12:11:37 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 12:11:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:38Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:3c:b2 10.100.0.11
Nov 25 12:11:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:38Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:3c:b2 10.100.0.11
Nov 25 12:11:38 np0005535469 nova_compute[254092]: 2025-11-25 17:11:38.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:38 np0005535469 nova_compute[254092]: 2025-11-25 17:11:38.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:11:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 92 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 443 KiB/s wr, 10 op/s
Nov 25 12:11:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:11:40
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes']
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:11:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 119 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 25 12:11:40 np0005535469 nova_compute[254092]: 2025-11-25 17:11:40.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:11:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375603128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.017 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.117 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.118 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.335 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.336 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3457MB free_disk=59.94306182861328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.336 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.336 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.415 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2f4d2580-5acd-4693-a158-926565a16fe9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.459 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:11:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:11:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027102479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.993 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:11:41 np0005535469 nova_compute[254092]: 2025-11-25 17:11:41.999 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:11:42 np0005535469 nova_compute[254092]: 2025-11-25 17:11:42.014 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:11:42 np0005535469 nova_compute[254092]: 2025-11-25 17:11:42.059 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:11:42 np0005535469 nova_compute[254092]: 2025-11-25 17:11:42.059 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:11:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:11:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:11:45 np0005535469 nova_compute[254092]: 2025-11-25 17:11:45.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:45 np0005535469 nova_compute[254092]: 2025-11-25 17:11:45.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.062 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.062 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.062 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:11:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.784 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.784 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:11:46 np0005535469 nova_compute[254092]: 2025-11-25 17:11:46.784 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:11:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.7 MiB/s wr, 49 op/s
Nov 25 12:11:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:49 np0005535469 nova_compute[254092]: 2025-11-25 17:11:49.156 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:11:49 np0005535469 nova_compute[254092]: 2025-11-25 17:11:49.172 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:11:49 np0005535469 nova_compute[254092]: 2025-11-25 17:11:49.172 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:11:49 np0005535469 nova_compute[254092]: 2025-11-25 17:11:49.173 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:50 np0005535469 nova_compute[254092]: 2025-11-25 17:11:50.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.7 MiB/s wr, 49 op/s
Nov 25 12:11:50 np0005535469 nova_compute[254092]: 2025-11-25 17:11:50.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:11:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:11:52 np0005535469 nova_compute[254092]: 2025-11-25 17:11:52.602 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:11:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 71 KiB/s wr, 17 op/s
Nov 25 12:11:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:11:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 12:11:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:11:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3430701093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:11:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:11:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3430701093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:11:55 np0005535469 nova_compute[254092]: 2025-11-25 17:11:55.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:55 np0005535469 podman[402467]: 2025-11-25 17:11:55.673804876 +0000 UTC m=+0.086376629 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:11:55 np0005535469 podman[402468]: 2025-11-25 17:11:55.676478179 +0000 UTC m=+0.072479818 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 12:11:55 np0005535469 podman[402469]: 2025-11-25 17:11:55.740736092 +0000 UTC m=+0.140367581 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:11:55 np0005535469 nova_compute[254092]: 2025-11-25 17:11:55.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:11:56 np0005535469 ovn_controller[153477]: 2025-11-25T17:11:56Z|01411|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 12:11:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 12:11:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:11:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:00 np0005535469 nova_compute[254092]: 2025-11-25 17:12:00.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:12:00 np0005535469 nova_compute[254092]: 2025-11-25 17:12:00.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.055 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.056 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.072 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.156 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.157 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.166 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.166 254096 INFO nova.compute.claims [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.266 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:12:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278741482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.791 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.799 254096 DEBUG nova.compute.provider_tree [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.813 254096 DEBUG nova.scheduler.client.report [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.834 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.835 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.871 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.872 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.886 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.899 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:12:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.981 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.983 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:12:03 np0005535469 nova_compute[254092]: 2025-11-25 17:12:03.983 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Creating image(s)#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.010 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.035 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.058 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.062 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.112 254096 DEBUG nova.policy [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.163 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.164 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.165 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.165 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.193 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.197 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a8194956-04fe-46d6-9b07-63486afc3c7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.502 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a8194956-04fe-46d6-9b07-63486afc3c7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.572 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.672 254096 DEBUG nova.objects.instance [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid a8194956-04fe-46d6-9b07-63486afc3c7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.701 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.701 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Ensure instance console log exists: /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.702 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.702 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.703 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:12:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:04.831 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:12:04 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:04.833 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:04 np0005535469 nova_compute[254092]: 2025-11-25 17:12:04.973 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Successfully created port: 44750fec-28de-4cf5-8393-333676bb4fed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.652 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Successfully updated port: 44750fec-28de-4cf5-8393-333676bb4fed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.671 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.761 254096 DEBUG nova.compute.manager [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-changed-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.762 254096 DEBUG nova.compute.manager [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing instance network info cache due to event network-changed-44750fec-28de-4cf5-8393-333676bb4fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.762 254096 DEBUG oslo_concurrency.lockutils [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:12:05 np0005535469 nova_compute[254092]: 2025-11-25 17:12:05.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:06 np0005535469 nova_compute[254092]: 2025-11-25 17:12:06.256 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:12:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 167 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.933 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.954 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.954 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance network_info: |[{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.955 254096 DEBUG oslo_concurrency.lockutils [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.955 254096 DEBUG nova.network.neutron [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.959 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start _get_guest_xml network_info=[{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.963 254096 WARNING nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.972 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.972 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.980 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.981 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.981 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.981 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.982 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.982 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.983 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.983 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.983 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.984 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.984 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.984 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.985 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.985 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:12:07 np0005535469 nova_compute[254092]: 2025-11-25 17:12:07.988 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:12:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789845615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.459 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.480 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.484 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 167 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:12:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:12:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738469575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.910 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.911 254096 DEBUG nova.virt.libvirt.vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:12:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1415313673',display_name='tempest-TestGettingAddress-server-1415313673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1415313673',id=136,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzl12pdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:12:03Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a8194956-04fe-46d6-9b07-63486afc3c7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.912 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.912 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.914 254096 DEBUG nova.objects.instance [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8194956-04fe-46d6-9b07-63486afc3c7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.925 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <uuid>a8194956-04fe-46d6-9b07-63486afc3c7d</uuid>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <name>instance-00000088</name>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1415313673</nova:name>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:12:07</nova:creationTime>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <nova:port uuid="44750fec-28de-4cf5-8393-333676bb4fed">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe23:af6f" ipVersion="6"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <entry name="serial">a8194956-04fe-46d6-9b07-63486afc3c7d</entry>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <entry name="uuid">a8194956-04fe-46d6-9b07-63486afc3c7d</entry>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a8194956-04fe-46d6-9b07-63486afc3c7d_disk">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:23:af:6f"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <target dev="tap44750fec-28"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/console.log" append="off"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:12:08 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:12:08 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:12:08 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:12:08 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Preparing to wait for external event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.927 254096 DEBUG nova.virt.libvirt.vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:12:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1415313673',display_name='tempest-TestGettingAddress-server-1415313673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1415313673',id=136,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzl12pdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:12:03Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a8194956-04fe-46d6-9b07-63486afc3c7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.927 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.928 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.928 254096 DEBUG os_vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.929 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.929 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.932 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44750fec-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.932 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44750fec-28, col_values=(('external_ids', {'iface-id': '44750fec-28de-4cf5-8393-333676bb4fed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:af:6f', 'vm-uuid': 'a8194956-04fe-46d6-9b07-63486afc3c7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:08 np0005535469 NetworkManager[48891]: <info>  [1764090728.9347] manager: (tap44750fec-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.940 254096 INFO os_vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28')#033[00m
Nov 25 12:12:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.984 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.984 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.985 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:23:af:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:12:08 np0005535469 nova_compute[254092]: 2025-11-25 17:12:08.986 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Using config drive#033[00m
Nov 25 12:12:09 np0005535469 nova_compute[254092]: 2025-11-25 17:12:09.005 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:09.834 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:09 np0005535469 nova_compute[254092]: 2025-11-25 17:12:09.950 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Creating config drive at /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config#033[00m
Nov 25 12:12:09 np0005535469 nova_compute[254092]: 2025-11-25 17:12:09.954 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xksatra execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.088 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xksatra" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.110 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.114 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.246 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.247 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deleting local config drive /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config because it was imported into RBD.#033[00m
Nov 25 12:12:10 np0005535469 kernel: tap44750fec-28: entered promiscuous mode
Nov 25 12:12:10 np0005535469 NetworkManager[48891]: <info>  [1764090730.2918] manager: (tap44750fec-28): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:10Z|01412|binding|INFO|Claiming lport 44750fec-28de-4cf5-8393-333676bb4fed for this chassis.
Nov 25 12:12:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:10Z|01413|binding|INFO|44750fec-28de-4cf5-8393-333676bb4fed: Claiming fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.298 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], port_security=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe23:af6f/64', 'neutron:device_id': 'a8194956-04fe-46d6-9b07-63486afc3c7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=44750fec-28de-4cf5-8393-333676bb4fed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.299 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 44750fec-28de-4cf5-8393-333676bb4fed in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 bound to our chassis#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.301 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.321 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b842008c-edb3-44a3-9025-5966769d7722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:10 np0005535469 systemd-udevd[402854]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:12:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:10Z|01414|binding|INFO|Setting lport 44750fec-28de-4cf5-8393-333676bb4fed ovn-installed in OVS
Nov 25 12:12:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:10Z|01415|binding|INFO|Setting lport 44750fec-28de-4cf5-8393-333676bb4fed up in Southbound
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:10 np0005535469 NetworkManager[48891]: <info>  [1764090730.3393] device (tap44750fec-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:12:10 np0005535469 NetworkManager[48891]: <info>  [1764090730.3403] device (tap44750fec-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:12:10 np0005535469 systemd-machined[216343]: New machine qemu-170-instance-00000088.
Nov 25 12:12:10 np0005535469 systemd[1]: Started Virtual Machine qemu-170-instance-00000088.
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.356 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec0ea2e-60bb-4441-97e7-fb544a5e3511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.360 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[143ef667-6884-4f44-aa95-a74523d5ad78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.391 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29905af9-eec5-4269-8025-36ef7459ba98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.409 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[56eef1d2-cbea-4855-a620-dd51dc40131b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 31125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402863, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.423 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16e31a5e-2824-429d-b13d-a28ce536dcce]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714051, 'tstamp': 714051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402868, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714055, 'tstamp': 714055}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402868, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.425 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.429 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd20b9f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.429 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.430 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bd20b9f-20, col_values=(('external_ids', {'iface-id': '6c3d14c5-0ace-4cc3-828c-1d4061ad3097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.430 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:12:10 np0005535469 nova_compute[254092]: 2025-11-25 17:12:10.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.000 254096 DEBUG nova.compute.manager [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.000 254096 DEBUG oslo_concurrency.lockutils [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.002 254096 DEBUG oslo_concurrency.lockutils [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.002 254096 DEBUG oslo_concurrency.lockutils [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.002 254096 DEBUG nova.compute.manager [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Processing event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.003 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090730.9963694, a8194956-04fe-46d6-9b07-63486afc3c7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.004 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Started (Lifecycle Event)#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.007 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.011 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.015 254096 INFO nova.virt.libvirt.driver [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance spawned successfully.#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.015 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.021 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.026 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.042 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.042 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.043 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.044 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.045 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.045 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.052 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.053 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090731.0018113, a8194956-04fe-46d6-9b07-63486afc3c7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.053 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.090 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.093 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090731.0107813, a8194956-04fe-46d6-9b07-63486afc3c7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.093 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.112 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.121 254096 INFO nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 7.14 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.121 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.141 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.176 254096 INFO nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 8.05 seconds to build instance.#033[00m
Nov 25 12:12:11 np0005535469 nova_compute[254092]: 2025-11-25 17:12:11.189 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:12 np0005535469 nova_compute[254092]: 2025-11-25 17:12:12.074 254096 DEBUG nova.network.neutron [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updated VIF entry in instance network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:12:12 np0005535469 nova_compute[254092]: 2025-11-25 17:12:12.074 254096 DEBUG nova.network.neutron [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:12 np0005535469 nova_compute[254092]: 2025-11-25 17:12:12.086 254096 DEBUG oslo_concurrency.lockutils [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:12:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG nova.compute.manager [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG oslo_concurrency.lockutils [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG oslo_concurrency.lockutils [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG oslo_concurrency.lockutils [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.216 254096 DEBUG nova.compute.manager [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] No waiting events found dispatching network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.216 254096 WARNING nova.compute.manager [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received unexpected event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed for instance with vm_state active and task_state None.#033[00m
Nov 25 12:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:13.650 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:13 np0005535469 nova_compute[254092]: 2025-11-25 17:12:13.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 12:12:15 np0005535469 nova_compute[254092]: 2025-11-25 17:12:15.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:16 np0005535469 nova_compute[254092]: 2025-11-25 17:12:16.016 254096 DEBUG nova.compute.manager [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-changed-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:16 np0005535469 nova_compute[254092]: 2025-11-25 17:12:16.017 254096 DEBUG nova.compute.manager [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing instance network info cache due to event network-changed-44750fec-28de-4cf5-8393-333676bb4fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:12:16 np0005535469 nova_compute[254092]: 2025-11-25 17:12:16.017 254096 DEBUG oslo_concurrency.lockutils [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:12:16 np0005535469 nova_compute[254092]: 2025-11-25 17:12:16.017 254096 DEBUG oslo_concurrency.lockutils [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:12:16 np0005535469 nova_compute[254092]: 2025-11-25 17:12:16.018 254096 DEBUG nova.network.neutron [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:12:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 12:12:17 np0005535469 nova_compute[254092]: 2025-11-25 17:12:17.619 254096 DEBUG nova.network.neutron [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updated VIF entry in instance network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:12:17 np0005535469 nova_compute[254092]: 2025-11-25 17:12:17.620 254096 DEBUG nova.network.neutron [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:17 np0005535469 nova_compute[254092]: 2025-11-25 17:12:17.640 254096 DEBUG oslo_concurrency.lockutils [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:12:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:12:18 np0005535469 nova_compute[254092]: 2025-11-25 17:12:18.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:20 np0005535469 nova_compute[254092]: 2025-11-25 17:12:20.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:12:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 12:12:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:23Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:af:6f 10.100.0.9
Nov 25 12:12:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:23Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:af:6f 10.100.0.9
Nov 25 12:12:23 np0005535469 nova_compute[254092]: 2025-11-25 17:12:23.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Nov 25 12:12:25 np0005535469 nova_compute[254092]: 2025-11-25 17:12:25.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:26 np0005535469 podman[402912]: 2025-11-25 17:12:26.661669545 +0000 UTC m=+0.066269979 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 12:12:26 np0005535469 podman[402913]: 2025-11-25 17:12:26.671698631 +0000 UTC m=+0.073722884 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 12:12:26 np0005535469 podman[402914]: 2025-11-25 17:12:26.744612941 +0000 UTC m=+0.133762271 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 12:12:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 25 12:12:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:12:28 np0005535469 nova_compute[254092]: 2025-11-25 17:12:28.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:30 np0005535469 nova_compute[254092]: 2025-11-25 17:12:30.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:12:30 np0005535469 nova_compute[254092]: 2025-11-25 17:12:30.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:12:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.272 254096 DEBUG nova.compute.manager [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-changed-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.272 254096 DEBUG nova.compute.manager [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing instance network info cache due to event network-changed-44750fec-28de-4cf5-8393-333676bb4fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.273 254096 DEBUG oslo_concurrency.lockutils [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.273 254096 DEBUG oslo_concurrency.lockutils [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.273 254096 DEBUG nova.network.neutron [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.337 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.338 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.339 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.339 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.339 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.340 254096 INFO nova.compute.manager [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Terminating instance#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.341 254096 DEBUG nova.compute.manager [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:12:33 np0005535469 kernel: tap44750fec-28 (unregistering): left promiscuous mode
Nov 25 12:12:33 np0005535469 NetworkManager[48891]: <info>  [1764090753.3930] device (tap44750fec-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:12:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:33Z|01416|binding|INFO|Releasing lport 44750fec-28de-4cf5-8393-333676bb4fed from this chassis (sb_readonly=0)
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:33Z|01417|binding|INFO|Setting lport 44750fec-28de-4cf5-8393-333676bb4fed down in Southbound
Nov 25 12:12:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:33Z|01418|binding|INFO|Removing iface tap44750fec-28 ovn-installed in OVS
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.477 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], port_security=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe23:af6f/64', 'neutron:device_id': 'a8194956-04fe-46d6-9b07-63486afc3c7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=44750fec-28de-4cf5-8393-333676bb4fed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.478 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 44750fec-28de-4cf5-8393-333676bb4fed in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 unbound from our chassis#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.480 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.497 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de44ae6a-8949-4b78-a7b1-ec2473c3de47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:33 np0005535469 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 25 12:12:33 np0005535469 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Consumed 13.264s CPU time.
Nov 25 12:12:33 np0005535469 systemd-machined[216343]: Machine qemu-170-instance-00000088 terminated.
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.532 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29d8fa50-2d55-48b7-9f41-269c6233db3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.535 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28322817-6007-4bd4-94ec-25c7b3135616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.572 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[48489ac3-917b-4e88-b6f1-cae571d37f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.578 254096 INFO nova.virt.libvirt.driver [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance destroyed successfully.#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.578 254096 DEBUG nova.objects.instance [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid a8194956-04fe-46d6-9b07-63486afc3c7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.593 254096 DEBUG nova.virt.libvirt.vif [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:12:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1415313673',display_name='tempest-TestGettingAddress-server-1415313673',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1415313673',id=136,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:12:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzl12pdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:12:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a8194956-04fe-46d6-9b07-63486afc3c7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.593 254096 DEBUG nova.network.os_vif_util [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8304611-2812-4700-b2e8-321ee7f82b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 31125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402997, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.594 254096 DEBUG nova.network.os_vif_util [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.595 254096 DEBUG os_vif [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.597 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44750fec-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.604 254096 INFO os_vif [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28')#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0d2d18-cb3e-486d-a4cc-b58f797990e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714051, 'tstamp': 714051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402998, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714055, 'tstamp': 714055}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402998, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.610 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.612 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd20b9f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bd20b9f-20, col_values=(('external_ids', {'iface-id': '6c3d14c5-0ace-4cc3-828c-1d4061ad3097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:33 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:12:33 np0005535469 nova_compute[254092]: 2025-11-25 17:12:33.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.108 254096 INFO nova.virt.libvirt.driver [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deleting instance files /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d_del#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.110 254096 INFO nova.virt.libvirt.driver [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deletion of /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d_del complete#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.162 254096 INFO nova.compute.manager [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.163 254096 DEBUG oslo.service.loopingcall [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.164 254096 DEBUG nova.compute.manager [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.164 254096 DEBUG nova.network.neutron [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.614 254096 DEBUG nova.network.neutron [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updated VIF entry in instance network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.616 254096 DEBUG nova.network.neutron [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.636 254096 DEBUG oslo_concurrency.lockutils [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:12:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.785 254096 DEBUG nova.network.neutron [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.800 254096 INFO nova.compute.manager [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.837 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.837 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.871 254096 DEBUG nova.compute.manager [req-81fba276-ef05-4b67-8926-617dfc488364 req-f9d19b09-749d-4c44-9eec-036f4a0ed425 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-deleted-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:34 np0005535469 nova_compute[254092]: 2025-11-25 17:12:34.901 254096 DEBUG oslo_concurrency.processutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2574910987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.348 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-unplugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.349 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.349 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] No waiting events found dispatching network-vif-unplugged-44750fec-28de-4cf5-8393-333676bb4fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 WARNING nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received unexpected event network-vif-unplugged-44750fec-28de-4cf5-8393-333676bb4fed for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] No waiting events found dispatching network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 WARNING nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received unexpected event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.352 254096 DEBUG oslo_concurrency.processutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.359 254096 DEBUG nova.compute.provider_tree [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:12:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f29818a6-c1d7-430d-be6a-30f7908272a6 does not exist
Nov 25 12:12:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 398e8e5e-3072-43b2-96fb-ec3da7f38744 does not exist
Nov 25 12:12:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 80bff87b-16b6-4ba6-b8b6-1f09fb9ebe9e does not exist
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.376 254096 DEBUG nova.scheduler.client.report [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.396 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.422 254096 INFO nova.scheduler.client.report [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance a8194956-04fe-46d6-9b07-63486afc3c7d#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.474 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:35 np0005535469 nova_compute[254092]: 2025-11-25 17:12:35.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:12:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.070341098 +0000 UTC m=+0.065539189 container create 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:12:36 np0005535469 systemd[1]: Started libpod-conmon-43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50.scope.
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.047229183 +0000 UTC m=+0.042427314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:12:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.181876737 +0000 UTC m=+0.177074858 container init 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.198554534 +0000 UTC m=+0.193752635 container start 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.203722786 +0000 UTC m=+0.198920927 container attach 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:12:36 np0005535469 crazy_jemison[403329]: 167 167
Nov 25 12:12:36 np0005535469 systemd[1]: libpod-43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50.scope: Deactivated successfully.
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.212067585 +0000 UTC m=+0.207265696 container died 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:12:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6dd6194bad6761cc2f6f3b425de1aa6286d9c4c17e2188b5513f8d22b915e0b8-merged.mount: Deactivated successfully.
Nov 25 12:12:36 np0005535469 podman[403312]: 2025-11-25 17:12:36.258376055 +0000 UTC m=+0.253574146 container remove 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:12:36 np0005535469 systemd[1]: libpod-conmon-43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50.scope: Deactivated successfully.
Nov 25 12:12:36 np0005535469 podman[403353]: 2025-11-25 17:12:36.460876249 +0000 UTC m=+0.047411581 container create 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:12:36 np0005535469 systemd[1]: Started libpod-conmon-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope.
Nov 25 12:12:36 np0005535469 podman[403353]: 2025-11-25 17:12:36.442221937 +0000 UTC m=+0.028757269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:12:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:12:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:36 np0005535469 podman[403353]: 2025-11-25 17:12:36.565013655 +0000 UTC m=+0.151548997 container init 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 12:12:36 np0005535469 podman[403353]: 2025-11-25 17:12:36.572560363 +0000 UTC m=+0.159095675 container start 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:36 np0005535469 podman[403353]: 2025-11-25 17:12:36.575670708 +0000 UTC m=+0.162206040 container attach 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:12:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.838 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.839 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.839 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.840 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.840 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.842 254096 INFO nova.compute.manager [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Terminating instance#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.843 254096 DEBUG nova.compute.manager [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:12:36 np0005535469 kernel: tap0d520bf2-3b (unregistering): left promiscuous mode
Nov 25 12:12:36 np0005535469 NetworkManager[48891]: <info>  [1764090756.9017] device (tap0d520bf2-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:36Z|01419|binding|INFO|Releasing lport 0d520bf2-3b97-484c-9046-b614ea281b88 from this chassis (sb_readonly=0)
Nov 25 12:12:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:36Z|01420|binding|INFO|Setting lport 0d520bf2-3b97-484c-9046-b614ea281b88 down in Southbound
Nov 25 12:12:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:12:36Z|01421|binding|INFO|Removing iface tap0d520bf2-3b ovn-installed in OVS
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG nova.compute.manager [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG nova.compute.manager [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing instance network info cache due to event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG oslo_concurrency.lockutils [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG oslo_concurrency.lockutils [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG nova.network.neutron [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:12:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.945 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], port_security=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fefa:3cb2/64', 'neutron:device_id': '2f4d2580-5acd-4693-a158-926565a16fe9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d520bf2-3b97-484c-9046-b614ea281b88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:12:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.947 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d520bf2-3b97-484c-9046-b614ea281b88 in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 unbound from our chassis#033[00m
Nov 25 12:12:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.948 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:12:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[384ccfc8-4c67-4b69-b245-e83c0f9a4ba9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.952 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 namespace which is not needed anymore#033[00m
Nov 25 12:12:36 np0005535469 nova_compute[254092]: 2025-11-25 17:12:36.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:36 np0005535469 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 25 12:12:36 np0005535469 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Consumed 16.143s CPU time.
Nov 25 12:12:36 np0005535469 systemd-machined[216343]: Machine qemu-169-instance-00000087 terminated.
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.092 254096 INFO nova.virt.libvirt.driver [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance destroyed successfully.#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.093 254096 DEBUG nova.objects.instance [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.105 254096 DEBUG nova.virt.libvirt.vif [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-318572700',display_name='tempest-TestGettingAddress-server-318572700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-318572700',id=135,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:11:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-6ty6vm7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:11:23Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=2f4d2580-5acd-4693-a158-926565a16fe9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.105 254096 DEBUG nova.network.os_vif_util [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.106 254096 DEBUG nova.network.os_vif_util [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.106 254096 DEBUG os_vif [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.109 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d520bf2-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:37 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : haproxy version is 2.8.14-c23fe91
Nov 25 12:12:37 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : path to executable is /usr/sbin/haproxy
Nov 25 12:12:37 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [WARNING]  (401462) : Exiting Master process...
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:12:37 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [WARNING]  (401462) : Exiting Master process...
Nov 25 12:12:37 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [ALERT]    (401462) : Current worker (401464) exited with code 143 (Terminated)
Nov 25 12:12:37 np0005535469 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [WARNING]  (401462) : All workers exited. Exiting... (0)
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.116 254096 INFO os_vif [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b')#033[00m
Nov 25 12:12:37 np0005535469 systemd[1]: libpod-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d.scope: Deactivated successfully.
Nov 25 12:12:37 np0005535469 podman[403398]: 2025-11-25 17:12:37.123237736 +0000 UTC m=+0.054393572 container died 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:12:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d-userdata-shm.mount: Deactivated successfully.
Nov 25 12:12:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bde7a71fe41d6784c5b2181a1c90baa9c83c45304d06a23693e4f44a74edde50-merged.mount: Deactivated successfully.
Nov 25 12:12:37 np0005535469 podman[403398]: 2025-11-25 17:12:37.208900416 +0000 UTC m=+0.140056252 container cleanup 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:12:37 np0005535469 systemd[1]: libpod-conmon-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d.scope: Deactivated successfully.
Nov 25 12:12:37 np0005535469 podman[403458]: 2025-11-25 17:12:37.305001472 +0000 UTC m=+0.064900601 container remove 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.311 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5a0951-a269-4624-b8b1-cbe6bca75f13]: (4, ('Tue Nov 25 05:12:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 (3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d)\n3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d\nTue Nov 25 05:12:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 (3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d)\n3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef35d81f-fb7c-4bc6-8d64-6b4bc6530b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.315 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.317 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 kernel: tap0bd20b9f-20: left promiscuous mode
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1560f67d-eb00-46ae-ad66-d067b24c2a32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.353 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5289313-fdc6-4a4b-a311-30ceda63ee9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.355 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a2dc19-6cb3-44de-8f71-a428c8b1ee4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.375 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dbdf4c-c08e-462a-804a-24733ed975f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714025, 'reachable_time': 30225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403476, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 systemd[1]: run-netns-ovnmeta\x2d0bd20b9f\x2d24bc\x2d4728\x2d90ff\x2d2870bebbf3f9.mount: Deactivated successfully.
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.380 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:12:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.380 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[baa12052-2e94-4a42-bfb2-856462ff6152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:12:37 np0005535469 vigilant_roentgen[403369]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:12:37 np0005535469 vigilant_roentgen[403369]: --> relative data size: 1.0
Nov 25 12:12:37 np0005535469 vigilant_roentgen[403369]: --> All data devices are unavailable
Nov 25 12:12:37 np0005535469 systemd[1]: libpod-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope: Deactivated successfully.
Nov 25 12:12:37 np0005535469 systemd[1]: libpod-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope: Consumed 1.067s CPU time.
Nov 25 12:12:37 np0005535469 podman[403353]: 2025-11-25 17:12:37.752099365 +0000 UTC m=+1.338634697 container died 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1-merged.mount: Deactivated successfully.
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.815 254096 INFO nova.virt.libvirt.driver [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deleting instance files /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9_del#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.817 254096 INFO nova.virt.libvirt.driver [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deletion of /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9_del complete#033[00m
Nov 25 12:12:37 np0005535469 podman[403353]: 2025-11-25 17:12:37.857402664 +0000 UTC m=+1.443937976 container remove 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:12:37 np0005535469 systemd[1]: libpod-conmon-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope: Deactivated successfully.
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.864 254096 INFO nova.compute.manager [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.865 254096 DEBUG oslo.service.loopingcall [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.865 254096 DEBUG nova.compute.manager [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:12:37 np0005535469 nova_compute[254092]: 2025-11-25 17:12:37.865 254096 DEBUG nova.network.neutron [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.41216955 +0000 UTC m=+0.038734714 container create 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:12:38 np0005535469 systemd[1]: Started libpod-conmon-6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf.scope.
Nov 25 12:12:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.395288077 +0000 UTC m=+0.021853281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.493233804 +0000 UTC m=+0.119798998 container init 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:12:38 np0005535469 nova_compute[254092]: 2025-11-25 17:12:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:12:38 np0005535469 nova_compute[254092]: 2025-11-25 17:12:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.501406837 +0000 UTC m=+0.127972011 container start 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.504241054 +0000 UTC m=+0.130806228 container attach 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:12:38 np0005535469 nice_antonelli[403668]: 167 167
Nov 25 12:12:38 np0005535469 systemd[1]: libpod-6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf.scope: Deactivated successfully.
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.507510255 +0000 UTC m=+0.134075439 container died 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6b612afd5f55b63cdf076e65c56fe61eae4e06409569c069a496db742b3019fa-merged.mount: Deactivated successfully.
Nov 25 12:12:38 np0005535469 podman[403652]: 2025-11-25 17:12:38.541838106 +0000 UTC m=+0.168403280 container remove 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:12:38 np0005535469 systemd[1]: libpod-conmon-6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf.scope: Deactivated successfully.
Nov 25 12:12:38 np0005535469 podman[403692]: 2025-11-25 17:12:38.751726733 +0000 UTC m=+0.077406554 container create 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:12:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 25 12:12:38 np0005535469 systemd[1]: Started libpod-conmon-4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b.scope.
Nov 25 12:12:38 np0005535469 podman[403692]: 2025-11-25 17:12:38.723684434 +0000 UTC m=+0.049364355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:12:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:38 np0005535469 podman[403692]: 2025-11-25 17:12:38.887041304 +0000 UTC m=+0.212721155 container init 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:12:38 np0005535469 podman[403692]: 2025-11-25 17:12:38.900536905 +0000 UTC m=+0.226216816 container start 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:12:38 np0005535469 podman[403692]: 2025-11-25 17:12:38.906228851 +0000 UTC m=+0.231908682 container attach 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:12:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.048 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-unplugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.049 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] No waiting events found dispatching network-vif-unplugged-0d520bf2-3b97-484c-9046-b614ea281b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-unplugged-0d520bf2-3b97-484c-9046-b614ea281b88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] No waiting events found dispatching network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.052 254096 WARNING nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received unexpected event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:12:39 np0005535469 priceless_nash[403708]: {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:    "0": [
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:        {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "devices": [
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "/dev/loop3"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            ],
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_name": "ceph_lv0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_size": "21470642176",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "name": "ceph_lv0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "tags": {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cluster_name": "ceph",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.crush_device_class": "",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.encrypted": "0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osd_id": "0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.type": "block",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.vdo": "0"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            },
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "type": "block",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "vg_name": "ceph_vg0"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:        }
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:    ],
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:    "1": [
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:        {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "devices": [
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "/dev/loop4"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            ],
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_name": "ceph_lv1",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_size": "21470642176",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "name": "ceph_lv1",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "tags": {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cluster_name": "ceph",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.crush_device_class": "",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.encrypted": "0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osd_id": "1",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.type": "block",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.vdo": "0"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            },
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "type": "block",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "vg_name": "ceph_vg1"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:        }
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:    ],
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:    "2": [
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:        {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "devices": [
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "/dev/loop5"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            ],
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_name": "ceph_lv2",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_size": "21470642176",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "name": "ceph_lv2",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "tags": {
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.cluster_name": "ceph",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.crush_device_class": "",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.encrypted": "0",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osd_id": "2",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.type": "block",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:                "ceph.vdo": "0"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            },
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "type": "block",
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:            "vg_name": "ceph_vg2"
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:        }
Nov 25 12:12:39 np0005535469 priceless_nash[403708]:    ]
Nov 25 12:12:39 np0005535469 priceless_nash[403708]: }
Nov 25 12:12:39 np0005535469 systemd[1]: libpod-4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b.scope: Deactivated successfully.
Nov 25 12:12:39 np0005535469 podman[403692]: 2025-11-25 17:12:39.723286021 +0000 UTC m=+1.048965842 container died 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:12:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356-merged.mount: Deactivated successfully.
Nov 25 12:12:39 np0005535469 podman[403692]: 2025-11-25 17:12:39.808074097 +0000 UTC m=+1.133753928 container remove 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.810 254096 DEBUG nova.network.neutron [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:39 np0005535469 systemd[1]: libpod-conmon-4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b.scope: Deactivated successfully.
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.831 254096 INFO nova.compute.manager [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 1.97 seconds to deallocate network for instance.#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.879 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.879 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:12:39 np0005535469 nova_compute[254092]: 2025-11-25 17:12:39.914 254096 DEBUG oslo_concurrency.processutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.074 254096 DEBUG nova.network.neutron [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated VIF entry in instance network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.075 254096 DEBUG nova.network.neutron [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.092 254096 DEBUG oslo_concurrency.lockutils [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:12:40
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'backups', '.mgr', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:12:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:12:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306141709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.370 254096 DEBUG oslo_concurrency.processutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.376 254096 DEBUG nova.compute.provider_tree [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.391 254096 DEBUG nova.scheduler.client.report [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.408 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.419161578 +0000 UTC m=+0.038173709 container create afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.452 254096 INFO nova.scheduler.client.report [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 2f4d2580-5acd-4693-a158-926565a16fe9#033[00m
Nov 25 12:12:40 np0005535469 systemd[1]: Started libpod-conmon-afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0.scope.
Nov 25 12:12:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.401906034 +0000 UTC m=+0.020918185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.505199798 +0000 UTC m=+0.124211949 container init afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.518471851 +0000 UTC m=+0.137483982 container start afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.522630706 +0000 UTC m=+0.141642857 container attach afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:12:40 np0005535469 pensive_kare[403908]: 167 167
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:12:40 np0005535469 systemd[1]: libpod-afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0.scope: Deactivated successfully.
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.525479354 +0000 UTC m=+0.144491485 container died afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:12:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6634ec1057330b0912cf6541a7d94e814f1b36148f4d33e93de9410263ade52c-merged.mount: Deactivated successfully.
Nov 25 12:12:40 np0005535469 podman[403892]: 2025-11-25 17:12:40.561735768 +0000 UTC m=+0.180747899 container remove afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.568 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:12:40 np0005535469 systemd[1]: libpod-conmon-afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0.scope: Deactivated successfully.
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:40 np0005535469 podman[403952]: 2025-11-25 17:12:40.726115127 +0000 UTC m=+0.043348610 container create 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 59 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 15 KiB/s wr, 55 op/s
Nov 25 12:12:40 np0005535469 systemd[1]: Started libpod-conmon-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope.
Nov 25 12:12:40 np0005535469 podman[403952]: 2025-11-25 17:12:40.706342605 +0000 UTC m=+0.023576108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:12:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:12:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:12:40 np0005535469 podman[403952]: 2025-11-25 17:12:40.826310284 +0000 UTC m=+0.143543797 container init 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:12:40 np0005535469 podman[403952]: 2025-11-25 17:12:40.83454026 +0000 UTC m=+0.151773743 container start 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:12:40 np0005535469 podman[403952]: 2025-11-25 17:12:40.837726508 +0000 UTC m=+0.154959991 container attach 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:12:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:12:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083821883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:12:40 np0005535469 nova_compute[254092]: 2025-11-25 17:12:40.981 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.129 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.132 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3652MB free_disk=59.98165512084961GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.133 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.136 254096 DEBUG nova.compute.manager [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-deleted-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.137 254096 INFO nova.compute.manager [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Neutron deleted interface 0d520bf2-3b97-484c-9046-b614ea281b88; detaching it from the instance and deleting it from the info cache
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.137 254096 DEBUG nova.network.neutron [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.139 254096 DEBUG nova.compute.manager [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Detach interface failed, port_id=0d520bf2-3b97-484c-9046-b614ea281b88, reason: Instance 2f4d2580-5acd-4693-a158-926565a16fe9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.174 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.175 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.188 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:12:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:12:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668610908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.657 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.664 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.683 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.704 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:12:41 np0005535469 nova_compute[254092]: 2025-11-25 17:12:41.705 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]: {
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "osd_id": 1,
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "type": "bluestore"
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:    },
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "osd_id": 2,
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "type": "bluestore"
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:    },
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "osd_id": 0,
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:        "type": "bluestore"
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]:    }
Nov 25 12:12:41 np0005535469 stoic_bohr[403968]: }
Nov 25 12:12:41 np0005535469 systemd[1]: libpod-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope: Deactivated successfully.
Nov 25 12:12:41 np0005535469 podman[403952]: 2025-11-25 17:12:41.932787952 +0000 UTC m=+1.250021435 container died 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:12:41 np0005535469 systemd[1]: libpod-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope: Consumed 1.096s CPU time.
Nov 25 12:12:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27-merged.mount: Deactivated successfully.
Nov 25 12:12:41 np0005535469 podman[403952]: 2025-11-25 17:12:41.983709007 +0000 UTC m=+1.300942490 container remove 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:12:41 np0005535469 systemd[1]: libpod-conmon-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope: Deactivated successfully.
Nov 25 12:12:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:12:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:12:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:12:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:12:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 95f5f54c-199c-4d78-a036-bb1c0464fcd4 does not exist
Nov 25 12:12:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e0cf41cb-9e2c-45e1-920f-1fa8c9d7ead1 does not exist
Nov 25 12:12:42 np0005535469 nova_compute[254092]: 2025-11-25 17:12:42.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:42 np0005535469 nova_compute[254092]: 2025-11-25 17:12:42.702 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:12:42 np0005535469 nova_compute[254092]: 2025-11-25 17:12:42.702 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:12:42 np0005535469 nova_compute[254092]: 2025-11-25 17:12:42.703 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:12:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 KiB/s wr, 56 op/s
Nov 25 12:12:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:12:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:12:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Nov 25 12:12:45 np0005535469 nova_compute[254092]: 2025-11-25 17:12:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:12:45 np0005535469 nova_compute[254092]: 2025-11-25 17:12:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 12:12:45 np0005535469 nova_compute[254092]: 2025-11-25 17:12:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 12:12:45 np0005535469 nova_compute[254092]: 2025-11-25 17:12:45.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 12:12:45 np0005535469 nova_compute[254092]: 2025-11-25 17:12:45.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Nov 25 12:12:47 np0005535469 nova_compute[254092]: 2025-11-25 17:12:47.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:47 np0005535469 nova_compute[254092]: 2025-11-25 17:12:47.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:47 np0005535469 nova_compute[254092]: 2025-11-25 17:12:47.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:48 np0005535469 nova_compute[254092]: 2025-11-25 17:12:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:12:48 np0005535469 nova_compute[254092]: 2025-11-25 17:12:48.576 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090753.5749605, a8194956-04fe-46d6-9b07-63486afc3c7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:12:48 np0005535469 nova_compute[254092]: 2025-11-25 17:12:48.576 254096 INFO nova.compute.manager [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Stopped (Lifecycle Event)
Nov 25 12:12:48 np0005535469 nova_compute[254092]: 2025-11-25 17:12:48.592 254096 DEBUG nova.compute.manager [None req-e63dc93c-d523-47fc-a400-dea559a995c2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:12:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:12:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:50 np0005535469 nova_compute[254092]: 2025-11-25 17:12:50.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:12:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:12:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:12:52 np0005535469 nova_compute[254092]: 2025-11-25 17:12:52.089 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090757.088163, 2f4d2580-5acd-4693-a158-926565a16fe9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:12:52 np0005535469 nova_compute[254092]: 2025-11-25 17:12:52.090 254096 INFO nova.compute.manager [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:12:52 np0005535469 nova_compute[254092]: 2025-11-25 17:12:52.110 254096 DEBUG nova.compute.manager [None req-8846d0b1-5462-4e60-b3a4-8316f5dde1f2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:12:52 np0005535469 nova_compute[254092]: 2025-11-25 17:12:52.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 25 12:12:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:12:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:12:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:12:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762493568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:12:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:12:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762493568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:12:55 np0005535469 nova_compute[254092]: 2025-11-25 17:12:55.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:12:57 np0005535469 nova_compute[254092]: 2025-11-25 17:12:57.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:12:57 np0005535469 podman[404091]: 2025-11-25 17:12:57.637309818 +0000 UTC m=+0.053391074 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 12:12:57 np0005535469 podman[404090]: 2025-11-25 17:12:57.6454658 +0000 UTC m=+0.062219104 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 12:12:57 np0005535469 podman[404092]: 2025-11-25 17:12:57.697374943 +0000 UTC m=+0.107788335 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 12:12:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:12:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:00 np0005535469 nova_compute[254092]: 2025-11-25 17:13:00.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:02 np0005535469 nova_compute[254092]: 2025-11-25 17:13:02.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:05 np0005535469 nova_compute[254092]: 2025-11-25 17:13:05.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:06.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:13:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:06.137 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:13:06 np0005535469 nova_compute[254092]: 2025-11-25 17:13:06.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:07 np0005535469 nova_compute[254092]: 2025-11-25 17:13:07.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.416 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.416 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.429 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.543 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.544 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.554 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.555 254096 INFO nova.compute.claims [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:10 np0005535469 nova_compute[254092]: 2025-11-25 17:13:10.706 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:13:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193490772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.127 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.133 254096 DEBUG nova.compute.provider_tree [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:13:11 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:11.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.145 254096 DEBUG nova.scheduler.client.report [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.161 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.162 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.198 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.198 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.216 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.231 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.311 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.312 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.312 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Creating image(s)#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.330 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.347 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.364 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.368 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.439 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.440 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.441 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.441 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.461 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.463 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.732 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.789 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.840 254096 DEBUG nova.policy [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.877 254096 DEBUG nova.objects.instance [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.891 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.892 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Ensure instance console log exists: /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.892 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.893 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:11 np0005535469 nova_compute[254092]: 2025-11-25 17:13:11.893 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:12 np0005535469 nova_compute[254092]: 2025-11-25 17:13:12.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:13.652 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:13.652 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:13 np0005535469 nova_compute[254092]: 2025-11-25 17:13:13.659 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully created port: bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:13:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:14 np0005535469 nova_compute[254092]: 2025-11-25 17:13:14.108 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully created port: 831eaa83-55bd-4098-9037-4b628eb8d994 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:13:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.190 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully updated port: bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.320 254096 DEBUG nova.compute.manager [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.320 254096 DEBUG nova.compute.manager [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.321 254096 DEBUG oslo_concurrency.lockutils [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.321 254096 DEBUG oslo_concurrency.lockutils [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.321 254096 DEBUG nova.network.neutron [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.658 254096 DEBUG nova.network.neutron [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:13:15 np0005535469 nova_compute[254092]: 2025-11-25 17:13:15.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.029 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully updated port: 831eaa83-55bd-4098-9037-4b628eb8d994 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.041 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.102 254096 DEBUG nova.network.neutron [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.122 254096 DEBUG oslo_concurrency.lockutils [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.122 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.123 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:13:16 np0005535469 nova_compute[254092]: 2025-11-25 17:13:16.291 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:13:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:17 np0005535469 nova_compute[254092]: 2025-11-25 17:13:17.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:17 np0005535469 nova_compute[254092]: 2025-11-25 17:13:17.435 254096 DEBUG nova.compute.manager [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:17 np0005535469 nova_compute[254092]: 2025-11-25 17:13:17.435 254096 DEBUG nova.compute.manager [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-831eaa83-55bd-4098-9037-4b628eb8d994. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:13:17 np0005535469 nova_compute[254092]: 2025-11-25 17:13:17.436 254096 DEBUG oslo_concurrency.lockutils [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.336 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.356 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.356 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance network_info: |[{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.357 254096 DEBUG oslo_concurrency.lockutils [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.357 254096 DEBUG nova.network.neutron [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port 831eaa83-55bd-4098-9037-4b628eb8d994 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.361 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start _get_guest_xml network_info=[{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.365 254096 WARNING nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.370 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.370 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.376 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.376 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.382 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:13:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964183865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.833 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.858 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:18 np0005535469 nova_compute[254092]: 2025-11-25 17:13:18.862 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:13:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451132653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.284 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.286 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.286 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.287 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.288 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.288 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.289 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.290 254096 DEBUG nova.objects.instance [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.306 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <uuid>3b797ee6-c82f-4c01-bb54-31de659fcad8</uuid>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <name>instance-00000089</name>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-848969978</nova:name>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:13:18</nova:creationTime>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:port uuid="bf0e0412-082f-4b0e-aabe-4e4f0de25b43">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <nova:port uuid="831eaa83-55bd-4098-9037-4b628eb8d994">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb0:cda4" ipVersion="6"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <entry name="serial">3b797ee6-c82f-4c01-bb54-31de659fcad8</entry>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <entry name="uuid">3b797ee6-c82f-4c01-bb54-31de659fcad8</entry>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3b797ee6-c82f-4c01-bb54-31de659fcad8_disk">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:3b:43:89"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <target dev="tapbf0e0412-08"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b0:cd:a4"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <target dev="tap831eaa83-55"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/console.log" append="off"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:13:19 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:13:19 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:13:19 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:13:19 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Preparing to wait for external event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Preparing to wait for external event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.310 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.310 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.311 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.311 254096 DEBUG os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.315 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0e0412-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0e0412-08, col_values=(('external_ids', {'iface-id': 'bf0e0412-082f-4b0e-aabe-4e4f0de25b43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:43:89', 'vm-uuid': '3b797ee6-c82f-4c01-bb54-31de659fcad8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.317 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 NetworkManager[48891]: <info>  [1764090799.3191] manager: (tapbf0e0412-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.324 254096 INFO os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08')#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.325 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.325 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.326 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.326 254096 DEBUG os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.327 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.327 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.329 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap831eaa83-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.329 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap831eaa83-55, col_values=(('external_ids', {'iface-id': '831eaa83-55bd-4098-9037-4b628eb8d994', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:cd:a4', 'vm-uuid': '3b797ee6-c82f-4c01-bb54-31de659fcad8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 NetworkManager[48891]: <info>  [1764090799.3319] manager: (tap831eaa83-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.340 254096 INFO os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55')#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.402 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.403 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.403 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:3b:43:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.404 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:b0:cd:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.404 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Using config drive#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.437 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.884 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Creating config drive at /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.895 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpreeh6xzc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.941 254096 DEBUG nova.network.neutron [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated VIF entry in instance network info cache for port 831eaa83-55bd-4098-9037-4b628eb8d994. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.943 254096 DEBUG nova.network.neutron [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:19 np0005535469 nova_compute[254092]: 2025-11-25 17:13:19.957 254096 DEBUG oslo_concurrency.lockutils [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.046 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpreeh6xzc" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.087 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.094 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.368 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.369 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deleting local config drive /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config because it was imported into RBD.#033[00m
Nov 25 12:13:20 np0005535469 kernel: tapbf0e0412-08: entered promiscuous mode
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.4463] manager: (tapbf0e0412-08): new Tun device (/org/freedesktop/NetworkManager/Devices/585)
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01422|binding|INFO|Claiming lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for this chassis.
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01423|binding|INFO|bf0e0412-082f-4b0e-aabe-4e4f0de25b43: Claiming fa:16:3e:3b:43:89 10.100.0.8
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:43:89 10.100.0.8'], port_security=['fa:16:3e:3b:43:89 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bf0e0412-082f-4b0e-aabe-4e4f0de25b43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.472 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d bound to our chassis#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.474 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddff42cd-c011-4371-96b1-f2bb5093a16d#033[00m
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.4764] manager: (tap831eaa83-55): new Tun device (/org/freedesktop/NetworkManager/Devices/586)
Nov 25 12:13:20 np0005535469 systemd-udevd[404483]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.486 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc14153-682f-4120-9b66-0bb065fc8bfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.487 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddff42cd-c1 in ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:13:20 np0005535469 systemd-udevd[404484]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.490 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddff42cd-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66ccfbd7-14d7-495a-9ed1-88131abf007f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.492 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f77a98dd-3a84-4570-bf04-d83458b101c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.5020] device (tapbf0e0412-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.5031] device (tapbf0e0412-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.509 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fe2b23-ade2-47c3-809b-6fc91e5d46db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 systemd-machined[216343]: New machine qemu-171-instance-00000089.
Nov 25 12:13:20 np0005535469 kernel: tap831eaa83-55: entered promiscuous mode
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.5349] device (tap831eaa83-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.5360] device (tap831eaa83-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01424|binding|INFO|Claiming lport 831eaa83-55bd-4098-9037-4b628eb8d994 for this chassis.
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01425|binding|INFO|831eaa83-55bd-4098-9037-4b628eb8d994: Claiming fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4
Nov 25 12:13:20 np0005535469 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01426|binding|INFO|Setting lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 ovn-installed in OVS
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01427|binding|INFO|Setting lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 up in Southbound
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.545 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], port_security=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb0:cda4/64', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=831eaa83-55bd-4098-9037-4b628eb8d994) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.547 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9018650-308d-4d26-bd25-54287f53a06d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01428|binding|INFO|Setting lport 831eaa83-55bd-4098-9037-4b628eb8d994 ovn-installed in OVS
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01429|binding|INFO|Setting lport 831eaa83-55bd-4098-9037-4b628eb8d994 up in Southbound
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.577 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[45a32e69-f62e-43c6-9d51-70111efbd2b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d82d789-ec85-4e89-b471-7270328e5d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.5842] manager: (tapddff42cd-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/587)
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.615 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[52a650e3-8b1a-439f-8795-f631992465d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.619 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[27039a28-5864-4f4f-a967-4b8ad3e6b733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.6526] device (tapddff42cd-c0): carrier: link connected
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.659 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5c5ccc-eab8-486e-996b-700225fb344c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.682 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f5064d-90b1-4d45-8e59-24033c19f3d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404519, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b77c4b-bfcf-43f4-b595-5bc953d75a11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:82d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725824, 'tstamp': 725824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404520, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.734 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67c51a2f-2441-48e2-8c86-5d92e5cd687c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404521, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.780 254096 DEBUG nova.compute.manager [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG oslo_concurrency.lockutils [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG oslo_concurrency.lockutils [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG oslo_concurrency.lockutils [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG nova.compute.manager [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Processing event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.783 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27967f56-5d11-4493-9e9d-f5ebcd3b9e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.794 254096 DEBUG nova.compute.manager [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.794 254096 DEBUG oslo_concurrency.lockutils [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.795 254096 DEBUG oslo_concurrency.lockutils [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.795 254096 DEBUG oslo_concurrency.lockutils [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.795 254096 DEBUG nova.compute.manager [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Processing event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.862 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9871c6ec-67c0-4286-9fc9-675a85d0ef41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.864 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.864 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddff42cd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:20 np0005535469 kernel: tapddff42cd-c0: entered promiscuous mode
Nov 25 12:13:20 np0005535469 NetworkManager[48891]: <info>  [1764090800.8662] manager: (tapddff42cd-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.868 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddff42cd-c0, col_values=(('external_ids', {'iface-id': '9e2ffcd6-4edd-4351-a44c-02659b205145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:20 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:20Z|01430|binding|INFO|Releasing lport 9e2ffcd6-4edd-4351-a44c-02659b205145 from this chassis (sb_readonly=0)
Nov 25 12:13:20 np0005535469 nova_compute[254092]: 2025-11-25 17:13:20.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.898 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddff42cd-c011-4371-96b1-f2bb5093a16d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddff42cd-c011-4371-96b1-f2bb5093a16d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[056dd2d7-7a41-46f8-8481-73473608d67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.902 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/ddff42cd-c011-4371-96b1-f2bb5093a16d.pid.haproxy
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:13:20 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.903 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'env', 'PROCESS_TAG=haproxy-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddff42cd-c011-4371-96b1-f2bb5093a16d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:13:21 np0005535469 podman[404553]: 2025-11-25 17:13:21.405055838 +0000 UTC m=+0.090584817 container create a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 12:13:21 np0005535469 systemd[1]: Started libpod-conmon-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587.scope.
Nov 25 12:13:21 np0005535469 podman[404553]: 2025-11-25 17:13:21.364261697 +0000 UTC m=+0.049790746 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:13:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51787ffbad5844d0d6317c349bae98d9600fb763a7a78dde87ceae6df6f2b06d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:21 np0005535469 podman[404553]: 2025-11-25 17:13:21.510807966 +0000 UTC m=+0.196336965 container init a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:13:21 np0005535469 podman[404553]: 2025-11-25 17:13:21.516831 +0000 UTC m=+0.202359979 container start a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:13:21 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : New worker (404574) forked
Nov 25 12:13:21 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : Loading success.
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.576 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 831eaa83-55bd-4098-9037-4b628eb8d994 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0063509c-db60-47db-9c49-72faf9b698d7#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.595 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86c9ebfd-2713-44ae-8d5c-2ce1ca46a3ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0063509c-d1 in ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.598 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0063509c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[807a0361-6006-4142-9db7-bd751d7b6305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.599 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e3b3ae-4bb7-455d-b28e-4685061c9e42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.613 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c65c5c93-7bc0-4614-8106-06876a9de3bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.637 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f175f8b-6a3b-477b-8cde-dedd727901a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.668 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d67e1803-cf72-43a0-a7ab-cbd338a713a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 NetworkManager[48891]: <info>  [1764090801.6784] manager: (tap0063509c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/589)
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.677 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03a24bc2-f6f3-4b5e-964a-14913e5a67fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.731 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf33da0-0876-4bb7-ba6a-e84d0ed2be83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.735 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e32fe6b1-568f-4c2b-98e6-11b0e4db20ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 NetworkManager[48891]: <info>  [1764090801.7650] device (tap0063509c-d0): carrier: link connected
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.776 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[650c746c-8518-4242-bab7-338be1cae420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.795 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7403eb-09e9-4b2f-8f34-fd66d660594a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404593, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.827 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dbfdd642-2765-4d55-bd5e-6e863670daa2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:4959'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725935, 'tstamp': 725935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404594, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.850 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc868875-3d3c-40d0-8e77-7b66b693f890]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404595, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[920397de-6f1a-4ca3-82c9-5595d1f791f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.937 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be1c7155-78e5-4475-814b-da04d0e77482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.940 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.940 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0063509c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:21 np0005535469 NetworkManager[48891]: <info>  [1764090801.9439] manager: (tap0063509c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Nov 25 12:13:21 np0005535469 nova_compute[254092]: 2025-11-25 17:13:21.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:21 np0005535469 kernel: tap0063509c-d0: entered promiscuous mode
Nov 25 12:13:21 np0005535469 nova_compute[254092]: 2025-11-25 17:13:21.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0063509c-d0, col_values=(('external_ids', {'iface-id': '76c9f96b-1150-4e5a-a9e7-680d9f908998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:21 np0005535469 nova_compute[254092]: 2025-11-25 17:13:21.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:21 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:21Z|01431|binding|INFO|Releasing lport 76c9f96b-1150-4e5a-a9e7-680d9f908998 from this chassis (sb_readonly=0)
Nov 25 12:13:21 np0005535469 nova_compute[254092]: 2025-11-25 17:13:21.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.951 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0063509c-db60-47db-9c49-72faf9b698d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0063509c-db60-47db-9c49-72faf9b698d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7174f2-44ea-43a1-b974-ccf8c5c0aa74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.953 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/0063509c-db60-47db-9c49-72faf9b698d7.pid.haproxy
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:13:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.954 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'env', 'PROCESS_TAG=haproxy-0063509c-db60-47db-9c49-72faf9b698d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0063509c-db60-47db-9c49-72faf9b698d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:13:21 np0005535469 nova_compute[254092]: 2025-11-25 17:13:21.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:22 np0005535469 podman[404663]: 2025-11-25 17:13:22.34748454 +0000 UTC m=+0.072342330 container create 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.363 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.364 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090802.3631496, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.364 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Started (Lifecycle Event)#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.370 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.378 254096 INFO nova.virt.libvirt.driver [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance spawned successfully.#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.378 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:13:22 np0005535469 systemd[1]: Started libpod-conmon-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0.scope.
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.384 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.394 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.394 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.394 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.395 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.395 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.395 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.398 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.398 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090802.3641853, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.399 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:13:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:22 np0005535469 podman[404663]: 2025-11-25 17:13:22.323148957 +0000 UTC m=+0.048006767 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:13:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e541847c2aac7dd59889538a8dcfac905bc7101e6c3c35a058f1755af39f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.420 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.422 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090802.3697693, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.423 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:13:22 np0005535469 podman[404663]: 2025-11-25 17:13:22.434418586 +0000 UTC m=+0.159276406 container init 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 12:13:22 np0005535469 podman[404663]: 2025-11-25 17:13:22.441856779 +0000 UTC m=+0.166714569 container start 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.445 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.448 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.454 254096 INFO nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 11.14 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.455 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:13:22 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : New worker (404690) forked
Nov 25 12:13:22 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : Loading success.
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.507 254096 INFO nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 12.01 seconds to build instance.#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.519 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.894 254096 DEBUG nova.compute.manager [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.894 254096 DEBUG oslo_concurrency.lockutils [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 DEBUG oslo_concurrency.lockutils [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 DEBUG oslo_concurrency.lockutils [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 DEBUG nova.compute.manager [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 WARNING nova.compute.manager [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.969 254096 DEBUG nova.compute.manager [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.969 254096 DEBUG oslo_concurrency.lockutils [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 DEBUG oslo_concurrency.lockutils [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 DEBUG oslo_concurrency.lockutils [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 DEBUG nova.compute.manager [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:13:22 np0005535469 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 WARNING nova.compute.manager [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:13:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:24 np0005535469 nova_compute[254092]: 2025-11-25 17:13:24.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:25 np0005535469 nova_compute[254092]: 2025-11-25 17:13:25.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:13:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:26Z|01432|binding|INFO|Releasing lport 76c9f96b-1150-4e5a-a9e7-680d9f908998 from this chassis (sb_readonly=0)
Nov 25 12:13:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:26Z|01433|binding|INFO|Releasing lport 9e2ffcd6-4edd-4351-a44c-02659b205145 from this chassis (sb_readonly=0)
Nov 25 12:13:26 np0005535469 NetworkManager[48891]: <info>  [1764090806.8308] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Nov 25 12:13:26 np0005535469 NetworkManager[48891]: <info>  [1764090806.8323] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Nov 25 12:13:26 np0005535469 nova_compute[254092]: 2025-11-25 17:13:26.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:26Z|01434|binding|INFO|Releasing lport 76c9f96b-1150-4e5a-a9e7-680d9f908998 from this chassis (sb_readonly=0)
Nov 25 12:13:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:26Z|01435|binding|INFO|Releasing lport 9e2ffcd6-4edd-4351-a44c-02659b205145 from this chassis (sb_readonly=0)
Nov 25 12:13:26 np0005535469 nova_compute[254092]: 2025-11-25 17:13:26.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:27 np0005535469 nova_compute[254092]: 2025-11-25 17:13:27.160 254096 DEBUG nova.compute.manager [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:27 np0005535469 nova_compute[254092]: 2025-11-25 17:13:27.161 254096 DEBUG nova.compute.manager [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:13:27 np0005535469 nova_compute[254092]: 2025-11-25 17:13:27.161 254096 DEBUG oslo_concurrency.lockutils [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:27 np0005535469 nova_compute[254092]: 2025-11-25 17:13:27.161 254096 DEBUG oslo_concurrency.lockutils [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:27 np0005535469 nova_compute[254092]: 2025-11-25 17:13:27.162 254096 DEBUG nova.network.neutron [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:13:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:13:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:29 np0005535469 nova_compute[254092]: 2025-11-25 17:13:29.113 254096 DEBUG nova.network.neutron [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated VIF entry in instance network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:13:29 np0005535469 nova_compute[254092]: 2025-11-25 17:13:29.114 254096 DEBUG nova.network.neutron [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:29 np0005535469 nova_compute[254092]: 2025-11-25 17:13:29.133 254096 DEBUG oslo_concurrency.lockutils [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:29 np0005535469 podman[404710]: 2025-11-25 17:13:29.192736923 +0000 UTC m=+0.061362672 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 12:13:29 np0005535469 podman[404700]: 2025-11-25 17:13:29.199915658 +0000 UTC m=+0.099677624 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:13:29 np0005535469 podman[404712]: 2025-11-25 17:13:29.239711221 +0000 UTC m=+0.102931742 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 12:13:29 np0005535469 nova_compute[254092]: 2025-11-25 17:13:29.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:30 np0005535469 nova_compute[254092]: 2025-11-25 17:13:30.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:13:31 np0005535469 nova_compute[254092]: 2025-11-25 17:13:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:13:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:34 np0005535469 nova_compute[254092]: 2025-11-25 17:13:34.335 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 12:13:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:35Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:43:89 10.100.0.8
Nov 25 12:13:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:35Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:43:89 10.100.0.8
Nov 25 12:13:35 np0005535469 nova_compute[254092]: 2025-11-25 17:13:35.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 118 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 25 12:13:38 np0005535469 nova_compute[254092]: 2025-11-25 17:13:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 118 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 25 12:13:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:39 np0005535469 nova_compute[254092]: 2025-11-25 17:13:39.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:13:40
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data']
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:13:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:13:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.2 total, 600.0 interval
Cumulative writes: 39K writes, 162K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Cumulative WAL: 39K writes, 14K syncs, 2.84 writes per sync, written: 0.15 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4681 writes, 19K keys, 4681 commit groups, 1.0 writes per commit group, ingest: 20.83 MB, 0.03 MB/s
Interval WAL: 4681 writes, 1788 syncs, 2.62 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:13:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:13:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:13:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777566246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:13:40 np0005535469 nova_compute[254092]: 2025-11-25 17:13:40.961 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.056 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.056 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.275 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.276 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.94353485107422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.276 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.276 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.325 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 3b797ee6-c82f-4c01-bb54-31de659fcad8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.326 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.326 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.363 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:13:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:13:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000141578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.769 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.775 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.798 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.831 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:13:41 np0005535469 nova_compute[254092]: 2025-11-25 17:13:41.831 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:13:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:13:42 np0005535469 nova_compute[254092]: 2025-11-25 17:13:42.827 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:13:42 np0005535469 nova_compute[254092]: 2025-11-25 17:13:42.829 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:13:42 np0005535469 nova_compute[254092]: 2025-11-25 17:13:42.829 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:13:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 01e06fed-d822-4b7c-9fa4-72bcf23b5bdf does not exist
Nov 25 12:13:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev bb345b48-71a5-4c92-857f-3b1c83949a0f does not exist
Nov 25 12:13:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2007107d-0597-4843-aa30-1f934a325df9 does not exist
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:13:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.626044457 +0000 UTC m=+0.040784821 container create 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:13:43 np0005535469 systemd[1]: Started libpod-conmon-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope.
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.606240178 +0000 UTC m=+0.020980552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:13:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.739520685 +0000 UTC m=+0.154261089 container init 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.747831062 +0000 UTC m=+0.162571446 container start 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:13:43 np0005535469 hardcore_burnell[405093]: 167 167
Nov 25 12:13:43 np0005535469 systemd[1]: libpod-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope: Deactivated successfully.
Nov 25 12:13:43 np0005535469 conmon[405093]: conmon 9127fb75ee98ddc9998c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope/container/memory.events
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.756375164 +0000 UTC m=+0.171115518 container attach 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.757611278 +0000 UTC m=+0.172351672 container died 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:13:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-509dcad1c59146c290d28ab6d921495739286c56f60efe500719027c89afdc18-merged.mount: Deactivated successfully.
Nov 25 12:13:43 np0005535469 podman[405077]: 2025-11-25 17:13:43.836081074 +0000 UTC m=+0.250821438 container remove 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:13:43 np0005535469 systemd[1]: libpod-conmon-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope: Deactivated successfully.
Nov 25 12:13:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:44 np0005535469 podman[405115]: 2025-11-25 17:13:44.104906572 +0000 UTC m=+0.063776338 container create 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:13:44 np0005535469 systemd[1]: Started libpod-conmon-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope.
Nov 25 12:13:44 np0005535469 podman[405115]: 2025-11-25 17:13:44.084106455 +0000 UTC m=+0.042976161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:13:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:44 np0005535469 podman[405115]: 2025-11-25 17:13:44.212542751 +0000 UTC m=+0.171412457 container init 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:13:44 np0005535469 podman[405115]: 2025-11-25 17:13:44.221559706 +0000 UTC m=+0.180429392 container start 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:13:44 np0005535469 podman[405115]: 2025-11-25 17:13:44.225043712 +0000 UTC m=+0.183913398 container attach 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 25 12:13:44 np0005535469 nova_compute[254092]: 2025-11-25 17:13:44.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:13:44 np0005535469 nova_compute[254092]: 2025-11-25 17:13:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:13:44 np0005535469 nova_compute[254092]: 2025-11-25 17:13:44.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 12:13:44 np0005535469 nova_compute[254092]: 2025-11-25 17:13:44.508 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 12:13:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:13:45 np0005535469 admiring_jang[405132]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:13:45 np0005535469 admiring_jang[405132]: --> relative data size: 1.0
Nov 25 12:13:45 np0005535469 admiring_jang[405132]: --> All data devices are unavailable
Nov 25 12:13:45 np0005535469 systemd[1]: libpod-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope: Deactivated successfully.
Nov 25 12:13:45 np0005535469 systemd[1]: libpod-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope: Consumed 1.023s CPU time.
Nov 25 12:13:45 np0005535469 podman[405161]: 2025-11-25 17:13:45.369438171 +0000 UTC m=+0.047256718 container died 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:13:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7-merged.mount: Deactivated successfully.
Nov 25 12:13:45 np0005535469 podman[405161]: 2025-11-25 17:13:45.442332815 +0000 UTC m=+0.120151372 container remove 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:13:45 np0005535469 systemd[1]: libpod-conmon-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope: Deactivated successfully.
Nov 25 12:13:45 np0005535469 nova_compute[254092]: 2025-11-25 17:13:45.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.118465098 +0000 UTC m=+0.052463868 container create fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:13:46 np0005535469 systemd[1]: Started libpod-conmon-fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25.scope.
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.096068078 +0000 UTC m=+0.030066898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:13:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.234341093 +0000 UTC m=+0.168339903 container init fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.244045036 +0000 UTC m=+0.178043806 container start fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.248496788 +0000 UTC m=+0.182495578 container attach fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:13:46 np0005535469 stoic_stonebraker[405333]: 167 167
Nov 25 12:13:46 np0005535469 systemd[1]: libpod-fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25.scope: Deactivated successfully.
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.253877555 +0000 UTC m=+0.187876345 container died fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:13:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-94397c57a9ffb74739d2f457cc2b11d9187b1aa7c68f306f0192c67c2203ff8b-merged.mount: Deactivated successfully.
Nov 25 12:13:46 np0005535469 podman[405316]: 2025-11-25 17:13:46.299389343 +0000 UTC m=+0.233388123 container remove fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:13:46 np0005535469 systemd[1]: libpod-conmon-fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25.scope: Deactivated successfully.
Nov 25 12:13:46 np0005535469 podman[405357]: 2025-11-25 17:13:46.511689812 +0000 UTC m=+0.058621407 container create 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:13:46 np0005535469 systemd[1]: Started libpod-conmon-0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347.scope.
Nov 25 12:13:46 np0005535469 podman[405357]: 2025-11-25 17:13:46.491001578 +0000 UTC m=+0.037933213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:13:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:46 np0005535469 podman[405357]: 2025-11-25 17:13:46.633528668 +0000 UTC m=+0.180460263 container init 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:13:46 np0005535469 podman[405357]: 2025-11-25 17:13:46.653170253 +0000 UTC m=+0.200101838 container start 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:13:46 np0005535469 podman[405357]: 2025-11-25 17:13:46.658248521 +0000 UTC m=+0.205180126 container attach 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:13:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.829 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.830 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.830 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:13:46 np0005535469 nova_compute[254092]: 2025-11-25 17:13:46.831 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.378 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.378 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.397 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:13:47 np0005535469 silly_darwin[405373]: {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:    "0": [
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:        {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "devices": [
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "/dev/loop3"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            ],
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_name": "ceph_lv0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_size": "21470642176",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "name": "ceph_lv0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "tags": {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cluster_name": "ceph",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.crush_device_class": "",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.encrypted": "0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osd_id": "0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.type": "block",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.vdo": "0"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            },
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "type": "block",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "vg_name": "ceph_vg0"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:        }
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:    ],
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:    "1": [
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:        {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "devices": [
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "/dev/loop4"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            ],
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_name": "ceph_lv1",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_size": "21470642176",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "name": "ceph_lv1",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "tags": {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cluster_name": "ceph",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.crush_device_class": "",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.encrypted": "0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osd_id": "1",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.type": "block",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.vdo": "0"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            },
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "type": "block",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "vg_name": "ceph_vg1"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:        }
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:    ],
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:    "2": [
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:        {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "devices": [
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "/dev/loop5"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            ],
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_name": "ceph_lv2",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_size": "21470642176",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "name": "ceph_lv2",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "tags": {
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.cluster_name": "ceph",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.crush_device_class": "",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.encrypted": "0",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osd_id": "2",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.type": "block",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:                "ceph.vdo": "0"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            },
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "type": "block",
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:            "vg_name": "ceph_vg2"
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:        }
Nov 25 12:13:47 np0005535469 silly_darwin[405373]:    ]
Nov 25 12:13:47 np0005535469 silly_darwin[405373]: }
Nov 25 12:13:47 np0005535469 systemd[1]: libpod-0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347.scope: Deactivated successfully.
Nov 25 12:13:47 np0005535469 podman[405357]: 2025-11-25 17:13:47.468853325 +0000 UTC m=+1.015784910 container died 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.475 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.476 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.486 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.486 254096 INFO nova.compute.claims [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:13:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525-merged.mount: Deactivated successfully.
Nov 25 12:13:47 np0005535469 podman[405357]: 2025-11-25 17:13:47.532270281 +0000 UTC m=+1.079201866 container remove 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:13:47 np0005535469 systemd[1]: libpod-conmon-0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347.scope: Deactivated successfully.
Nov 25 12:13:47 np0005535469 nova_compute[254092]: 2025-11-25 17:13:47.585 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:13:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578496728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.058 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.065 254096 DEBUG nova.compute.provider_tree [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.077 254096 DEBUG nova.scheduler.client.report [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.094 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.094 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.140 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.140 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.154 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.172 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.259960538 +0000 UTC m=+0.044181043 container create ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.261 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.262 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.263 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Creating image(s)#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.292 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:48 np0005535469 systemd[1]: Started libpod-conmon-ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325.scope.
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.322 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.237169468 +0000 UTC m=+0.021389993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.342394902 +0000 UTC m=+0.126615437 container init ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.348 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.35185826 +0000 UTC m=+0.136078765 container start ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.354 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.356137106 +0000 UTC m=+0.140357611 container attach ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:13:48 np0005535469 fervent_chaplygin[405590]: 167 167
Nov 25 12:13:48 np0005535469 systemd[1]: libpod-ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325.scope: Deactivated successfully.
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.359400155 +0000 UTC m=+0.143620660 container died ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:13:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bae62e2d7d09056285387f3767efa94af798482c60c58d91549d7dfdf1b23562-merged.mount: Deactivated successfully.
Nov 25 12:13:48 np0005535469 podman[405555]: 2025-11-25 17:13:48.394968263 +0000 UTC m=+0.179188768 container remove ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.402 254096 DEBUG nova.policy [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:13:48 np0005535469 systemd[1]: libpod-conmon-ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325.scope: Deactivated successfully.
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.438 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.439 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.440 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.440 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.462 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.467 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6157e53b-5ff5-4b55-b71a-125301c5268a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:48 np0005535469 podman[405672]: 2025-11-25 17:13:48.579527787 +0000 UTC m=+0.043807784 container create 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:13:48 np0005535469 systemd[1]: Started libpod-conmon-452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1.scope.
Nov 25 12:13:48 np0005535469 podman[405672]: 2025-11-25 17:13:48.561820015 +0000 UTC m=+0.026100032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:13:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:13:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:13:48 np0005535469 podman[405672]: 2025-11-25 17:13:48.677456642 +0000 UTC m=+0.141736659 container init 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:13:48 np0005535469 podman[405672]: 2025-11-25 17:13:48.690413745 +0000 UTC m=+0.154693732 container start 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:13:48 np0005535469 podman[405672]: 2025-11-25 17:13:48.700011956 +0000 UTC m=+0.164291953 container attach 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.740 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6157e53b-5ff5-4b55-b71a-125301c5268a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 92 KiB/s wr, 10 op/s
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.811 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.918 254096 DEBUG nova.objects.instance [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 6157e53b-5ff5-4b55-b71a-125301c5268a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.929 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.929 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Ensure instance console log exists: /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.929 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:48 np0005535469 nova_compute[254092]: 2025-11-25 17:13:48.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:49.232 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:49.235 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:13:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:49.237 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.294 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.317 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.318 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.318 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.375 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully created port: c7c65059-3fc6-4a84-b8ce-de1306c01e13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]: {
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "osd_id": 1,
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "type": "bluestore"
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:    },
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "osd_id": 2,
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "type": "bluestore"
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:    },
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "osd_id": 0,
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:        "type": "bluestore"
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]:    }
Nov 25 12:13:49 np0005535469 festive_hypatia[405707]: }
Nov 25 12:13:49 np0005535469 systemd[1]: libpod-452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1.scope: Deactivated successfully.
Nov 25 12:13:49 np0005535469 podman[405812]: 2025-11-25 17:13:49.709038701 +0000 UTC m=+0.025668880 container died 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:13:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b-merged.mount: Deactivated successfully.
Nov 25 12:13:49 np0005535469 podman[405812]: 2025-11-25 17:13:49.767112812 +0000 UTC m=+0.083742971 container remove 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:13:49 np0005535469 systemd[1]: libpod-conmon-452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1.scope: Deactivated successfully.
Nov 25 12:13:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:13:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:13:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:13:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:13:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 60bc785f-4397-465b-ace3-cea5fb1ac089 does not exist
Nov 25 12:13:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f8b0ff6c-ab0a-4ddd-a0ba-a9a1998a64de does not exist
Nov 25 12:13:49 np0005535469 nova_compute[254092]: 2025-11-25 17:13:49.910 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully created port: 05bba2c3-2422-401c-831b-f9af92f47719 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:13:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:13:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.2 total, 600.0 interval#012Cumulative writes: 42K writes, 165K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 42K writes, 14K syncs, 2.83 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4620 writes, 20K keys, 4620 commit groups, 1.0 writes per commit group, ingest: 23.67 MB, 0.04 MB/s#012Interval WAL: 4620 writes, 1779 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:13:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:13:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 148 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 982 KiB/s wr, 25 op/s
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.831 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully updated port: c7c65059-3fc6-4a84-b8ce-de1306c01e13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.963 254096 DEBUG nova.compute.manager [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.963 254096 DEBUG nova.compute.manager [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.964 254096 DEBUG oslo_concurrency.lockutils [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.964 254096 DEBUG oslo_concurrency.lockutils [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:50 np0005535469 nova_compute[254092]: 2025-11-25 17:13:50.965 254096 DEBUG nova.network.neutron [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.196 254096 DEBUG nova.network.neutron [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.650 254096 DEBUG nova.network.neutron [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.662 254096 DEBUG oslo_concurrency.lockutils [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000928678697011135 of space, bias 1.0, pg target 0.2786036091033405 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:13:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.692 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully updated port: 05bba2c3-2422-401c-831b-f9af92f47719 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.708 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.708 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:51 np0005535469 nova_compute[254092]: 2025-11-25 17:13:51.709 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:13:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:52 np0005535469 nova_compute[254092]: 2025-11-25 17:13:52.870 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:13:53 np0005535469 nova_compute[254092]: 2025-11-25 17:13:53.039 254096 DEBUG nova.compute.manager [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:53 np0005535469 nova_compute[254092]: 2025-11-25 17:13:53.039 254096 DEBUG nova.compute.manager [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-05bba2c3-2422-401c-831b-f9af92f47719. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:13:53 np0005535469 nova_compute[254092]: 2025-11-25 17:13:53.040 254096 DEBUG oslo_concurrency.lockutils [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:13:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:54 np0005535469 nova_compute[254092]: 2025-11-25 17:13:54.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:13:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1647477624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:13:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:13:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1647477624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.882 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.902 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.903 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance network_info: |[{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.904 254096 DEBUG oslo_concurrency.lockutils [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.904 254096 DEBUG nova.network.neutron [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port 05bba2c3-2422-401c-831b-f9af92f47719 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.912 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start _get_guest_xml network_info=[{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.920 254096 WARNING nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.934 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.935 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.939 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.940 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.940 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.941 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.942 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.942 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.943 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.943 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.944 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.944 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.945 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.946 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.947 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.947 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:13:55 np0005535469 nova_compute[254092]: 2025-11-25 17:13:55.952 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.300 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:13:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865905048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.412 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.438 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.443 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:13:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506127068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.908 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.910 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.910 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.911 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.912 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.913 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.913 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.914 254096 DEBUG nova.objects.instance [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6157e53b-5ff5-4b55-b71a-125301c5268a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.928 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <uuid>6157e53b-5ff5-4b55-b71a-125301c5268a</uuid>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <name>instance-0000008a</name>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1481975263</nova:name>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:13:55</nova:creationTime>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:port uuid="c7c65059-3fc6-4a84-b8ce-de1306c01e13">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <nova:port uuid="05bba2c3-2422-401c-831b-f9af92f47719">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe47:6a96" ipVersion="6"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <entry name="serial">6157e53b-5ff5-4b55-b71a-125301c5268a</entry>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <entry name="uuid">6157e53b-5ff5-4b55-b71a-125301c5268a</entry>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6157e53b-5ff5-4b55-b71a-125301c5268a_disk">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:16:7a:83"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <target dev="tapc7c65059-3f"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:47:6a:96"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <target dev="tap05bba2c3-24"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/console.log" append="off"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:13:56 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:13:56 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:13:56 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:13:56 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.929 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Preparing to wait for external event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Preparing to wait for external event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.931 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.931 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.931 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.932 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.932 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.932 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.933 254096 DEBUG os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.934 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.934 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.937 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7c65059-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.938 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7c65059-3f, col_values=(('external_ids', {'iface-id': 'c7c65059-3fc6-4a84-b8ce-de1306c01e13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:7a:83', 'vm-uuid': '6157e53b-5ff5-4b55-b71a-125301c5268a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 NetworkManager[48891]: <info>  [1764090836.9403] manager: (tapc7c65059-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.946 254096 INFO os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f')#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.947 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.947 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.949 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.950 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05bba2c3-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.951 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05bba2c3-24, col_values=(('external_ids', {'iface-id': '05bba2c3-2422-401c-831b-f9af92f47719', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:6a:96', 'vm-uuid': '6157e53b-5ff5-4b55-b71a-125301c5268a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 NetworkManager[48891]: <info>  [1764090836.9526] manager: (tap05bba2c3-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:56 np0005535469 nova_compute[254092]: 2025-11-25 17:13:56.959 254096 INFO os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24')#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.006 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.006 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.007 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:16:7a:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.007 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:47:6a:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.008 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Using config drive#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.032 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.222 254096 DEBUG nova.network.neutron [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updated VIF entry in instance network info cache for port 05bba2c3-2422-401c-831b-f9af92f47719. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.222 254096 DEBUG nova.network.neutron [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.239 254096 DEBUG oslo_concurrency.lockutils [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.401 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Creating config drive at /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.409 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm74k4tiv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.566 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm74k4tiv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.595 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.600 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.778 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.779 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deleting local config drive /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config because it was imported into RBD.#033[00m
Nov 25 12:13:57 np0005535469 kernel: tapc7c65059-3f: entered promiscuous mode
Nov 25 12:13:57 np0005535469 NetworkManager[48891]: <info>  [1764090837.8321] manager: (tapc7c65059-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/595)
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01436|binding|INFO|Claiming lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 for this chassis.
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01437|binding|INFO|c7c65059-3fc6-4a84-b8ce-de1306c01e13: Claiming fa:16:3e:16:7a:83 10.100.0.10
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.844 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:7a:83 10.100.0.10'], port_security=['fa:16:3e:16:7a:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7c65059-3fc6-4a84-b8ce-de1306c01e13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.845 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7c65059-3fc6-4a84-b8ce-de1306c01e13 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d bound to our chassis#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.846 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddff42cd-c011-4371-96b1-f2bb5093a16d#033[00m
Nov 25 12:13:57 np0005535469 NetworkManager[48891]: <info>  [1764090837.8492] manager: (tap05bba2c3-24): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Nov 25 12:13:57 np0005535469 systemd-udevd[406018]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:13:57 np0005535469 systemd-udevd[406017]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:13:57 np0005535469 kernel: tap05bba2c3-24: entered promiscuous mode
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01438|binding|INFO|Setting lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 ovn-installed in OVS
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01439|binding|INFO|Setting lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 up in Southbound
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.865 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1280f9f-2d55-4fc3-b334-43448abe7250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01440|if_status|INFO|Dropped 5 log messages in last 2609 seconds (most recently, 2609 seconds ago) due to excessive rate
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01441|if_status|INFO|Not updating pb chassis for 05bba2c3-2422-401c-831b-f9af92f47719 now as sb is readonly
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01442|binding|INFO|Claiming lport 05bba2c3-2422-401c-831b-f9af92f47719 for this chassis.
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01443|binding|INFO|05bba2c3-2422-401c-831b-f9af92f47719: Claiming fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96
Nov 25 12:13:57 np0005535469 NetworkManager[48891]: <info>  [1764090837.8810] device (tapc7c65059-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.878 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], port_security=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe47:6a96/64', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05bba2c3-2422-401c-831b-f9af92f47719) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:13:57 np0005535469 NetworkManager[48891]: <info>  [1764090837.8820] device (tap05bba2c3-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:13:57 np0005535469 NetworkManager[48891]: <info>  [1764090837.8831] device (tapc7c65059-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:13:57 np0005535469 NetworkManager[48891]: <info>  [1764090837.8836] device (tap05bba2c3-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01444|binding|INFO|Setting lport 05bba2c3-2422-401c-831b-f9af92f47719 ovn-installed in OVS
Nov 25 12:13:57 np0005535469 ovn_controller[153477]: 2025-11-25T17:13:57Z|01445|binding|INFO|Setting lport 05bba2c3-2422-401c-831b-f9af92f47719 up in Southbound
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:57 np0005535469 systemd-machined[216343]: New machine qemu-172-instance-0000008a.
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.904 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[680d6b72-88aa-40f0-8e9d-9ddd5cf981d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.907 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a97dd5a5-f87e-4a86-874e-657a0d513338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:57 np0005535469 systemd[1]: Started Virtual Machine qemu-172-instance-0000008a.
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.938 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[81f97c2d-822a-47a9-9e75-849552493d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.961 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[208437e2-d84b-4bec-88be-c962f4e8ba5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406030, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.979 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f95aa264-f18f-46e5-b171-63fdc2760040]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725840, 'tstamp': 725840}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406034, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725844, 'tstamp': 725844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406034, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:57 np0005535469 nova_compute[254092]: 2025-11-25 17:13:57.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddff42cd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddff42cd-c0, col_values=(('external_ids', {'iface-id': '9e2ffcd6-4edd-4351-a44c-02659b205145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.986 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05bba2c3-2422-401c-831b-f9af92f47719 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis#033[00m
Nov 25 12:13:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.987 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0063509c-db60-47db-9c49-72faf9b698d7#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.002 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f41ab9c-14d3-4540-9aad-2dd2080d45bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.034 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3600d5-743c-45f9-907e-0fd3dec9a004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.036 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[74399148-9709-4825-811f-39ff2749855c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.066 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9da8974b-4e90-49b9-8f80-2d5895a3b3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac72200-22e8-4203-aeb7-a4f56dced32d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406041, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.084 254096 DEBUG nova.compute.manager [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.085 254096 DEBUG oslo_concurrency.lockutils [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.085 254096 DEBUG oslo_concurrency.lockutils [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.086 254096 DEBUG oslo_concurrency.lockutils [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.086 254096 DEBUG nova.compute.manager [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Processing event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae352293-7314-4b80-9485-fe6ce758bd32]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0063509c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725952, 'tstamp': 725952}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406042, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.101 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.104 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0063509c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0063509c-d0, col_values=(('external_ids', {'iface-id': '76c9f96b-1150-4e5a-a9e7-680d9f908998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:13:58 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.106 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.383 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090838.3828897, 6157e53b-5ff5-4b55-b71a-125301c5268a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.384 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Started (Lifecycle Event)#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.402 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.407 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090838.3831444, 6157e53b-5ff5-4b55-b71a-125301c5268a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.420 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.425 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.444 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:13:58 np0005535469 nova_compute[254092]: 2025-11-25 17:13:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:13:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:13:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:13:59 np0005535469 podman[406086]: 2025-11-25 17:13:59.646613774 +0000 UTC m=+0.063444839 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:13:59 np0005535469 podman[406087]: 2025-11-25 17:13:59.66450403 +0000 UTC m=+0.078519378 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:13:59 np0005535469 podman[406088]: 2025-11-25 17:13:59.675627323 +0000 UTC m=+0.083923755 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.198 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.198 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.199 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.199 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.199 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No event matching network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 in dict_keys([('network-vif-plugged', '05bba2c3-2422-401c-831b-f9af92f47719')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.200 254096 WARNING nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.200 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.200 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.201 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.201 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.201 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Processing event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.202 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.202 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.202 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.203 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.203 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.203 254096 WARNING nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.204 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.209 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090840.2086995, 6157e53b-5ff5-4b55-b71a-125301c5268a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.209 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.211 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.215 254096 INFO nova.virt.libvirt.driver [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance spawned successfully.#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.216 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.236 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.243 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.248 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.248 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.249 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.250 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.250 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.250 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.274 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.307 254096 INFO nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 12.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.307 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.399 254096 INFO nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 12.94 seconds to build instance.#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.413 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:14:00 np0005535469 nova_compute[254092]: 2025-11-25 17:14:00.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 12:14:01 np0005535469 nova_compute[254092]: 2025-11-25 17:14:01.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 943 KiB/s wr, 22 op/s
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.350095) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843350122, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3438721, "memory_usage": 3485072, "flush_reason": "Manual Compaction"}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843374600, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 3372636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54561, "largest_seqno": 56619, "table_properties": {"data_size": 3363219, "index_size": 5975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18919, "raw_average_key_size": 20, "raw_value_size": 3344585, "raw_average_value_size": 3561, "num_data_blocks": 265, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090617, "oldest_key_time": 1764090617, "file_creation_time": 1764090843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 25032 microseconds, and 6847 cpu microseconds.
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.375118) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 3372636 bytes OK
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.375308) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.376867) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.376892) EVENT_LOG_v1 {"time_micros": 1764090843376884, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.376916) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3430081, prev total WAL file size 3430081, number of live WAL files 2.
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.379398) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(3293KB)], [125(8212KB)]
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843379464, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11781904, "oldest_snapshot_seqno": -1}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7829 keys, 10090277 bytes, temperature: kUnknown
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843438371, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 10090277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10039465, "index_size": 30155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 203660, "raw_average_key_size": 26, "raw_value_size": 9901109, "raw_average_value_size": 1264, "num_data_blocks": 1177, "num_entries": 7829, "num_filter_entries": 7829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.438625) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10090277 bytes
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.440231) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.8 rd, 171.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8343, records dropped: 514 output_compression: NoCompression
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.440254) EVENT_LOG_v1 {"time_micros": 1764090843440243, "job": 76, "event": "compaction_finished", "compaction_time_micros": 58980, "compaction_time_cpu_micros": 22663, "output_level": 6, "num_output_files": 1, "total_output_size": 10090277, "num_input_records": 8343, "num_output_records": 7829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843441035, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843442815, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.379270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:03 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:03 np0005535469 nova_compute[254092]: 2025-11-25 17:14:03.536 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:14:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 13 KiB/s wr, 10 op/s
Nov 25 12:14:05 np0005535469 nova_compute[254092]: 2025-11-25 17:14:05.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:05 np0005535469 nova_compute[254092]: 2025-11-25 17:14:05.937 254096 DEBUG nova.compute.manager [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:05 np0005535469 nova_compute[254092]: 2025-11-25 17:14:05.938 254096 DEBUG nova.compute.manager [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:14:05 np0005535469 nova_compute[254092]: 2025-11-25 17:14:05.938 254096 DEBUG oslo_concurrency.lockutils [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:14:05 np0005535469 nova_compute[254092]: 2025-11-25 17:14:05.939 254096 DEBUG oslo_concurrency.lockutils [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:14:05 np0005535469 nova_compute[254092]: 2025-11-25 17:14:05.939 254096 DEBUG nova.network.neutron [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:14:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:14:06 np0005535469 nova_compute[254092]: 2025-11-25 17:14:06.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:14:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.5 total, 600.0 interval#012Cumulative writes: 35K writes, 140K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.81 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3348 writes, 13K keys, 3348 commit groups, 1.0 writes per commit group, ingest: 15.40 MB, 0.03 MB/s#012Interval WAL: 3348 writes, 1330 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:14:07 np0005535469 nova_compute[254092]: 2025-11-25 17:14:07.608 254096 DEBUG nova.network.neutron [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updated VIF entry in instance network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:14:07 np0005535469 nova_compute[254092]: 2025-11-25 17:14:07.608 254096 DEBUG nova.network.neutron [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:14:07 np0005535469 nova_compute[254092]: 2025-11-25 17:14:07.626 254096 DEBUG oslo_concurrency.lockutils [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:14:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 25 12:14:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:14:10 np0005535469 nova_compute[254092]: 2025-11-25 17:14:10.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:14:11 np0005535469 nova_compute[254092]: 2025-11-25 17:14:11.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Nov 25 12:14:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:13Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:7a:83 10.100.0.10
Nov 25 12:14:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:13Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:7a:83 10.100.0.10
Nov 25 12:14:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:13.652 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:13.653 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 25 12:14:15 np0005535469 nova_compute[254092]: 2025-11-25 17:14:15.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 25 12:14:16 np0005535469 nova_compute[254092]: 2025-11-25 17:14:16.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 12:14:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:20 np0005535469 nova_compute[254092]: 2025-11-25 17:14:20.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:14:21 np0005535469 nova_compute[254092]: 2025-11-25 17:14:21.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 12:14:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 12:14:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 12:14:25 np0005535469 nova_compute[254092]: 2025-11-25 17:14:25.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.458 254096 DEBUG nova.compute.manager [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.459 254096 DEBUG nova.compute.manager [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.460 254096 DEBUG oslo_concurrency.lockutils [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.460 254096 DEBUG oslo_concurrency.lockutils [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.461 254096 DEBUG nova.network.neutron [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.691 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.692 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.692 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.693 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.693 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.694 254096 INFO nova.compute.manager [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Terminating instance#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.695 254096 DEBUG nova.compute.manager [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:14:26 np0005535469 kernel: tapc7c65059-3f (unregistering): left promiscuous mode
Nov 25 12:14:26 np0005535469 NetworkManager[48891]: <info>  [1764090866.7553] device (tapc7c65059-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:14:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:26Z|01446|binding|INFO|Releasing lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 from this chassis (sb_readonly=0)
Nov 25 12:14:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:26Z|01447|binding|INFO|Setting lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 down in Southbound
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:26Z|01448|binding|INFO|Removing iface tapc7c65059-3f ovn-installed in OVS
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.781 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:7a:83 10.100.0.10'], port_security=['fa:16:3e:16:7a:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7c65059-3fc6-4a84-b8ce-de1306c01e13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.783 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7c65059-3fc6-4a84-b8ce-de1306c01e13 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d unbound from our chassis#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.785 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddff42cd-c011-4371-96b1-f2bb5093a16d#033[00m
Nov 25 12:14:26 np0005535469 kernel: tap05bba2c3-24 (unregistering): left promiscuous mode
Nov 25 12:14:26 np0005535469 NetworkManager[48891]: <info>  [1764090866.7911] device (tap05bba2c3-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:26Z|01449|binding|INFO|Releasing lport 05bba2c3-2422-401c-831b-f9af92f47719 from this chassis (sb_readonly=0)
Nov 25 12:14:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:26Z|01450|binding|INFO|Setting lport 05bba2c3-2422-401c-831b-f9af92f47719 down in Southbound
Nov 25 12:14:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:26Z|01451|binding|INFO|Removing iface tap05bba2c3-24 ovn-installed in OVS
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c261eb8-c702-4e55-b488-4037c998cabc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.811 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], port_security=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe47:6a96/64', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05bba2c3-2422-401c-831b-f9af92f47719) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.839 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[10a7e7d9-79de-40ae-87b4-3b9bc6e3d8ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 25 12:14:26 np0005535469 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Consumed 14.082s CPU time.
Nov 25 12:14:26 np0005535469 systemd-machined[216343]: Machine qemu-172-instance-0000008a terminated.
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.847 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[85abba5a-98c1-456e-855f-ec9870ef9703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.872 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ecca69a1-72ee-40d3-a86f-e661b839fdb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.891 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[616074b6-e2fd-4c27-9aa0-37415021e572]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406166, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.911 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6f592c-9a8b-4a4a-8809-cba691e3e782]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725840, 'tstamp': 725840}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406167, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725844, 'tstamp': 725844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406167, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.913 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.924 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddff42cd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.925 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:14:26 np0005535469 NetworkManager[48891]: <info>  [1764090866.9275] manager: (tap05bba2c3-24): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.927 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddff42cd-c0, col_values=(('external_ids', {'iface-id': '9e2ffcd6-4edd-4351-a44c-02659b205145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.928 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.931 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05bba2c3-2422-401c-831b-f9af92f47719 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.933 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0063509c-db60-47db-9c49-72faf9b698d7#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.945 254096 INFO nova.virt.libvirt.driver [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance destroyed successfully.#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.945 254096 DEBUG nova.objects.instance [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 6157e53b-5ff5-4b55-b71a-125301c5268a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.958 254096 DEBUG nova.virt.libvirt.vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:14:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:14:00Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.958 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.959 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06c80d16-2efd-45b9-b332-d635739500e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.959 254096 DEBUG os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.961 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7c65059-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.974 254096 INFO os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f')#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.975 254096 DEBUG nova.virt.libvirt.vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:14:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:14:00Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.975 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.976 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.976 254096 DEBUG os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.978 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05bba2c3-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:26 np0005535469 nova_compute[254092]: 2025-11-25 17:14:26.982 254096 INFO os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24')#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.989 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6d784d-01c3-45b0-9615-00bbda31f239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.992 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a0704cc1-36c7-4a1f-ae85-87044f72a0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.023 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d5922ccf-2d8e-479e-bb01-bf135308ea74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.040 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[225d26c6-9092-4ed1-9394-59453fc69198]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406214, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.057 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5108a6e0-c50a-448d-81e8-850de92fdc0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0063509c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725952, 'tstamp': 725952}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406215, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.059 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.062 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0063509c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.062 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.063 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0063509c-d0, col_values=(('external_ids', {'iface-id': '76c9f96b-1150-4e5a-a9e7-680d9f908998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.063 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.367 254096 INFO nova.virt.libvirt.driver [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deleting instance files /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a_del#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.368 254096 INFO nova.virt.libvirt.driver [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deletion of /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a_del complete#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.415 254096 INFO nova.compute.manager [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.415 254096 DEBUG oslo.service.loopingcall [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.416 254096 DEBUG nova.compute.manager [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.416 254096 DEBUG nova.network.neutron [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.917 254096 DEBUG nova.network.neutron [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updated VIF entry in instance network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.918 254096 DEBUG nova.network.neutron [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:14:27 np0005535469 nova_compute[254092]: 2025-11-25 17:14:27.937 254096 DEBUG oslo_concurrency.lockutils [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.182 254096 DEBUG nova.compute.manager [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-deleted-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.183 254096 INFO nova.compute.manager [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Neutron deleted interface c7c65059-3fc6-4a84-b8ce-de1306c01e13; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.183 254096 DEBUG nova.network.neutron [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.210 254096 DEBUG nova.compute.manager [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Detach interface failed, port_id=c7c65059-3fc6-4a84-b8ce-de1306c01e13, reason: Instance 6157e53b-5ff5-4b55-b71a-125301c5268a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.482 254096 DEBUG nova.network.neutron [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.498 254096 INFO nova.compute.manager [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 1.08 seconds to deallocate network for instance.#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.537 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.538 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.565 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-unplugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.565 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.566 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.566 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.566 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-unplugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-unplugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-unplugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-unplugged-05bba2c3-2422-401c-831b-f9af92f47719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-unplugged-05bba2c3-2422-401c-831b-f9af92f47719 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.571 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:14:28 np0005535469 nova_compute[254092]: 2025-11-25 17:14:28.618 254096 DEBUG oslo_concurrency.processutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:14:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 25 12:14:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:14:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426995146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:14:29 np0005535469 nova_compute[254092]: 2025-11-25 17:14:29.189 254096 DEBUG oslo_concurrency.processutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:14:29 np0005535469 nova_compute[254092]: 2025-11-25 17:14:29.195 254096 DEBUG nova.compute.provider_tree [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:14:29 np0005535469 nova_compute[254092]: 2025-11-25 17:14:29.212 254096 DEBUG nova.scheduler.client.report [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:14:29 np0005535469 nova_compute[254092]: 2025-11-25 17:14:29.229 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:29 np0005535469 nova_compute[254092]: 2025-11-25 17:14:29.251 254096 INFO nova.scheduler.client.report [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 6157e53b-5ff5-4b55-b71a-125301c5268a#033[00m
Nov 25 12:14:29 np0005535469 nova_compute[254092]: 2025-11-25 17:14:29.308 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.274 254096 DEBUG nova.compute.manager [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-deleted-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG nova.compute.manager [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG nova.compute.manager [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG oslo_concurrency.lockutils [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG oslo_concurrency.lockutils [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.276 254096 DEBUG nova.network.neutron [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.332 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.332 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.333 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.333 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.333 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.334 254096 INFO nova.compute.manager [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Terminating instance#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.336 254096 DEBUG nova.compute.manager [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:14:30 np0005535469 kernel: tapbf0e0412-08 (unregistering): left promiscuous mode
Nov 25 12:14:30 np0005535469 NetworkManager[48891]: <info>  [1764090870.3908] device (tapbf0e0412-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:30Z|01452|binding|INFO|Releasing lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 from this chassis (sb_readonly=0)
Nov 25 12:14:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:30Z|01453|binding|INFO|Setting lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 down in Southbound
Nov 25 12:14:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:30Z|01454|binding|INFO|Removing iface tapbf0e0412-08 ovn-installed in OVS
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.413 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:43:89 10.100.0.8'], port_security=['fa:16:3e:3b:43:89 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bf0e0412-082f-4b0e-aabe-4e4f0de25b43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.414 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d unbound from our chassis#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.415 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddff42cd-c011-4371-96b1-f2bb5093a16d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4bf81b-1213-4bf6-a311-5494fd352113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.417 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d namespace which is not needed anymore#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 kernel: tap831eaa83-55 (unregistering): left promiscuous mode
Nov 25 12:14:30 np0005535469 NetworkManager[48891]: <info>  [1764090870.4327] device (tap831eaa83-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:30Z|01455|binding|INFO|Releasing lport 831eaa83-55bd-4098-9037-4b628eb8d994 from this chassis (sb_readonly=0)
Nov 25 12:14:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:30Z|01456|binding|INFO|Setting lport 831eaa83-55bd-4098-9037-4b628eb8d994 down in Southbound
Nov 25 12:14:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:14:30Z|01457|binding|INFO|Removing iface tap831eaa83-55 ovn-installed in OVS
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.459 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], port_security=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb0:cda4/64', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=831eaa83-55bd-4098-9037-4b628eb8d994) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 25 12:14:30 np0005535469 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 17.348s CPU time.
Nov 25 12:14:30 np0005535469 systemd-machined[216343]: Machine qemu-171-instance-00000089 terminated.
Nov 25 12:14:30 np0005535469 podman[406242]: 2025-11-25 17:14:30.510608804 +0000 UTC m=+0.071715253 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 12:14:30 np0005535469 podman[406241]: 2025-11-25 17:14:30.545079222 +0000 UTC m=+0.113646754 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:14:30 np0005535469 podman[406243]: 2025-11-25 17:14:30.548854336 +0000 UTC m=+0.110319115 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : haproxy version is 2.8.14-c23fe91
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : path to executable is /usr/sbin/haproxy
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [WARNING]  (404572) : Exiting Master process...
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [WARNING]  (404572) : Exiting Master process...
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [ALERT]    (404572) : Current worker (404574) exited with code 143 (Terminated)
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [WARNING]  (404572) : All workers exited. Exiting... (0)
Nov 25 12:14:30 np0005535469 systemd[1]: libpod-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587.scope: Deactivated successfully.
Nov 25 12:14:30 np0005535469 NetworkManager[48891]: <info>  [1764090870.5708] manager: (tap831eaa83-55): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Nov 25 12:14:30 np0005535469 podman[406316]: 2025-11-25 17:14:30.574868284 +0000 UTC m=+0.051560146 container died a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.602 254096 INFO nova.virt.libvirt.driver [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance destroyed successfully.#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.603 254096 DEBUG nova.objects.instance [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:14:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587-userdata-shm.mount: Deactivated successfully.
Nov 25 12:14:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-51787ffbad5844d0d6317c349bae98d9600fb763a7a78dde87ceae6df6f2b06d-merged.mount: Deactivated successfully.
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.614 254096 DEBUG nova.virt.libvirt.vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:13:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:13:22Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.614 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.615 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.615 254096 DEBUG os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.618 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0e0412-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.628 254096 INFO os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08')#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.629 254096 DEBUG nova.virt.libvirt.vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:13:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:13:22Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.629 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.630 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.630 254096 DEBUG os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:14:30 np0005535469 podman[406316]: 2025-11-25 17:14:30.630736254 +0000 UTC m=+0.107428116 container cleanup a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap831eaa83-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.636 254096 INFO os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55')#033[00m
Nov 25 12:14:30 np0005535469 systemd[1]: libpod-conmon-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587.scope: Deactivated successfully.
Nov 25 12:14:30 np0005535469 podman[406375]: 2025-11-25 17:14:30.703058643 +0000 UTC m=+0.047642558 container remove a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37ee0cd9-3a10-43a6-900a-b1a7d783b1cf]: (4, ('Tue Nov 25 05:14:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d (a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587)\na88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587\nTue Nov 25 05:14:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d (a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587)\na88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.710 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5293cfad-93b8-496e-beca-2e7f313d7fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.711 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.719 254096 DEBUG nova.compute.manager [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG oslo_concurrency.lockutils [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG oslo_concurrency.lockutils [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG oslo_concurrency.lockutils [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG nova.compute.manager [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-unplugged-831eaa83-55bd-4098-9037-4b628eb8d994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG nova.compute.manager [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-831eaa83-55bd-4098-9037-4b628eb8d994 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 kernel: tapddff42cd-c0: left promiscuous mode
Nov 25 12:14:30 np0005535469 nova_compute[254092]: 2025-11-25 17:14:30.768 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.771 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d144be8-b37f-4793-9c94-1f053619e146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.788 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e1064013-5c0c-4ecb-b90a-522ef44608d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4553f39e-116a-4eee-b78c-bfcd6e8d801a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e03a14-e9cf-475d-9071-8a4ec60b9e6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725816, 'reachable_time': 37407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406408, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 152 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 17 op/s
Nov 25 12:14:30 np0005535469 systemd[1]: run-netns-ovnmeta\x2dddff42cd\x2dc011\x2d4371\x2d96b1\x2df2bb5093a16d.mount: Deactivated successfully.
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.811 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.812 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a1d07d-458b-4133-ad94-4b5d13db18b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.813 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 831eaa83-55bd-4098-9037-4b628eb8d994 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.814 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0063509c-db60-47db-9c49-72faf9b698d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.815 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f92c48-e052-4133-b4fd-2943d59c05bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.816 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 namespace which is not needed anymore#033[00m
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : haproxy version is 2.8.14-c23fe91
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : path to executable is /usr/sbin/haproxy
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [WARNING]  (404688) : Exiting Master process...
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [ALERT]    (404688) : Current worker (404690) exited with code 143 (Terminated)
Nov 25 12:14:30 np0005535469 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [WARNING]  (404688) : All workers exited. Exiting... (0)
Nov 25 12:14:30 np0005535469 systemd[1]: libpod-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0.scope: Deactivated successfully.
Nov 25 12:14:30 np0005535469 podman[406425]: 2025-11-25 17:14:30.965843655 +0000 UTC m=+0.047089682 container died 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:14:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0-userdata-shm.mount: Deactivated successfully.
Nov 25 12:14:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0c4e541847c2aac7dd59889538a8dcfac905bc7101e6c3c35a058f1755af39f3-merged.mount: Deactivated successfully.
Nov 25 12:14:31 np0005535469 podman[406425]: 2025-11-25 17:14:31.001376132 +0000 UTC m=+0.082622159 container cleanup 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:14:31 np0005535469 systemd[1]: libpod-conmon-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0.scope: Deactivated successfully.
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.047 254096 INFO nova.virt.libvirt.driver [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deleting instance files /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8_del#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.048 254096 INFO nova.virt.libvirt.driver [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deletion of /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8_del complete#033[00m
Nov 25 12:14:31 np0005535469 podman[406456]: 2025-11-25 17:14:31.080175057 +0000 UTC m=+0.049222970 container remove 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.091 254096 INFO nova.compute.manager [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.092 254096 DEBUG oslo.service.loopingcall [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.092 254096 DEBUG nova.compute.manager [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.092 254096 DEBUG nova.network.neutron [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.093 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[08d3221b-77c3-46b6-a244-5a5488984789]: (4, ('Tue Nov 25 05:14:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 (4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0)\n4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0\nTue Nov 25 05:14:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 (4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0)\n4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.094 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb860e5-c64b-4b07-a6da-ecf241b8d29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:14:31 np0005535469 kernel: tap0063509c-d0: left promiscuous mode
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38d1796a-1168-4586-a791-d6f1e4454a0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f58f6e4-6c10-4d7c-897d-37603e8d928b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c05501dd-9f61-459c-95f4-2746581bd5f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7acc676a-85c4-43bc-b410-a74c6c6bbeef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725925, 'reachable_time': 32404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406472, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.140 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:14:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.141 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[413d65e5-7724-4b9e-b0fa-6c60de1263f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:14:31 np0005535469 systemd[1]: run-netns-ovnmeta\x2d0063509c\x2ddb60\x2d47db\x2d9c49\x2d72faf9b698d7.mount: Deactivated successfully.
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.740 254096 DEBUG nova.network.neutron [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated VIF entry in instance network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.741 254096 DEBUG nova.network.neutron [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:14:31 np0005535469 nova_compute[254092]: 2025-11-25 17:14:31.763 254096 DEBUG oslo_concurrency.lockutils [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.331 254096 DEBUG nova.network.neutron [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.356 254096 INFO nova.compute.manager [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 1.26 seconds to deallocate network for instance.#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.368 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-unplugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.371 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.371 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.371 254096 WARNING nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.372 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-deleted-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.398 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.399 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.446 254096 DEBUG oslo_concurrency.processutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:14:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 82 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 39 op/s
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.842 254096 DEBUG nova.compute.manager [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.842 254096 DEBUG oslo_concurrency.lockutils [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 DEBUG oslo_concurrency.lockutils [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 DEBUG oslo_concurrency.lockutils [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 DEBUG nova.compute.manager [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 WARNING nova.compute.manager [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:14:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:14:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3875746165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.921 254096 DEBUG oslo_concurrency.processutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.931 254096 DEBUG nova.compute.provider_tree [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.953 254096 DEBUG nova.scheduler.client.report [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:14:32 np0005535469 nova_compute[254092]: 2025-11-25 17:14:32.981 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:33 np0005535469 nova_compute[254092]: 2025-11-25 17:14:33.020 254096 INFO nova.scheduler.client.report [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 3b797ee6-c82f-4c01-bb54-31de659fcad8#033[00m
Nov 25 12:14:33 np0005535469 nova_compute[254092]: 2025-11-25 17:14:33.095 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.070176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874070240, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 482, "num_deletes": 257, "total_data_size": 436337, "memory_usage": 446840, "flush_reason": "Manual Compaction"}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874076050, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 432605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56620, "largest_seqno": 57101, "table_properties": {"data_size": 429858, "index_size": 782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6259, "raw_average_key_size": 18, "raw_value_size": 424459, "raw_average_value_size": 1226, "num_data_blocks": 35, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090844, "oldest_key_time": 1764090844, "file_creation_time": 1764090874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 5963 microseconds, and 3329 cpu microseconds.
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.076131) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 432605 bytes OK
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.076169) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.077759) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.077788) EVENT_LOG_v1 {"time_micros": 1764090874077778, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.077817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 433474, prev total WAL file size 433474, number of live WAL files 2.
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.078542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323632' seq:72057594037927935, type:22 .. '6C6F676D0032353135' seq:0, type:0; will stop at (end)
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(422KB)], [128(9853KB)]
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874078689, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10522882, "oldest_snapshot_seqno": -1}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7653 keys, 10421070 bytes, temperature: kUnknown
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874123179, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10421070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10370421, "index_size": 30440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 200925, "raw_average_key_size": 26, "raw_value_size": 10234063, "raw_average_value_size": 1337, "num_data_blocks": 1188, "num_entries": 7653, "num_filter_entries": 7653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.123555) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10421070 bytes
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.124706) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.9 rd, 233.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(48.4) write-amplify(24.1) OK, records in: 8175, records dropped: 522 output_compression: NoCompression
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.124728) EVENT_LOG_v1 {"time_micros": 1764090874124717, "job": 78, "event": "compaction_finished", "compaction_time_micros": 44605, "compaction_time_cpu_micros": 23694, "output_level": 6, "num_output_files": 1, "total_output_size": 10421070, "num_input_records": 8175, "num_output_records": 7653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874124982, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874127310, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.078362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:34 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:14:34 np0005535469 nova_compute[254092]: 2025-11-25 17:14:34.459 254096 DEBUG nova.compute.manager [req-b55de48c-d6d1-4152-99c5-dfd054e384f8 req-a9de94ad-fb7b-4eb3-ad48-c3173c12267a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-deleted-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:14:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 82 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 39 op/s
Nov 25 12:14:35 np0005535469 nova_compute[254092]: 2025-11-25 17:14:35.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:35 np0005535469 nova_compute[254092]: 2025-11-25 17:14:35.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 12:14:36 np0005535469 nova_compute[254092]: 2025-11-25 17:14:36.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:37 np0005535469 nova_compute[254092]: 2025-11-25 17:14:37.078 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:38 np0005535469 nova_compute[254092]: 2025-11-25 17:14:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:14:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 12:14:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:14:40
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'backups']
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 12:14:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:14:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975670560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:14:40 np0005535469 nova_compute[254092]: 2025-11-25 17:14:40.951 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.096 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.097 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3701MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.097 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.097 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.149 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.149 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.175 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:14:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:14:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273956299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.622 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.628 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.640 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.662 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.662 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.943 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090866.940885, 6157e53b-5ff5-4b55-b71a-125301c5268a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.944 254096 INFO nova.compute.manager [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Stopped (Lifecycle Event)
Nov 25 12:14:41 np0005535469 nova_compute[254092]: 2025-11-25 17:14:41.963 254096 DEBUG nova.compute.manager [None req-05c568a2-6ab3-4e48-93e8-1470986d2b06 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:14:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 40 op/s
Nov 25 12:14:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:44 np0005535469 nova_compute[254092]: 2025-11-25 17:14:44.658 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:44 np0005535469 nova_compute[254092]: 2025-11-25 17:14:44.658 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:44 np0005535469 nova_compute[254092]: 2025-11-25 17:14:44.658 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 12:14:45 np0005535469 nova_compute[254092]: 2025-11-25 17:14:45.591 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090870.589668, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:14:45 np0005535469 nova_compute[254092]: 2025-11-25 17:14:45.592 254096 INFO nova.compute.manager [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Stopped (Lifecycle Event)
Nov 25 12:14:45 np0005535469 nova_compute[254092]: 2025-11-25 17:14:45.611 254096 DEBUG nova.compute.manager [None req-533ca3a7-0c19-45ef-ab7e-4ca2f7a13658 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:14:45 np0005535469 nova_compute[254092]: 2025-11-25 17:14:45.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:45 np0005535469 nova_compute[254092]: 2025-11-25 17:14:45.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 12:14:47 np0005535469 nova_compute[254092]: 2025-11-25 17:14:47.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:47 np0005535469 nova_compute[254092]: 2025-11-25 17:14:47.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:47 np0005535469 nova_compute[254092]: 2025-11-25 17:14:47.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 12:14:47 np0005535469 nova_compute[254092]: 2025-11-25 17:14:47.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 12:14:47 np0005535469 nova_compute[254092]: 2025-11-25 17:14:47.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 12:14:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:14:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:50 np0005535469 nova_compute[254092]: 2025-11-25 17:14:50.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:50.480 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:14:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:50.481 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 12:14:50 np0005535469 nova_compute[254092]: 2025-11-25 17:14:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:14:50 np0005535469 nova_compute[254092]: 2025-11-25 17:14:50.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:14:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 33e4e4ec-0e5c-460c-a4be-d824ab728a46 does not exist
Nov 25 12:14:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 67617a35-083c-4d28-a854-7469971d7d3d does not exist
Nov 25 12:14:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2726a8f7-c13d-41d8-bcc8-05d6017b9f14 does not exist
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:14:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:14:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:14:50 np0005535469 nova_compute[254092]: 2025-11-25 17:14:50.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:14:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:14:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:14:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.311248012 +0000 UTC m=+0.041894501 container create 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:14:51 np0005535469 systemd[1]: Started libpod-conmon-1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0.scope.
Nov 25 12:14:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.379014727 +0000 UTC m=+0.109661186 container init 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.386275035 +0000 UTC m=+0.116921494 container start 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.290861248 +0000 UTC m=+0.021507707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.389853632 +0000 UTC m=+0.120500091 container attach 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:14:51 np0005535469 objective_hypatia[406830]: 167 167
Nov 25 12:14:51 np0005535469 systemd[1]: libpod-1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0.scope: Deactivated successfully.
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.394203351 +0000 UTC m=+0.124849850 container died 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:14:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2c04b175fe8b5489cd072a981c0f8520302e335def70614668df541bcc14a64a-merged.mount: Deactivated successfully.
Nov 25 12:14:51 np0005535469 podman[406813]: 2025-11-25 17:14:51.435838504 +0000 UTC m=+0.166484963 container remove 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:14:51 np0005535469 systemd[1]: libpod-conmon-1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0.scope: Deactivated successfully.
Nov 25 12:14:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:14:51.482 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:14:51 np0005535469 podman[406854]: 2025-11-25 17:14:51.601690348 +0000 UTC m=+0.045859220 container create f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:14:51 np0005535469 systemd[1]: Started libpod-conmon-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope.
Nov 25 12:14:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:14:51 np0005535469 podman[406854]: 2025-11-25 17:14:51.582438324 +0000 UTC m=+0.026607216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:14:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:14:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:14:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:14:51 np0005535469 podman[406854]: 2025-11-25 17:14:51.69839936 +0000 UTC m=+0.142568232 container init f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:14:51 np0005535469 podman[406854]: 2025-11-25 17:14:51.70721538 +0000 UTC m=+0.151384252 container start f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 12:14:51 np0005535469 podman[406854]: 2025-11-25 17:14:51.710234242 +0000 UTC m=+0.154403114 container attach f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:14:52 np0005535469 wonderful_ramanujan[406871]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:14:52 np0005535469 wonderful_ramanujan[406871]: --> relative data size: 1.0
Nov 25 12:14:52 np0005535469 wonderful_ramanujan[406871]: --> All data devices are unavailable
Nov 25 12:14:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:14:52 np0005535469 systemd[1]: libpod-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope: Deactivated successfully.
Nov 25 12:14:52 np0005535469 podman[406854]: 2025-11-25 17:14:52.864617004 +0000 UTC m=+1.308785876 container died f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:14:52 np0005535469 systemd[1]: libpod-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope: Consumed 1.083s CPU time.
Nov 25 12:14:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4-merged.mount: Deactivated successfully.
Nov 25 12:14:52 np0005535469 podman[406854]: 2025-11-25 17:14:52.919251501 +0000 UTC m=+1.363420383 container remove f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:14:52 np0005535469 systemd[1]: libpod-conmon-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope: Deactivated successfully.
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.603874975 +0000 UTC m=+0.042344313 container create c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:14:53 np0005535469 systemd[1]: Started libpod-conmon-c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591.scope.
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.585412413 +0000 UTC m=+0.023881741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:14:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.72123852 +0000 UTC m=+0.159707828 container init c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.733016851 +0000 UTC m=+0.171486169 container start c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.736949148 +0000 UTC m=+0.175418466 container attach c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:14:53 np0005535469 epic_buck[407069]: 167 167
Nov 25 12:14:53 np0005535469 systemd[1]: libpod-c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591.scope: Deactivated successfully.
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.741101291 +0000 UTC m=+0.179570579 container died c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:14:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-32d15216be5815392119942e440544766e7ceb3456746b6342d09b52696b7639-merged.mount: Deactivated successfully.
Nov 25 12:14:53 np0005535469 podman[407053]: 2025-11-25 17:14:53.778371516 +0000 UTC m=+0.216840804 container remove c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:14:53 np0005535469 systemd[1]: libpod-conmon-c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591.scope: Deactivated successfully.
Nov 25 12:14:54 np0005535469 podman[407093]: 2025-11-25 17:14:54.019025736 +0000 UTC m=+0.062564434 container create b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:14:54 np0005535469 systemd[1]: Started libpod-conmon-b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4.scope.
Nov 25 12:14:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:14:54 np0005535469 podman[407093]: 2025-11-25 17:14:53.994590541 +0000 UTC m=+0.038129229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:14:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:14:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:54 np0005535469 podman[407093]: 2025-11-25 17:14:54.115733638 +0000 UTC m=+0.159272336 container init b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:14:54 np0005535469 podman[407093]: 2025-11-25 17:14:54.126122451 +0000 UTC m=+0.169661109 container start b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:14:54 np0005535469 podman[407093]: 2025-11-25 17:14:54.129739749 +0000 UTC m=+0.173278437 container attach b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:14:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:14:54 np0005535469 serene_kilby[407109]: {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:    "0": [
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:        {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "devices": [
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "/dev/loop3"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            ],
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_name": "ceph_lv0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_size": "21470642176",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "name": "ceph_lv0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "tags": {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cluster_name": "ceph",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.crush_device_class": "",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.encrypted": "0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osd_id": "0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.type": "block",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.vdo": "0"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            },
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "type": "block",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "vg_name": "ceph_vg0"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:        }
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:    ],
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:    "1": [
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:        {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "devices": [
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "/dev/loop4"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            ],
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_name": "ceph_lv1",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_size": "21470642176",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "name": "ceph_lv1",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "tags": {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cluster_name": "ceph",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.crush_device_class": "",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.encrypted": "0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osd_id": "1",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.type": "block",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.vdo": "0"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            },
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "type": "block",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "vg_name": "ceph_vg1"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:        }
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:    ],
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:    "2": [
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:        {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "devices": [
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "/dev/loop5"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            ],
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_name": "ceph_lv2",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_size": "21470642176",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "name": "ceph_lv2",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "tags": {
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.cluster_name": "ceph",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.crush_device_class": "",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.encrypted": "0",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osd_id": "2",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.type": "block",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:                "ceph.vdo": "0"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            },
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "type": "block",
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:            "vg_name": "ceph_vg2"
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:        }
Nov 25 12:14:54 np0005535469 serene_kilby[407109]:    ]
Nov 25 12:14:54 np0005535469 serene_kilby[407109]: }
Nov 25 12:14:54 np0005535469 systemd[1]: libpod-b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4.scope: Deactivated successfully.
Nov 25 12:14:54 np0005535469 podman[407093]: 2025-11-25 17:14:54.934845754 +0000 UTC m=+0.978384442 container died b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:14:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a-merged.mount: Deactivated successfully.
Nov 25 12:14:55 np0005535469 podman[407093]: 2025-11-25 17:14:54.999698819 +0000 UTC m=+1.043237487 container remove b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:14:55 np0005535469 systemd[1]: libpod-conmon-b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4.scope: Deactivated successfully.
Nov 25 12:14:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:14:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2422693041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:14:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:14:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2422693041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:14:55 np0005535469 nova_compute[254092]: 2025-11-25 17:14:55.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.670520988 +0000 UTC m=+0.049525479 container create 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 12:14:55 np0005535469 systemd[1]: Started libpod-conmon-387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25.scope.
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.649987079 +0000 UTC m=+0.028991530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:14:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.784599383 +0000 UTC m=+0.163603834 container init 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.791988845 +0000 UTC m=+0.170993296 container start 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.797461703 +0000 UTC m=+0.176466234 container attach 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:14:55 np0005535469 kind_proskuriakova[407286]: 167 167
Nov 25 12:14:55 np0005535469 systemd[1]: libpod-387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25.scope: Deactivated successfully.
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.798585964 +0000 UTC m=+0.177590415 container died 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:14:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1f2aaa79f28ed069b84a7d07ec9aa3a9955761f20ce788598614997203e54045-merged.mount: Deactivated successfully.
Nov 25 12:14:55 np0005535469 podman[407270]: 2025-11-25 17:14:55.837287468 +0000 UTC m=+0.216291959 container remove 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:14:55 np0005535469 nova_compute[254092]: 2025-11-25 17:14:55.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:14:55 np0005535469 systemd[1]: libpod-conmon-387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25.scope: Deactivated successfully.
Nov 25 12:14:56 np0005535469 podman[407310]: 2025-11-25 17:14:56.013831013 +0000 UTC m=+0.041837040 container create e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:14:56 np0005535469 systemd[1]: Started libpod-conmon-e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9.scope.
Nov 25 12:14:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:14:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:14:56 np0005535469 podman[407310]: 2025-11-25 17:14:55.994871197 +0000 UTC m=+0.022877244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:14:56 np0005535469 podman[407310]: 2025-11-25 17:14:56.098841747 +0000 UTC m=+0.126847794 container init e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:14:56 np0005535469 podman[407310]: 2025-11-25 17:14:56.105143028 +0000 UTC m=+0.133149055 container start e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:14:56 np0005535469 podman[407310]: 2025-11-25 17:14:56.10814395 +0000 UTC m=+0.136149977 container attach e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:14:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]: {
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "osd_id": 1,
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "type": "bluestore"
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:    },
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "osd_id": 2,
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "type": "bluestore"
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:    },
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "osd_id": 0,
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:        "type": "bluestore"
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]:    }
Nov 25 12:14:57 np0005535469 youthful_galileo[407327]: }
Nov 25 12:14:57 np0005535469 systemd[1]: libpod-e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9.scope: Deactivated successfully.
Nov 25 12:14:57 np0005535469 podman[407310]: 2025-11-25 17:14:57.056858783 +0000 UTC m=+1.084864820 container died e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:14:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186-merged.mount: Deactivated successfully.
Nov 25 12:14:57 np0005535469 podman[407310]: 2025-11-25 17:14:57.119383835 +0000 UTC m=+1.147389862 container remove e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:14:57 np0005535469 systemd[1]: libpod-conmon-e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9.scope: Deactivated successfully.
Nov 25 12:14:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:14:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:14:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:14:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:14:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b7fae12d-ee77-4666-89e3-1883c03d7f3a does not exist
Nov 25 12:14:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3eb96b36-b7f9-4955-bec8-04e0a67b9ed8 does not exist
Nov 25 12:14:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:14:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:14:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:14:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.259 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:72:a6 2001:db8:0:1:f816:3eff:fe7b:72a6 2001:db8::f816:3eff:fe7b:72a6'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7b:72a6/64 2001:db8::f816:3eff:fe7b:72a6/64', 'neutron:device_id': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fff6df53-4de9-409a-abf7-032bad835b32) old=Port_Binding(mac=['fa:16:3e:7b:72:a6 2001:db8::f816:3eff:fe7b:72a6'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7b:72a6/64', 'neutron:device_id': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:15:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.261 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fff6df53-4de9-409a-abf7-032bad835b32 in datapath c57073ad-8c41-459b-9402-c367011860c7 updated#033[00m
Nov 25 12:15:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.262 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c57073ad-8c41-459b-9402-c367011860c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:15:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.264 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efa761a4-2573-425f-aa3f-45a7a5cde85d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:00 np0005535469 podman[407424]: 2025-11-25 17:15:00.640392834 +0000 UTC m=+0.056761605 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:15:00 np0005535469 podman[407423]: 2025-11-25 17:15:00.642337138 +0000 UTC m=+0.059240814 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:15:00 np0005535469 nova_compute[254092]: 2025-11-25 17:15:00.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:00 np0005535469 podman[407425]: 2025-11-25 17:15:00.674442551 +0000 UTC m=+0.086163946 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 12:15:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:15:00 np0005535469 nova_compute[254092]: 2025-11-25 17:15:00.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:15:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:15:05 np0005535469 nova_compute[254092]: 2025-11-25 17:15:05.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:05 np0005535469 nova_compute[254092]: 2025-11-25 17:15:05.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.795 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.795 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.808 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:15:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.879 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.879 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.888 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.889 254096 INFO nova.compute.claims [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:15:06 np0005535469 nova_compute[254092]: 2025-11-25 17:15:06.984 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:15:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262071285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.413 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.422 254096 DEBUG nova.compute.provider_tree [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.439 254096 DEBUG nova.scheduler.client.report [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.461 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.462 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.516 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.517 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.535 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.565 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.677 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.678 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.678 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Creating image(s)#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.700 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.724 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.747 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.752 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.789 254096 DEBUG nova.policy [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.828 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.829 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.830 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.830 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.855 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:07 np0005535469 nova_compute[254092]: 2025-11-25 17:15:07.859 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cab3a333-1f68-435b-b6cb-a508755c2565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.183 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cab3a333-1f68-435b-b6cb-a508755c2565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.268 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.379 254096 DEBUG nova.objects.instance [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.392 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.392 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Ensure instance console log exists: /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.393 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.393 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.393 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:08 np0005535469 nova_compute[254092]: 2025-11-25 17:15:08.408 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully created port: bfd7bca3-f01a-4857-8c51-1085cde3ad00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:15:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:15:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:09 np0005535469 nova_compute[254092]: 2025-11-25 17:15:09.229 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully created port: cd75615e-b80b-4685-b424-2c54f7fdbde8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.148 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully updated port: bfd7bca3-f01a-4857-8c51-1085cde3ad00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.258 254096 DEBUG nova.compute.manager [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.259 254096 DEBUG nova.compute.manager [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.259 254096 DEBUG oslo_concurrency.lockutils [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.259 254096 DEBUG oslo_concurrency.lockutils [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.260 254096 DEBUG nova.network.neutron [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.407 254096 DEBUG nova.network.neutron [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.838 254096 DEBUG nova.network.neutron [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 75 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.854 254096 DEBUG oslo_concurrency.lockutils [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:10 np0005535469 nova_compute[254092]: 2025-11-25 17:15:10.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:11 np0005535469 nova_compute[254092]: 2025-11-25 17:15:11.164 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully updated port: cd75615e-b80b-4685-b424-2c54f7fdbde8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:15:11 np0005535469 nova_compute[254092]: 2025-11-25 17:15:11.189 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:11 np0005535469 nova_compute[254092]: 2025-11-25 17:15:11.189 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:11 np0005535469 nova_compute[254092]: 2025-11-25 17:15:11.190 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:15:11 np0005535469 nova_compute[254092]: 2025-11-25 17:15:11.397 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:15:12 np0005535469 nova_compute[254092]: 2025-11-25 17:15:12.348 254096 DEBUG nova.compute.manager [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:12 np0005535469 nova_compute[254092]: 2025-11-25 17:15:12.348 254096 DEBUG nova.compute.manager [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-cd75615e-b80b-4685-b424-2c54f7fdbde8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:15:12 np0005535469 nova_compute[254092]: 2025-11-25 17:15:12.348 254096 DEBUG oslo_concurrency.lockutils [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:13.653 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:13.655 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.426 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.440 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.441 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance network_info: |[{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.441 254096 DEBUG oslo_concurrency.lockutils [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.442 254096 DEBUG nova.network.neutron [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port cd75615e-b80b-4685-b424-2c54f7fdbde8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.445 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start _get_guest_xml network_info=[{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.451 254096 WARNING nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.454 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.455 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.458 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.458 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.459 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.459 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.459 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.462 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.464 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:15:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717098985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.895 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.917 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:14 np0005535469 nova_compute[254092]: 2025-11-25 17:15:14.921 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:15:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991302879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.371 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.375 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.376 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.378 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.380 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.381 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.382 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.384 254096 DEBUG nova.objects.instance [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.399 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <uuid>cab3a333-1f68-435b-b6cb-a508755c2565</uuid>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <name>instance-0000008b</name>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1331536480</nova:name>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:15:14</nova:creationTime>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:port uuid="bfd7bca3-f01a-4857-8c51-1085cde3ad00">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <nova:port uuid="cd75615e-b80b-4685-b424-2c54f7fdbde8">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8f:6d38" ipVersion="6"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe8f:6d38" ipVersion="6"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <entry name="serial">cab3a333-1f68-435b-b6cb-a508755c2565</entry>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <entry name="uuid">cab3a333-1f68-435b-b6cb-a508755c2565</entry>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/cab3a333-1f68-435b-b6cb-a508755c2565_disk">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/cab3a333-1f68-435b-b6cb-a508755c2565_disk.config">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f9:85:52"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <target dev="tapbfd7bca3-f0"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:8f:6d:38"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <target dev="tapcd75615e-b8"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/console.log" append="off"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:15:15 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:15:15 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:15:15 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:15:15 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.401 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Preparing to wait for external event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.402 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.403 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.403 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.404 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Preparing to wait for external event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.404 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.404 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.405 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.406 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.406 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.408 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.409 254096 DEBUG os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.411 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.412 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.418 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfd7bca3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.418 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfd7bca3-f0, col_values=(('external_ids', {'iface-id': 'bfd7bca3-f01a-4857-8c51-1085cde3ad00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:85:52', 'vm-uuid': 'cab3a333-1f68-435b-b6cb-a508755c2565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 NetworkManager[48891]: <info>  [1764090915.4221] manager: (tapbfd7bca3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.424 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.429 254096 INFO os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0')#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.430 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.430 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.431 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.432 254096 DEBUG os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.436 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd75615e-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.436 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd75615e-b8, col_values=(('external_ids', {'iface-id': 'cd75615e-b80b-4685-b424-2c54f7fdbde8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:6d:38', 'vm-uuid': 'cab3a333-1f68-435b-b6cb-a508755c2565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:15 np0005535469 NetworkManager[48891]: <info>  [1764090915.4391] manager: (tapcd75615e-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.445 254096 INFO os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8')#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.494 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.494 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.495 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:f9:85:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.495 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:8f:6d:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.495 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Using config drive#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.520 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:15 np0005535469 nova_compute[254092]: 2025-11-25 17:15:15.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.034 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Creating config drive at /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.041 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xafk6j2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.192 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xafk6j2" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.221 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.225 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config cab3a333-1f68-435b-b6cb-a508755c2565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.396 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config cab3a333-1f68-435b-b6cb-a508755c2565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.397 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deleting local config drive /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config because it was imported into RBD.#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.4721] manager: (tapbfd7bca3-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/601)
Nov 25 12:15:16 np0005535469 kernel: tapbfd7bca3-f0: entered promiscuous mode
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01458|binding|INFO|Claiming lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 for this chassis.
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01459|binding|INFO|bfd7bca3-f01a-4857-8c51-1085cde3ad00: Claiming fa:16:3e:f9:85:52 10.100.0.12
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.493 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:85:52 10.100.0.12'], port_security=['fa:16:3e:f9:85:52 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bfd7bca3-f01a-4857-8c51-1085cde3ad00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.495 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bfd7bca3-f01a-4857-8c51-1085cde3ad00 in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 bound to our chassis#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.496 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.4966] manager: (tapcd75615e-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Nov 25 12:15:16 np0005535469 systemd-udevd[407818]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:15:16 np0005535469 systemd-udevd[407817]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0fbba9-5499-43d8-9da4-a0097bed29b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.511 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c1b4538-71 in ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.513 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c1b4538-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a19a3cad-e8e3-4c25-9725-4768d0346b58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.514 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a72a95c-c99f-46f5-b057-c5d0af92d21e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.527 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d66751b3-3b3d-4d26-a82e-b92744031d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.5308] device (tapbfd7bca3-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.5319] device (tapbfd7bca3-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:15:16 np0005535469 systemd-machined[216343]: New machine qemu-173-instance-0000008b.
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e380d23c-768c-43e4-be81-bde302736ac2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 systemd[1]: Started Virtual Machine qemu-173-instance-0000008b.
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.5670] device (tapcd75615e-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:15:16 np0005535469 kernel: tapcd75615e-b8: entered promiscuous mode
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.5683] device (tapcd75615e-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01460|binding|INFO|Claiming lport cd75615e-b80b-4685-b424-2c54f7fdbde8 for this chassis.
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01461|binding|INFO|cd75615e-b80b-4685-b424-2c54f7fdbde8: Claiming fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.580 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], port_security=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:6d38/64 2001:db8::f816:3eff:fe8f:6d38/64', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd75615e-b80b-4685-b424-2c54f7fdbde8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01462|binding|INFO|Setting lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 ovn-installed in OVS
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01463|binding|INFO|Setting lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 up in Southbound
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01464|binding|INFO|Setting lport cd75615e-b80b-4685-b424-2c54f7fdbde8 ovn-installed in OVS
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01465|binding|INFO|Setting lport cd75615e-b80b-4685-b424-2c54f7fdbde8 up in Southbound
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.605 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8098ee-273f-409f-81e3-cd6de020ae7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aba5607f-e68a-4f9f-9e34-949682e25add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.6110] manager: (tap8c1b4538-70): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.649 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[57204a17-5468-472c-95dc-8dd7af997779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.653 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[edfcec99-8f1a-4512-af97-5f707f25d79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.6793] device (tap8c1b4538-70): carrier: link connected
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.687 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f747dfd0-2871-44fc-8af7-cf917820d62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4af7672b-a883-4b48-afe2-06a5c07e9c5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 34818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407853, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4975a0e0-77b2-427d-967e-49207c8cf9ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:12ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737427, 'tstamp': 737427}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407854, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.750 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df2437a4-7d46-484e-860f-365c347ce2ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 34818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407855, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.777 254096 DEBUG nova.compute.manager [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.778 254096 DEBUG oslo_concurrency.lockutils [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.779 254096 DEBUG oslo_concurrency.lockutils [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.779 254096 DEBUG oslo_concurrency.lockutils [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.780 254096 DEBUG nova.compute.manager [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Processing event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.784 254096 DEBUG nova.network.neutron [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated VIF entry in instance network info cache for port cd75615e-b80b-4685-b424-2c54f7fdbde8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.786 254096 DEBUG nova.network.neutron [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.803 254096 DEBUG oslo_concurrency.lockutils [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.820 254096 DEBUG nova.compute.manager [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.821 254096 DEBUG oslo_concurrency.lockutils [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.823 254096 DEBUG oslo_concurrency.lockutils [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.823 254096 DEBUG oslo_concurrency.lockutils [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.823 254096 DEBUG nova.compute.manager [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Processing event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.826 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc345c9b-d14d-48e7-a12e-7ba3f46b6297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d95e6c1a-bf39-4fb0-ac41-a848f579fe6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.904 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.904 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.905 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c1b4538-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:16 np0005535469 NetworkManager[48891]: <info>  [1764090916.9082] manager: (tap8c1b4538-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Nov 25 12:15:16 np0005535469 kernel: tap8c1b4538-70: entered promiscuous mode
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.912 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c1b4538-70, col_values=(('external_ids', {'iface-id': '805e6fea-3f01-4342-8f5e-5b75e48ec68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:16Z|01466|binding|INFO|Releasing lport 805e6fea-3f01-4342-8f5e-5b75e48ec68e from this chassis (sb_readonly=0)
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.920 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:15:16 np0005535469 nova_compute[254092]: 2025-11-25 17:15:16.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.929 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80dbdba2-2e59-4fc5-8c5b-9fbd94bd3300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.932 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.pid.haproxy
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:15:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.934 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'env', 'PROCESS_TAG=haproxy-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:15:17 np0005535469 podman[407886]: 2025-11-25 17:15:17.312703453 +0000 UTC m=+0.058107563 container create 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:15:17 np0005535469 systemd[1]: Started libpod-conmon-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42.scope.
Nov 25 12:15:17 np0005535469 podman[407886]: 2025-11-25 17:15:17.280352932 +0000 UTC m=+0.025757062 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:15:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:15:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2a19e2c416b67cb68207b4518ff73fc69af0445413f18a70a9f65c350e3045/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:15:17 np0005535469 podman[407886]: 2025-11-25 17:15:17.411484761 +0000 UTC m=+0.156888891 container init 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 12:15:17 np0005535469 podman[407886]: 2025-11-25 17:15:17.419948081 +0000 UTC m=+0.165352191 container start 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:15:17 np0005535469 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : New worker (407943) forked
Nov 25 12:15:17 np0005535469 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : Loading success.
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.486 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd75615e-b80b-4685-b424-2c54f7fdbde8 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.488 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c57073ad-8c41-459b-9402-c367011860c7#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7757fdab-8638-42de-9a30-a67a05cec8da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.504 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc57073ad-81 in ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.506 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc57073ad-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.507 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05776bd0-f779-4927-899e-21cf0d96f4fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f25af7e-28c7-4dbb-ad48-f79e6e398e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.521 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1d759470-1cf7-4933-bafb-cdc864cebb57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50d6d8ea-80ec-4a2c-98fd-4adce82ccfc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.575 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.576 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090917.5744035, cab3a333-1f68-435b-b6cb-a508755c2565 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.577 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Started (Lifecycle Event)#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.582 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.586 254096 INFO nova.virt.libvirt.driver [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance spawned successfully.#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.587 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.609 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.611 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ece2663a-6126-4fdb-81bf-f7991c8bf8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.615 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.618 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.618 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.619 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.619 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.619 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.620 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.619 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a350f9d6-f57e-4f79-82b7-0d5b8b7915e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 NetworkManager[48891]: <info>  [1764090917.6206] manager: (tapc57073ad-80): new Veth device (/org/freedesktop/NetworkManager/Devices/605)
Nov 25 12:15:17 np0005535469 systemd-udevd[407840]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.660 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.661 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090917.5757372, cab3a333-1f68-435b-b6cb-a508755c2565 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.661 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.665 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f08ed9a0-935a-4c4c-905e-6aca0a58204f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.667 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[03c6e658-abc5-4f0b-826d-80092f62e863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.679 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.683 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090917.5790198, cab3a333-1f68-435b-b6cb-a508755c2565 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.683 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:15:17 np0005535469 NetworkManager[48891]: <info>  [1764090917.6987] device (tapc57073ad-80): carrier: link connected
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.705 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a747c57-623d-4e32-95e3-53b9abf3fd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.706 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.719 254096 INFO nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 10.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.719 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.733 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[635fa236-b40a-40c9-9b86-209e93d3d228]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 38152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407969, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.745 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.767 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[899d15f7-ed62-4c0d-8715-f2608be74c3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:72a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737528, 'tstamp': 737528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407970, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.774 254096 INFO nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 10.92 seconds to build instance.#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.786 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c2acdb-ab06-4222-b629-02586fe9f392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 38152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407971, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.837 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4043483a-25c6-48c5-96fe-d175680d9d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.873 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f59da72a-8fe1-4ee6-bbb8-8658b82e5017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.875 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.875 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc57073ad-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:17 np0005535469 NetworkManager[48891]: <info>  [1764090917.9097] manager: (tapc57073ad-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Nov 25 12:15:17 np0005535469 kernel: tapc57073ad-80: entered promiscuous mode
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.911 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc57073ad-80, col_values=(('external_ids', {'iface-id': 'fff6df53-4de9-409a-abf7-032bad835b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.915 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c57073ad-8c41-459b-9402-c367011860c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c57073ad-8c41-459b-9402-c367011860c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:15:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:17Z|01467|binding|INFO|Releasing lport fff6df53-4de9-409a-abf7-032bad835b32 from this chassis (sb_readonly=0)
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28487843-bd72-4ce0-a18b-672209105024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.917 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-c57073ad-8c41-459b-9402-c367011860c7
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/c57073ad-8c41-459b-9402-c367011860c7.pid.haproxy
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID c57073ad-8c41-459b-9402-c367011860c7
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:15:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.919 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'env', 'PROCESS_TAG=haproxy-c57073ad-8c41-459b-9402-c367011860c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c57073ad-8c41-459b-9402-c367011860c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:17 np0005535469 nova_compute[254092]: 2025-11-25 17:15:17.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:18 np0005535469 podman[408002]: 2025-11-25 17:15:18.299616645 +0000 UTC m=+0.047236427 container create f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:15:18 np0005535469 systemd[1]: Started libpod-conmon-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135.scope.
Nov 25 12:15:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:15:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0817bcd731ce4c188b30a692b0672c775002a3b0968cf2b0f2382fce690850/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:15:18 np0005535469 podman[408002]: 2025-11-25 17:15:18.277418611 +0000 UTC m=+0.025038393 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:15:18 np0005535469 podman[408002]: 2025-11-25 17:15:18.378322067 +0000 UTC m=+0.125941869 container init f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:15:18 np0005535469 podman[408002]: 2025-11-25 17:15:18.383161479 +0000 UTC m=+0.130781261 container start f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 12:15:18 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : New worker (408024) forked
Nov 25 12:15:18 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : Loading success.
Nov 25 12:15:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG nova.compute.manager [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG oslo_concurrency.lockutils [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG oslo_concurrency.lockutils [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG oslo_concurrency.lockutils [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.861 254096 DEBUG nova.compute.manager [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.861 254096 WARNING nova.compute.manager [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG nova.compute.manager [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG oslo_concurrency.lockutils [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG oslo_concurrency.lockutils [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG oslo_concurrency.lockutils [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.897 254096 DEBUG nova.compute.manager [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:15:18 np0005535469 nova_compute[254092]: 2025-11-25 17:15:18.897 254096 WARNING nova.compute.manager [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:15:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:20 np0005535469 nova_compute[254092]: 2025-11-25 17:15:20.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 12:15:20 np0005535469 nova_compute[254092]: 2025-11-25 17:15:20.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 423 KiB/s wr, 75 op/s
Nov 25 12:15:23 np0005535469 NetworkManager[48891]: <info>  [1764090923.1242] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Nov 25 12:15:23 np0005535469 NetworkManager[48891]: <info>  [1764090923.1254] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Nov 25 12:15:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:23Z|01468|binding|INFO|Releasing lport fff6df53-4de9-409a-abf7-032bad835b32 from this chassis (sb_readonly=0)
Nov 25 12:15:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:23Z|01469|binding|INFO|Releasing lport 805e6fea-3f01-4342-8f5e-5b75e48ec68e from this chassis (sb_readonly=0)
Nov 25 12:15:23 np0005535469 nova_compute[254092]: 2025-11-25 17:15:23.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:23Z|01470|binding|INFO|Releasing lport fff6df53-4de9-409a-abf7-032bad835b32 from this chassis (sb_readonly=0)
Nov 25 12:15:23 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:23Z|01471|binding|INFO|Releasing lport 805e6fea-3f01-4342-8f5e-5b75e48ec68e from this chassis (sb_readonly=0)
Nov 25 12:15:23 np0005535469 nova_compute[254092]: 2025-11-25 17:15:23.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:23 np0005535469 nova_compute[254092]: 2025-11-25 17:15:23.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:24 np0005535469 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG nova.compute.manager [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:24 np0005535469 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG nova.compute.manager [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:15:24 np0005535469 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG oslo_concurrency.lockutils [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:24 np0005535469 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG oslo_concurrency.lockutils [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:24 np0005535469 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG nova.network.neutron [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:15:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:15:25 np0005535469 nova_compute[254092]: 2025-11-25 17:15:25.392 254096 DEBUG nova.network.neutron [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated VIF entry in instance network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:15:25 np0005535469 nova_compute[254092]: 2025-11-25 17:15:25.393 254096 DEBUG nova.network.neutron [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:25 np0005535469 nova_compute[254092]: 2025-11-25 17:15:25.413 254096 DEBUG oslo_concurrency.lockutils [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:25 np0005535469 nova_compute[254092]: 2025-11-25 17:15:25.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:25 np0005535469 nova_compute[254092]: 2025-11-25 17:15:25.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:15:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:15:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:29Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:85:52 10.100.0.12
Nov 25 12:15:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:29Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:85:52 10.100.0.12
Nov 25 12:15:30 np0005535469 nova_compute[254092]: 2025-11-25 17:15:30.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 105 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 12:15:30 np0005535469 nova_compute[254092]: 2025-11-25 17:15:30.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:31 np0005535469 podman[408038]: 2025-11-25 17:15:31.645476479 +0000 UTC m=+0.061933507 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 12:15:31 np0005535469 podman[408039]: 2025-11-25 17:15:31.665427922 +0000 UTC m=+0.081938551 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 12:15:31 np0005535469 podman[408040]: 2025-11-25 17:15:31.708611288 +0000 UTC m=+0.119606737 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:15:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 112 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 25 12:15:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:34 np0005535469 nova_compute[254092]: 2025-11-25 17:15:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 112 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 25 12:15:35 np0005535469 nova_compute[254092]: 2025-11-25 17:15:35.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:35 np0005535469 nova_compute[254092]: 2025-11-25 17:15:35.920 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:15:38 np0005535469 nova_compute[254092]: 2025-11-25 17:15:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:15:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:15:40
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control', 'images', 'vms']
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:15:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:15:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793098788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:15:40 np0005535469 nova_compute[254092]: 2025-11-25 17:15:40.959 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.033 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.033 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.213 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.215 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3461MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.215 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.215 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.288 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cab3a333-1f68-435b-b6cb-a508755c2565 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.288 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.289 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.371 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:15:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498552321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.857 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.862 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.892 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.925 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:15:41 np0005535469 nova_compute[254092]: 2025-11-25 17:15:41.925 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 394 KiB/s wr, 30 op/s
Nov 25 12:15:42 np0005535469 nova_compute[254092]: 2025-11-25 17:15:42.927 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:42 np0005535469 nova_compute[254092]: 2025-11-25 17:15:42.928 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:15:42 np0005535469 nova_compute[254092]: 2025-11-25 17:15:42.948 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:42 np0005535469 nova_compute[254092]: 2025-11-25 17:15:42.949 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:42 np0005535469 nova_compute[254092]: 2025-11-25 17:15:42.969 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.119 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.120 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.130 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.131 254096 INFO nova.compute.claims [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.251 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.269 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.271 254096 DEBUG nova.compute.provider_tree [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.285 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.313 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.361 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:15:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521690084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.816 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.824 254096 DEBUG nova.compute.provider_tree [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.878 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.899 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.900 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.945 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.946 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:15:43 np0005535469 nova_compute[254092]: 2025-11-25 17:15:43.984 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.009 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:15:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.117 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.118 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.119 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Creating image(s)#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.143 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.175 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.217 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.221 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.259 254096 DEBUG nova.policy [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.294 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.295 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.296 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.296 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.315 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.319 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:44 np0005535469 nova_compute[254092]: 2025-11-25 17:15:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 28 KiB/s wr, 14 op/s
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.172 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.273 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.509 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully created port: d4fd6164-a382-44de-8709-0a3941640a9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.636 254096 DEBUG nova.objects.instance [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid f0fce250-5e4a-4063-a0c9-a2285f68c22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.649 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.649 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Ensure instance console log exists: /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.650 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.650 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.651 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:45 np0005535469 nova_compute[254092]: 2025-11-25 17:15:45.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 12:15:47 np0005535469 nova_compute[254092]: 2025-11-25 17:15:47.065 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully created port: 5cfdc6b4-a091-429f-a33f-cc941535c221 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.170 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully updated port: d4fd6164-a382-44de-8709-0a3941640a9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.260 254096 DEBUG nova.compute.manager [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.260 254096 DEBUG nova.compute.manager [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.260 254096 DEBUG oslo_concurrency.lockutils [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.261 254096 DEBUG oslo_concurrency.lockutils [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.261 254096 DEBUG nova.network.neutron [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.415 254096 DEBUG nova.network.neutron [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.754 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.754 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.755 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.755 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:15:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.887 254096 DEBUG nova.network.neutron [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:48 np0005535469 nova_compute[254092]: 2025-11-25 17:15:48.906 254096 DEBUG oslo_concurrency.lockutils [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:49 np0005535469 nova_compute[254092]: 2025-11-25 17:15:49.268 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully updated port: 5cfdc6b4-a091-429f-a33f-cc941535c221 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:15:49 np0005535469 nova_compute[254092]: 2025-11-25 17:15:49.326 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:49 np0005535469 nova_compute[254092]: 2025-11-25 17:15:49.327 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:49 np0005535469 nova_compute[254092]: 2025-11-25 17:15:49.327 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:15:49 np0005535469 nova_compute[254092]: 2025-11-25 17:15:49.483 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:15:50 np0005535469 nova_compute[254092]: 2025-11-25 17:15:50.350 254096 DEBUG nova.compute.manager [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:50 np0005535469 nova_compute[254092]: 2025-11-25 17:15:50.350 254096 DEBUG nova.compute.manager [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-5cfdc6b4-a091-429f-a33f-cc941535c221. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:15:50 np0005535469 nova_compute[254092]: 2025-11-25 17:15:50.351 254096 DEBUG oslo_concurrency.lockutils [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:50 np0005535469 nova_compute[254092]: 2025-11-25 17:15:50.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:50 np0005535469 nova_compute[254092]: 2025-11-25 17:15:50.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.064 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.088 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.088 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.251 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.283 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.284 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance network_info: |[{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.285 254096 DEBUG oslo_concurrency.lockutils [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.286 254096 DEBUG nova.network.neutron [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port 5cfdc6b4-a091-429f-a33f-cc941535c221 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.296 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start _get_guest_xml network_info=[{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.303 254096 WARNING nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.309 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.310 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.324 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.325 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.326 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.326 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.327 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.327 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.328 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.328 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.328 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.329 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.329 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.330 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.330 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.330 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.336 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:15:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:15:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:15:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391039012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.856 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.882 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:51 np0005535469 nova_compute[254092]: 2025-11-25 17:15:51.887 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:15:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529115130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.308 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.309 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.309 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.310 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.311 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.311 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.312 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.313 254096 DEBUG nova.objects.instance [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0fce250-5e4a-4063-a0c9-a2285f68c22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.330 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <uuid>f0fce250-5e4a-4063-a0c9-a2285f68c22e</uuid>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <name>instance-0000008c</name>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1931135016</nova:name>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:15:51</nova:creationTime>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:port uuid="d4fd6164-a382-44de-8709-0a3941640a9d">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <nova:port uuid="5cfdc6b4-a091-429f-a33f-cc941535c221">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fea7:615b" ipVersion="6"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fea7:615b" ipVersion="6"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <entry name="serial">f0fce250-5e4a-4063-a0c9-a2285f68c22e</entry>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <entry name="uuid">f0fce250-5e4a-4063-a0c9-a2285f68c22e</entry>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:e8:fe:9c"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <target dev="tapd4fd6164-a3"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:a7:61:5b"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <target dev="tap5cfdc6b4-a0"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/console.log" append="off"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:15:52 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:15:52 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:15:52 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:15:52 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Preparing to wait for external event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Preparing to wait for external event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.333 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.333 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.333 254096 DEBUG os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.335 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.335 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.339 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4fd6164-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.339 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4fd6164-a3, col_values=(('external_ids', {'iface-id': 'd4fd6164-a382-44de-8709-0a3941640a9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:fe:9c', 'vm-uuid': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:52 np0005535469 NetworkManager[48891]: <info>  [1764090952.4429] manager: (tapd4fd6164-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.449 254096 INFO os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3')#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.450 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.450 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.450 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.453 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cfdc6b4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.453 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cfdc6b4-a0, col_values=(('external_ids', {'iface-id': '5cfdc6b4-a091-429f-a33f-cc941535c221', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:61:5b', 'vm-uuid': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 NetworkManager[48891]: <info>  [1764090952.4555] manager: (tap5cfdc6b4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.460 254096 INFO os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0')#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:e8:fe:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:a7:61:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.514 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Using config drive#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.532 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.671 254096 DEBUG nova.network.neutron [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updated VIF entry in instance network info cache for port 5cfdc6b4-a091-429f-a33f-cc941535c221. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.672 254096 DEBUG nova.network.neutron [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:15:52 np0005535469 nova_compute[254092]: 2025-11-25 17:15:52.690 254096 DEBUG oslo_concurrency.lockutils [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:15:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:53 np0005535469 nova_compute[254092]: 2025-11-25 17:15:53.878 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Creating config drive at /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config#033[00m
Nov 25 12:15:53 np0005535469 nova_compute[254092]: 2025-11-25 17:15:53.882 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2t1rxeqr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.039 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2t1rxeqr" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.075 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:15:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.083 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.250 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.252 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deleting local config drive /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config because it was imported into RBD.#033[00m
Nov 25 12:15:54 np0005535469 kernel: tapd4fd6164-a3: entered promiscuous mode
Nov 25 12:15:54 np0005535469 NetworkManager[48891]: <info>  [1764090954.3078] manager: (tapd4fd6164-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01472|binding|INFO|Claiming lport d4fd6164-a382-44de-8709-0a3941640a9d for this chassis.
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01473|binding|INFO|d4fd6164-a382-44de-8709-0a3941640a9d: Claiming fa:16:3e:e8:fe:9c 10.100.0.8
Nov 25 12:15:54 np0005535469 NetworkManager[48891]: <info>  [1764090954.3289] manager: (tap5cfdc6b4-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.328 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:fe:9c 10.100.0.8'], port_security=['fa:16:3e:e8:fe:9c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d4fd6164-a382-44de-8709-0a3941640a9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.330 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d4fd6164-a382-44de-8709-0a3941640a9d in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 bound to our chassis#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.332 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48#033[00m
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01474|binding|INFO|Setting lport d4fd6164-a382-44de-8709-0a3941640a9d ovn-installed in OVS
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01475|binding|INFO|Setting lport d4fd6164-a382-44de-8709-0a3941640a9d up in Southbound
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 kernel: tap5cfdc6b4-a0: entered promiscuous mode
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01476|if_status|INFO|Dropped 2 log messages in last 116 seconds (most recently, 116 seconds ago) due to excessive rate
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01477|if_status|INFO|Not updating pb chassis for 5cfdc6b4-a091-429f-a33f-cc941535c221 now as sb is readonly
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01478|binding|INFO|Claiming lport 5cfdc6b4-a091-429f-a33f-cc941535c221 for this chassis.
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01479|binding|INFO|5cfdc6b4-a091-429f-a33f-cc941535c221: Claiming fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.348 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], port_security=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea7:615b/64 2001:db8::f816:3eff:fea7:615b/64', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cfdc6b4-a091-429f-a33f-cc941535c221) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:15:54 np0005535469 systemd-udevd[408477]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:15:54 np0005535469 systemd-udevd[408476]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c841d3-db14-44c0-865c-934a34e346da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01480|binding|INFO|Setting lport 5cfdc6b4-a091-429f-a33f-cc941535c221 ovn-installed in OVS
Nov 25 12:15:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:15:54Z|01481|binding|INFO|Setting lport 5cfdc6b4-a091-429f-a33f-cc941535c221 up in Southbound
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 NetworkManager[48891]: <info>  [1764090954.3661] device (tapd4fd6164-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:15:54 np0005535469 NetworkManager[48891]: <info>  [1764090954.3671] device (tapd4fd6164-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:15:54 np0005535469 NetworkManager[48891]: <info>  [1764090954.3675] device (tap5cfdc6b4-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:15:54 np0005535469 NetworkManager[48891]: <info>  [1764090954.3682] device (tap5cfdc6b4-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:15:54 np0005535469 systemd-machined[216343]: New machine qemu-174-instance-0000008c.
Nov 25 12:15:54 np0005535469 systemd[1]: Started Virtual Machine qemu-174-instance-0000008c.
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.397 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c494c435-08c7-4a2d-8e67-7fc502097cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.401 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3663c5d2-79e8-49f0-8cd2-de1917feffe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.437 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[412c72e9-6c4c-488c-8306-6dd8764dd01c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0e3c2c-0341-4e00-8608-758f4c27d675]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 34818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408492, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.483 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd46f749-9448-474e-9598-283831499dfa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737444, 'tstamp': 737444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408494, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737448, 'tstamp': 737448}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408494, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.484 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.488 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c1b4538-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.489 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.489 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c1b4538-70, col_values=(('external_ids', {'iface-id': '805e6fea-3f01-4342-8f5e-5b75e48ec68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.489 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.491 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cfdc6b4-a091-429f-a33f-cc941535c221 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.492 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c57073ad-8c41-459b-9402-c367011860c7#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.509 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b46a3e-0184-4fba-ab50-f54236331285]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.533 254096 DEBUG nova.compute.manager [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.534 254096 DEBUG oslo_concurrency.lockutils [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.534 254096 DEBUG oslo_concurrency.lockutils [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.534 254096 DEBUG oslo_concurrency.lockutils [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.535 254096 DEBUG nova.compute.manager [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Processing event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.537 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[82d94a94-1c4d-48fd-97b4-147b4223e625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.539 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4dd88d-36ce-4ac2-984c-0e1cbd724fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.566 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6d28e13a-dbc4-4c4a-b90e-c64982452484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f692c81e-d663-45fe-a3dd-2d1db7f21f8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 38152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408500, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c5a6c3-77f3-4f1a-b8f8-a82fbb8ebf0c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc57073ad-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737545, 'tstamp': 737545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408516, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.605 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc57073ad-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.609 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc57073ad-80, col_values=(('external_ids', {'iface-id': 'fff6df53-4de9-409a-abf7-032bad835b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.609 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.615 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.615 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.616 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090954.7570217, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Started (Lifecycle Event)#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.780 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.786 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090954.7573197, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.786 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.808 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:15:54 np0005535469 nova_compute[254092]: 2025-11-25 17:15:54.827 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:15:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:15:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:15:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543334435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:15:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:15:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543334435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:15:55 np0005535469 nova_compute[254092]: 2025-11-25 17:15:55.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:15:55 np0005535469 nova_compute[254092]: 2025-11-25 17:15:55.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.659 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No event matching network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d in dict_keys([('network-vif-plugged', '5cfdc6b4-a091-429f-a33f-cc941535c221')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 WARNING nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Processing event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 WARNING nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.663 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.667 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090956.667536, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.667 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.669 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.672 254096 INFO nova.virt.libvirt.driver [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance spawned successfully.#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.673 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.689 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.695 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.699 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.700 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.700 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.701 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.701 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.701 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.729 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.776 254096 INFO nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 12.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.777 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.853 254096 INFO nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 13.80 seconds to build instance.#033[00m
Nov 25 12:15:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 12:15:56 np0005535469 nova_compute[254092]: 2025-11-25 17:15:56.874 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:15:57 np0005535469 nova_compute[254092]: 2025-11-25 17:15:57.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:15:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:15:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:15:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:15:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:15:58 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e4077344-3c54-46d9-a3d3-b7164ede7bbd does not exist
Nov 25 12:15:58 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11ad4fc0-db03-4d58-8850-934fe2209a13 does not exist
Nov 25 12:15:58 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cbd18275-97f6-4472-beea-6b7746c180a3 does not exist
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:15:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:15:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 25 12:15:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:15:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:15:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:15:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.4880282 +0000 UTC m=+0.080131122 container create 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.439345875 +0000 UTC m=+0.031448817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:15:59 np0005535469 systemd[1]: Started libpod-conmon-142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b.scope.
Nov 25 12:15:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:15:59 np0005535469 nova_compute[254092]: 2025-11-25 17:15:59.615 254096 DEBUG nova.compute.manager [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:15:59 np0005535469 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG nova.compute.manager [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:15:59 np0005535469 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG oslo_concurrency.lockutils [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:15:59 np0005535469 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG oslo_concurrency.lockutils [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:15:59 np0005535469 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG nova.network.neutron [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.634831036 +0000 UTC m=+0.226933978 container init 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.643887012 +0000 UTC m=+0.235989934 container start 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:15:59 np0005535469 compassionate_mccarthy[408955]: 167 167
Nov 25 12:15:59 np0005535469 systemd[1]: libpod-142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b.scope: Deactivated successfully.
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.654786199 +0000 UTC m=+0.246889091 container attach 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.655391296 +0000 UTC m=+0.247494188 container died 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 12:15:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-883aa7876024061a26bf8dcfb407ee94c6935f3924593790ec45e7ec46897343-merged.mount: Deactivated successfully.
Nov 25 12:15:59 np0005535469 podman[408939]: 2025-11-25 17:15:59.769238535 +0000 UTC m=+0.361341447 container remove 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:15:59 np0005535469 systemd[1]: libpod-conmon-142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b.scope: Deactivated successfully.
Nov 25 12:15:59 np0005535469 podman[408979]: 2025-11-25 17:15:59.98949891 +0000 UTC m=+0.061125205 container create 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:16:00 np0005535469 systemd[1]: Started libpod-conmon-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope.
Nov 25 12:16:00 np0005535469 podman[408979]: 2025-11-25 17:15:59.953049298 +0000 UTC m=+0.024675613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:16:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:16:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:00 np0005535469 podman[408979]: 2025-11-25 17:16:00.08978682 +0000 UTC m=+0.161413295 container init 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:16:00 np0005535469 podman[408979]: 2025-11-25 17:16:00.097831089 +0000 UTC m=+0.169457384 container start 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:16:00 np0005535469 podman[408979]: 2025-11-25 17:16:00.104028878 +0000 UTC m=+0.175655193 container attach 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:16:00 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:00.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Nov 25 12:16:00 np0005535469 nova_compute[254092]: 2025-11-25 17:16:00.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:01 np0005535469 intelligent_nightingale[408995]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:16:01 np0005535469 intelligent_nightingale[408995]: --> relative data size: 1.0
Nov 25 12:16:01 np0005535469 intelligent_nightingale[408995]: --> All data devices are unavailable
Nov 25 12:16:01 np0005535469 systemd[1]: libpod-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope: Deactivated successfully.
Nov 25 12:16:01 np0005535469 systemd[1]: libpod-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope: Consumed 1.005s CPU time.
Nov 25 12:16:01 np0005535469 podman[408979]: 2025-11-25 17:16:01.178629207 +0000 UTC m=+1.250255502 container died 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:16:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547-merged.mount: Deactivated successfully.
Nov 25 12:16:01 np0005535469 podman[408979]: 2025-11-25 17:16:01.341217332 +0000 UTC m=+1.412843637 container remove 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:16:01 np0005535469 systemd[1]: libpod-conmon-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope: Deactivated successfully.
Nov 25 12:16:01 np0005535469 podman[409139]: 2025-11-25 17:16:01.779531393 +0000 UTC m=+0.055808460 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:16:01 np0005535469 podman[409138]: 2025-11-25 17:16:01.817583989 +0000 UTC m=+0.094216555 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:16:01 np0005535469 podman[409174]: 2025-11-25 17:16:01.901118753 +0000 UTC m=+0.081354636 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.070704469 +0000 UTC m=+0.045168600 container create 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:16:02 np0005535469 systemd[1]: Started libpod-conmon-263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188.scope.
Nov 25 12:16:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.049964474 +0000 UTC m=+0.024428625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.194999782 +0000 UTC m=+0.169463933 container init 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.200986485 +0000 UTC m=+0.175450616 container start 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:16:02 np0005535469 affectionate_brahmagupta[409251]: 167 167
Nov 25 12:16:02 np0005535469 systemd[1]: libpod-263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188.scope: Deactivated successfully.
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.231324201 +0000 UTC m=+0.205788372 container attach 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.231875795 +0000 UTC m=+0.206339966 container died 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:16:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-be1343b10cae17f6bda8c2b2aed07fb46cda7a844c5b96ac54f4314b9a148261-merged.mount: Deactivated successfully.
Nov 25 12:16:02 np0005535469 nova_compute[254092]: 2025-11-25 17:16:02.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:02 np0005535469 podman[409235]: 2025-11-25 17:16:02.497719622 +0000 UTC m=+0.472183763 container remove 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:16:02 np0005535469 systemd[1]: libpod-conmon-263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188.scope: Deactivated successfully.
Nov 25 12:16:02 np0005535469 podman[409277]: 2025-11-25 17:16:02.731823634 +0000 UTC m=+0.083260697 container create 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:16:02 np0005535469 podman[409277]: 2025-11-25 17:16:02.683573341 +0000 UTC m=+0.035010424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:16:02 np0005535469 systemd[1]: Started libpod-conmon-939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee.scope.
Nov 25 12:16:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:16:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:16:03 np0005535469 podman[409277]: 2025-11-25 17:16:03.169197489 +0000 UTC m=+0.520634672 container init 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 12:16:03 np0005535469 podman[409277]: 2025-11-25 17:16:03.183734344 +0000 UTC m=+0.535171417 container start 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:16:03 np0005535469 podman[409277]: 2025-11-25 17:16:03.254823879 +0000 UTC m=+0.606260982 container attach 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:16:03 np0005535469 nova_compute[254092]: 2025-11-25 17:16:03.916 254096 DEBUG nova.network.neutron [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updated VIF entry in instance network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:16:03 np0005535469 nova_compute[254092]: 2025-11-25 17:16:03.920 254096 DEBUG nova.network.neutron [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]: {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:    "0": [
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:        {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "devices": [
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "/dev/loop3"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            ],
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_name": "ceph_lv0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_size": "21470642176",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "name": "ceph_lv0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "tags": {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cluster_name": "ceph",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.crush_device_class": "",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.encrypted": "0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osd_id": "0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.type": "block",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.vdo": "0"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            },
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "type": "block",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "vg_name": "ceph_vg0"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:        }
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:    ],
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:    "1": [
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:        {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "devices": [
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "/dev/loop4"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            ],
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_name": "ceph_lv1",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_size": "21470642176",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "name": "ceph_lv1",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "tags": {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cluster_name": "ceph",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.crush_device_class": "",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.encrypted": "0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osd_id": "1",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.type": "block",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.vdo": "0"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            },
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "type": "block",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "vg_name": "ceph_vg1"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:        }
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:    ],
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:    "2": [
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:        {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "devices": [
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "/dev/loop5"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            ],
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_name": "ceph_lv2",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_size": "21470642176",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "name": "ceph_lv2",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "tags": {
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.cluster_name": "ceph",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.crush_device_class": "",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.encrypted": "0",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osd_id": "2",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.type": "block",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:                "ceph.vdo": "0"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            },
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "type": "block",
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:            "vg_name": "ceph_vg2"
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:        }
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]:    ]
Nov 25 12:16:03 np0005535469 elastic_feynman[409293]: }
Nov 25 12:16:04 np0005535469 systemd[1]: libpod-939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee.scope: Deactivated successfully.
Nov 25 12:16:04 np0005535469 podman[409277]: 2025-11-25 17:16:04.034157833 +0000 UTC m=+1.385594936 container died 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:16:04 np0005535469 nova_compute[254092]: 2025-11-25 17:16:04.069 254096 DEBUG oslo_concurrency.lockutils [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:16:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7-merged.mount: Deactivated successfully.
Nov 25 12:16:04 np0005535469 podman[409277]: 2025-11-25 17:16:04.396502825 +0000 UTC m=+1.747939898 container remove 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:16:04 np0005535469 systemd[1]: libpod-conmon-939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee.scope: Deactivated successfully.
Nov 25 12:16:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:16:05 np0005535469 podman[409456]: 2025-11-25 17:16:05.24645684 +0000 UTC m=+0.029394191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:16:05 np0005535469 podman[409456]: 2025-11-25 17:16:05.393713758 +0000 UTC m=+0.176651089 container create 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:16:05 np0005535469 systemd[1]: Started libpod-conmon-54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368.scope.
Nov 25 12:16:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:16:05 np0005535469 podman[409456]: 2025-11-25 17:16:05.845927837 +0000 UTC m=+0.628865168 container init 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 12:16:05 np0005535469 podman[409456]: 2025-11-25 17:16:05.853119593 +0000 UTC m=+0.636056954 container start 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:16:05 np0005535469 wonderful_moore[409473]: 167 167
Nov 25 12:16:05 np0005535469 systemd[1]: libpod-54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368.scope: Deactivated successfully.
Nov 25 12:16:05 np0005535469 nova_compute[254092]: 2025-11-25 17:16:05.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:06 np0005535469 podman[409456]: 2025-11-25 17:16:06.082280691 +0000 UTC m=+0.865218062 container attach 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:16:06 np0005535469 podman[409456]: 2025-11-25 17:16:06.084016048 +0000 UTC m=+0.866953429 container died 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:16:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0d7b4b84e971e696e208841c13b537eb075e7416c7664951de116f52c2f13d52-merged.mount: Deactivated successfully.
Nov 25 12:16:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 25 12:16:07 np0005535469 podman[409456]: 2025-11-25 17:16:07.25240567 +0000 UTC m=+2.035343021 container remove 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:16:07 np0005535469 systemd[1]: libpod-conmon-54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368.scope: Deactivated successfully.
Nov 25 12:16:07 np0005535469 nova_compute[254092]: 2025-11-25 17:16:07.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:07 np0005535469 podman[409499]: 2025-11-25 17:16:07.443947284 +0000 UTC m=+0.029656578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:16:07 np0005535469 podman[409499]: 2025-11-25 17:16:07.666010268 +0000 UTC m=+0.251719542 container create d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:16:07 np0005535469 systemd[1]: Started libpod-conmon-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope.
Nov 25 12:16:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:16:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:16:08 np0005535469 podman[409499]: 2025-11-25 17:16:08.045632202 +0000 UTC m=+0.631341496 container init d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:16:08 np0005535469 podman[409499]: 2025-11-25 17:16:08.054863483 +0000 UTC m=+0.640572757 container start d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:16:08 np0005535469 podman[409499]: 2025-11-25 17:16:08.147513114 +0000 UTC m=+0.733222499 container attach d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:16:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 76 op/s
Nov 25 12:16:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]: {
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "osd_id": 1,
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "type": "bluestore"
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:    },
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "osd_id": 2,
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "type": "bluestore"
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:    },
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "osd_id": 0,
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:        "type": "bluestore"
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]:    }
Nov 25 12:16:09 np0005535469 flamboyant_wescoff[409516]: }
Nov 25 12:16:09 np0005535469 systemd[1]: libpod-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope: Deactivated successfully.
Nov 25 12:16:09 np0005535469 systemd[1]: libpod-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope: Consumed 1.094s CPU time.
Nov 25 12:16:09 np0005535469 podman[409499]: 2025-11-25 17:16:09.156983541 +0000 UTC m=+1.742692815 container died d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:16:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e-merged.mount: Deactivated successfully.
Nov 25 12:16:09 np0005535469 podman[409499]: 2025-11-25 17:16:09.365424315 +0000 UTC m=+1.951133599 container remove d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:16:09 np0005535469 systemd[1]: libpod-conmon-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope: Deactivated successfully.
Nov 25 12:16:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:16:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:16:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:16:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:16:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev bb880e81-0749-4248-9eb5-d332f15d340d does not exist
Nov 25 12:16:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9c9c7613-c82a-4460-8405-c3541c3f2abc does not exist
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:16:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:10Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:fe:9c 10.100.0.8
Nov 25 12:16:10 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:10Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:fe:9c 10.100.0.8
Nov 25 12:16:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:16:10 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:16:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 180 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 111 op/s
Nov 25 12:16:10 np0005535469 nova_compute[254092]: 2025-11-25 17:16:10.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:12 np0005535469 nova_compute[254092]: 2025-11-25 17:16:12.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 12:16:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:13.655 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 12:16:15 np0005535469 nova_compute[254092]: 2025-11-25 17:16:15.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 12:16:17 np0005535469 nova_compute[254092]: 2025-11-25 17:16:17.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Nov 25 12:16:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Nov 25 12:16:20 np0005535469 nova_compute[254092]: 2025-11-25 17:16:20.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:22 np0005535469 nova_compute[254092]: 2025-11-25 17:16:22.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 289 KiB/s rd, 827 KiB/s wr, 76 op/s
Nov 25 12:16:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 83 KiB/s wr, 47 op/s
Nov 25 12:16:25 np0005535469 nova_compute[254092]: 2025-11-25 17:16:25.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 83 KiB/s wr, 47 op/s
Nov 25 12:16:27 np0005535469 nova_compute[254092]: 2025-11-25 17:16:27.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 21 KiB/s wr, 1 op/s
Nov 25 12:16:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:29Z|01482|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 25 12:16:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 22 KiB/s wr, 1 op/s
Nov 25 12:16:31 np0005535469 nova_compute[254092]: 2025-11-25 17:16:31.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:32 np0005535469 nova_compute[254092]: 2025-11-25 17:16:32.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:32 np0005535469 podman[409615]: 2025-11-25 17:16:32.664531741 +0000 UTC m=+0.080729549 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:16:32 np0005535469 podman[409616]: 2025-11-25 17:16:32.701148207 +0000 UTC m=+0.108415002 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 12:16:32 np0005535469 podman[409617]: 2025-11-25 17:16:32.728597945 +0000 UTC m=+0.133215498 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:16:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Nov 25 12:16:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:34 np0005535469 nova_compute[254092]: 2025-11-25 17:16:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.061 254096 DEBUG nova.compute.manager [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.062 254096 DEBUG nova.compute.manager [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.063 254096 DEBUG oslo_concurrency.lockutils [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.063 254096 DEBUG oslo_concurrency.lockutils [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.063 254096 DEBUG nova.network.neutron [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.167 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.169 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.170 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.170 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.170 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.171 254096 INFO nova.compute.manager [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Terminating instance#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.172 254096 DEBUG nova.compute.manager [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:16:35 np0005535469 kernel: tapd4fd6164-a3 (unregistering): left promiscuous mode
Nov 25 12:16:35 np0005535469 NetworkManager[48891]: <info>  [1764090995.3791] device (tapd4fd6164-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:16:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:35Z|01483|binding|INFO|Releasing lport d4fd6164-a382-44de-8709-0a3941640a9d from this chassis (sb_readonly=0)
Nov 25 12:16:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:35Z|01484|binding|INFO|Setting lport d4fd6164-a382-44de-8709-0a3941640a9d down in Southbound
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:35Z|01485|binding|INFO|Removing iface tapd4fd6164-a3 ovn-installed in OVS
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.400 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 kernel: tap5cfdc6b4-a0 (unregistering): left promiscuous mode
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.424 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:fe:9c 10.100.0.8'], port_security=['fa:16:3e:e8:fe:9c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d4fd6164-a382-44de-8709-0a3941640a9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d4fd6164-a382-44de-8709-0a3941640a9d in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 unbound from our chassis#033[00m
Nov 25 12:16:35 np0005535469 NetworkManager[48891]: <info>  [1764090995.4267] device (tap5cfdc6b4-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.427 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48#033[00m
Nov 25 12:16:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:35Z|01486|binding|INFO|Releasing lport 5cfdc6b4-a091-429f-a33f-cc941535c221 from this chassis (sb_readonly=0)
Nov 25 12:16:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:35Z|01487|binding|INFO|Setting lport 5cfdc6b4-a091-429f-a33f-cc941535c221 down in Southbound
Nov 25 12:16:35 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:35Z|01488|binding|INFO|Removing iface tap5cfdc6b4-a0 ovn-installed in OVS
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.444 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b518f692-afaa-4963-b0be-c4a22befc222]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.448 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], port_security=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea7:615b/64 2001:db8::f816:3eff:fea7:615b/64', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cfdc6b4-a091-429f-a33f-cc941535c221) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.483 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6fece7-f073-4f89-9903-7d0461a9229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.487 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3f03c88e-7c0b-4816-8477-a7d0fd2336e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 25 12:16:35 np0005535469 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Consumed 14.268s CPU time.
Nov 25 12:16:35 np0005535469 systemd-machined[216343]: Machine qemu-174-instance-0000008c terminated.
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.516 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[864a9ae3-2554-4e5c-8c8a-afea1ab8603f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.536 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84fda3c4-9646-449c-90dc-ce2164a4ee80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 31371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409693, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.557 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[33075044-f0d1-4906-92e3-d531312d678a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737444, 'tstamp': 737444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409694, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737448, 'tstamp': 737448}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409694, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.560 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c1b4538-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c1b4538-70, col_values=(('external_ids', {'iface-id': '805e6fea-3f01-4342-8f5e-5b75e48ec68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.577 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cfdc6b4-a091-429f-a33f-cc941535c221 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.581 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c57073ad-8c41-459b-9402-c367011860c7#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.601 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[deca42b5-fc3b-4da4-bcac-d6dadb89a15e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 NetworkManager[48891]: <info>  [1764090995.6087] manager: (tap5cfdc6b4-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.627 254096 INFO nova.virt.libvirt.driver [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance destroyed successfully.#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.628 254096 DEBUG nova.objects.instance [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid f0fce250-5e4a-4063-a0c9-a2285f68c22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.644 254096 DEBUG nova.virt.libvirt.vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:56Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.644 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.645 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.646 254096 DEBUG os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.647 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30665d06-8bf7-43c0-a09c-d8475fa1ccec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.648 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4fd6164-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.651 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca66f59-800d-4f24-b4ba-088eafe326c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.659 254096 INFO os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3')#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.660 254096 DEBUG nova.virt.libvirt.vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:56Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.660 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.661 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.661 254096 DEBUG os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.662 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cfdc6b4-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.667 254096 INFO os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0')#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.688 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bd547a31-918a-4a5d-ba5b-69964e419481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84dc7149-02b0-4beb-8c4c-63c133dc96b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 23395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409739, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.733 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5fde957e-dbe4-45a7-bd92-fbc96a7b85f4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc57073ad-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737545, 'tstamp': 737545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409743, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.734 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 nova_compute[254092]: 2025-11-25 17:16:35.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.786 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc57073ad-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc57073ad-80, col_values=(('external_ids', {'iface-id': 'fff6df53-4de9-409a-abf7-032bad835b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:35 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:16:36 np0005535469 nova_compute[254092]: 2025-11-25 17:16:36.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.129 254096 DEBUG nova.network.neutron [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updated VIF entry in instance network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.130 254096 DEBUG nova.network.neutron [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.163 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-unplugged-d4fd6164-a382-44de-8709-0a3941640a9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-d4fd6164-a382-44de-8709-0a3941640a9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 WARNING nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:16:37 np0005535469 nova_compute[254092]: 2025-11-25 17:16:37.206 254096 DEBUG oslo_concurrency.lockutils [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:16:38 np0005535469 nova_compute[254092]: 2025-11-25 17:16:38.329 254096 INFO nova.virt.libvirt.driver [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deleting instance files /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e_del#033[00m
Nov 25 12:16:38 np0005535469 nova_compute[254092]: 2025-11-25 17:16:38.330 254096 INFO nova.virt.libvirt.driver [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deletion of /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e_del complete#033[00m
Nov 25 12:16:38 np0005535469 nova_compute[254092]: 2025-11-25 17:16:38.439 254096 INFO nova.compute.manager [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 3.27 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:16:38 np0005535469 nova_compute[254092]: 2025-11-25 17:16:38.440 254096 DEBUG oslo.service.loopingcall [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:16:38 np0005535469 nova_compute[254092]: 2025-11-25 17:16:38.440 254096 DEBUG nova.compute.manager [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:16:38 np0005535469 nova_compute[254092]: 2025-11-25 17:16:38.440 254096 DEBUG nova.network.neutron [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:16:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 25 12:16:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.278 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.279 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-unplugged-5cfdc6b4-a091-429f-a33f-cc941535c221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-5cfdc6b4-a091-429f-a33f-cc941535c221 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.281 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.281 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.281 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.282 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.282 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.282 254096 WARNING nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.656 254096 DEBUG nova.network.neutron [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.694 254096 INFO nova.compute.manager [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.762 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.763 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:39 np0005535469 nova_compute[254092]: 2025-11-25 17:16:39.834 254096 DEBUG oslo_concurrency.processutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:16:40
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:16:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:16:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787955388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.264 254096 DEBUG oslo_concurrency.processutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.272 254096 DEBUG nova.compute.provider_tree [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.297 254096 DEBUG nova.scheduler.client.report [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.534 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.734 254096 INFO nova.scheduler.client.report [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance f0fce250-5e4a-4063-a0c9-a2285f68c22e#033[00m
Nov 25 12:16:40 np0005535469 nova_compute[254092]: 2025-11-25 17:16:40.884 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 144 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 22 op/s
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.482 254096 DEBUG nova.compute.manager [req-226fec3a-258f-4358-b0d4-47252684fdb1 req-b8cb8d48-33e1-4694-944d-0a4ce40ae755 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-deleted-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.483 254096 DEBUG nova.compute.manager [req-226fec3a-258f-4358-b0d4-47252684fdb1 req-b8cb8d48-33e1-4694-944d-0a4ce40ae755 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-deleted-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.510 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.510 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.511 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.511 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.511 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:16:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:16:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243257493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:16:41 np0005535469 nova_compute[254092]: 2025-11-25 17:16:41.990 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.057 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.058 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.119 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.120 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.225 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.228 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3424MB free_disk=59.93061828613281GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.228 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.228 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cab3a333-1f68-435b-b6cb-a508755c2565 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.319 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.397 254096 DEBUG nova.compute.manager [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.397 254096 DEBUG nova.compute.manager [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.398 254096 DEBUG oslo_concurrency.lockutils [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.398 254096 DEBUG oslo_concurrency.lockutils [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.398 254096 DEBUG nova.network.neutron [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.429 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.429 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.430 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.430 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.430 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.431 254096 INFO nova.compute.manager [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Terminating instance#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.432 254096 DEBUG nova.compute.manager [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:16:42 np0005535469 kernel: tapbfd7bca3-f0 (unregistering): left promiscuous mode
Nov 25 12:16:42 np0005535469 NetworkManager[48891]: <info>  [1764091002.5074] device (tapbfd7bca3-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:42Z|01489|binding|INFO|Releasing lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 from this chassis (sb_readonly=0)
Nov 25 12:16:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:42Z|01490|binding|INFO|Setting lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 down in Southbound
Nov 25 12:16:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:42Z|01491|binding|INFO|Removing iface tapbfd7bca3-f0 ovn-installed in OVS
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.523 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:85:52 10.100.0.12'], port_security=['fa:16:3e:f9:85:52 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bfd7bca3-f01a-4857-8c51-1085cde3ad00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.525 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bfd7bca3-f01a-4857-8c51-1085cde3ad00 in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 unbound from our chassis#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.525 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3194d4ca-f9b9-4c58-87a1-651d70943ad4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.527 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 namespace which is not needed anymore#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.527 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 kernel: tapcd75615e-b8 (unregistering): left promiscuous mode
Nov 25 12:16:42 np0005535469 NetworkManager[48891]: <info>  [1764091002.5480] device (tapcd75615e-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:16:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:42Z|01492|binding|INFO|Releasing lport cd75615e-b80b-4685-b424-2c54f7fdbde8 from this chassis (sb_readonly=0)
Nov 25 12:16:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:42Z|01493|binding|INFO|Setting lport cd75615e-b80b-4685-b424-2c54f7fdbde8 down in Southbound
Nov 25 12:16:42 np0005535469 ovn_controller[153477]: 2025-11-25T17:16:42Z|01494|binding|INFO|Removing iface tapcd75615e-b8 ovn-installed in OVS
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.572 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], port_security=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:6d38/64 2001:db8::f816:3eff:fe8f:6d38/64', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd75615e-b80b-4685-b424-2c54f7fdbde8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 25 12:16:42 np0005535469 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Consumed 16.919s CPU time.
Nov 25 12:16:42 np0005535469 systemd-machined[216343]: Machine qemu-173-instance-0000008b terminated.
Nov 25 12:16:42 np0005535469 NetworkManager[48891]: <info>  [1764091002.6628] manager: (tapcd75615e-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Nov 25 12:16:42 np0005535469 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : haproxy version is 2.8.14-c23fe91
Nov 25 12:16:42 np0005535469 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : path to executable is /usr/sbin/haproxy
Nov 25 12:16:42 np0005535469 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [ALERT]    (407939) : Current worker (407943) exited with code 143 (Terminated)
Nov 25 12:16:42 np0005535469 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [WARNING]  (407939) : All workers exited. Exiting... (0)
Nov 25 12:16:42 np0005535469 systemd[1]: libpod-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42.scope: Deactivated successfully.
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.689 254096 INFO nova.virt.libvirt.driver [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance destroyed successfully.#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.689 254096 DEBUG nova.objects.instance [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:16:42 np0005535469 podman[409840]: 2025-11-25 17:16:42.697548691 +0000 UTC m=+0.058370240 container died 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.700 254096 DEBUG nova.virt.libvirt.vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.700 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.701 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.701 254096 DEBUG os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.703 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfd7bca3-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.711 254096 INFO os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0')#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.712 254096 DEBUG nova.virt.libvirt.vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.713 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.713 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.714 254096 DEBUG os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd75615e-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.721 254096 INFO os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8')#033[00m
Nov 25 12:16:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42-userdata-shm.mount: Deactivated successfully.
Nov 25 12:16:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3d2a19e2c416b67cb68207b4518ff73fc69af0445413f18a70a9f65c350e3045-merged.mount: Deactivated successfully.
Nov 25 12:16:42 np0005535469 podman[409840]: 2025-11-25 17:16:42.736576024 +0000 UTC m=+0.097397583 container cleanup 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:16:42 np0005535469 systemd[1]: libpod-conmon-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42.scope: Deactivated successfully.
Nov 25 12:16:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:16:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823325371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.767 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.776 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.793 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:16:42 np0005535469 podman[409906]: 2025-11-25 17:16:42.804756169 +0000 UTC m=+0.045992552 container remove 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.811 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6b8776-3357-4325-9c27-9bc90bdc1a3f]: (4, ('Tue Nov 25 05:16:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 (069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42)\n069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42\nTue Nov 25 05:16:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 (069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42)\n069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.813 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e002aa4-490e-4fa7-80ec-95bc51f4d451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.814 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 kernel: tap8c1b4538-70: left promiscuous mode
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.819 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.820 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:42 np0005535469 nova_compute[254092]: 2025-11-25 17:16:42.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.832 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ec6ad-f281-4f14-a715-23b6b322e6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[adbcf20a-cf57-4af9-8689-dae836a0afe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.848 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b2a7f8-36b9-4ab4-932d-d019c4673028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.864 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[484c1a23-1d1f-45b3-9f61-052dc0e27545]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737418, 'reachable_time': 31594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409927, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 systemd[1]: run-netns-ovnmeta\x2d8c1b4538\x2d7e1d\x2d41aa\x2d8e91\x2d8a97df87ce48.mount: Deactivated successfully.
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.869 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.869 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[92b87971-082f-46f1-bfd1-d01dcc9ce51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.872 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd75615e-b80b-4685-b424-2c54f7fdbde8 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.874 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c57073ad-8c41-459b-9402-c367011860c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.874 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80c6be45-a3fa-47e2-ba5f-d5c53c1039e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:42 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.875 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 namespace which is not needed anymore#033[00m
Nov 25 12:16:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Nov 25 12:16:43 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : haproxy version is 2.8.14-c23fe91
Nov 25 12:16:43 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : path to executable is /usr/sbin/haproxy
Nov 25 12:16:43 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [WARNING]  (408022) : Exiting Master process...
Nov 25 12:16:43 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [WARNING]  (408022) : Exiting Master process...
Nov 25 12:16:43 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [ALERT]    (408022) : Current worker (408024) exited with code 143 (Terminated)
Nov 25 12:16:43 np0005535469 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [WARNING]  (408022) : All workers exited. Exiting... (0)
Nov 25 12:16:43 np0005535469 systemd[1]: libpod-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135.scope: Deactivated successfully.
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.039 254096 INFO nova.virt.libvirt.driver [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deleting instance files /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565_del#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.040 254096 INFO nova.virt.libvirt.driver [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deletion of /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565_del complete#033[00m
Nov 25 12:16:43 np0005535469 podman[409944]: 2025-11-25 17:16:43.04318149 +0000 UTC m=+0.057439935 container died f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:16:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135-userdata-shm.mount: Deactivated successfully.
Nov 25 12:16:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bd0817bcd731ce4c188b30a692b0672c775002a3b0968cf2b0f2382fce690850-merged.mount: Deactivated successfully.
Nov 25 12:16:43 np0005535469 podman[409944]: 2025-11-25 17:16:43.07077976 +0000 UTC m=+0.085038225 container cleanup f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:16:43 np0005535469 systemd[1]: libpod-conmon-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135.scope: Deactivated successfully.
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.119 254096 INFO nova.compute.manager [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.120 254096 DEBUG oslo.service.loopingcall [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.120 254096 DEBUG nova.compute.manager [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.120 254096 DEBUG nova.network.neutron [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:16:43 np0005535469 podman[409972]: 2025-11-25 17:16:43.129965282 +0000 UTC m=+0.039888858 container remove f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29268928-7bb4-4e95-9737-9e31166e6f39]: (4, ('Tue Nov 25 05:16:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 (f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135)\nf960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135\nTue Nov 25 05:16:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 (f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135)\nf960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b34f69cf-233a-43f1-858d-a9f71d070d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:43 np0005535469 kernel: tapc57073ad-80: left promiscuous mode
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.156 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5d3774-254d-44eb-bf9f-5e011ad49047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.175 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2780a5-30ae-4157-9225-d861794b6164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.176 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78f54752-94eb-43a7-a3d2-e16c1cfd7dc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.197 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc462a4d-cde8-4850-a6e7-69054a6ed359]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737518, 'reachable_time': 27575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409987, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.199 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:16:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.199 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5f321d4a-72bf-4a6b-a678-4145a67d0c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.613 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.613 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.613 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-unplugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.616 254096 WARNING nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.709 254096 DEBUG nova.network.neutron [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated VIF entry in instance network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.709 254096 DEBUG nova.network.neutron [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:16:43 np0005535469 systemd[1]: run-netns-ovnmeta\x2dc57073ad\x2d8c41\x2d459b\x2d9402\x2dc367011860c7.mount: Deactivated successfully.
Nov 25 12:16:43 np0005535469 nova_compute[254092]: 2025-11-25 17:16:43.727 254096 DEBUG oslo_concurrency.lockutils [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:16:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-unplugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 WARNING nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-deleted-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 INFO nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Neutron deleted interface cd75615e-b80b-4685-b424-2c54f7fdbde8; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 DEBUG nova.network.neutron [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.512 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Detach interface failed, port_id=cd75615e-b80b-4685-b424-2c54f7fdbde8, reason: Instance cab3a333-1f68-435b-b6cb-a508755c2565 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.573 254096 DEBUG nova.network.neutron [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.588 254096 INFO nova.compute.manager [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 1.47 seconds to deallocate network for instance.#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.638 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.639 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.696 254096 DEBUG oslo_concurrency.processutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.815 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.816 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.816 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:44 np0005535469 nova_compute[254092]: 2025-11-25 17:16:44.816 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:16:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Nov 25 12:16:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:16:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854538254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.130 254096 DEBUG oslo_concurrency.processutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.137 254096 DEBUG nova.compute.provider_tree [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.160 254096 DEBUG nova.scheduler.client.report [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.179 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.205 254096 INFO nova.scheduler.client.report [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance cab3a333-1f68-435b-b6cb-a508755c2565#033[00m
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.252 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:16:45 np0005535469 nova_compute[254092]: 2025-11-25 17:16:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:46 np0005535469 nova_compute[254092]: 2025-11-25 17:16:46.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:46 np0005535469 nova_compute[254092]: 2025-11-25 17:16:46.600 254096 DEBUG nova.compute.manager [req-a72a272b-159a-4a45-b40d-3c8ee2e34359 req-66d0cd3e-8702-425b-a5be-b5b16563f244 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-deleted-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:16:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 7.1 KiB/s wr, 57 op/s
Nov 25 12:16:47 np0005535469 nova_compute[254092]: 2025-11-25 17:16:47.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:48 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:16:48.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:16:48 np0005535469 nova_compute[254092]: 2025-11-25 17:16:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:48 np0005535469 nova_compute[254092]: 2025-11-25 17:16:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:16:48 np0005535469 nova_compute[254092]: 2025-11-25 17:16:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:16:48 np0005535469 nova_compute[254092]: 2025-11-25 17:16:48.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:16:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 7.0 KiB/s wr, 50 op/s
Nov 25 12:16:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:50 np0005535469 nova_compute[254092]: 2025-11-25 17:16:50.625 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090995.623302, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:16:50 np0005535469 nova_compute[254092]: 2025-11-25 17:16:50.625 254096 INFO nova.compute.manager [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:16:50 np0005535469 nova_compute[254092]: 2025-11-25 17:16:50.659 254096 DEBUG nova.compute.manager [None req-20073eca-0183-4492-be5f-fcc3ae87ef38 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:16:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 7.0 KiB/s wr, 50 op/s
Nov 25 12:16:51 np0005535469 nova_compute[254092]: 2025-11-25 17:16:51.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:16:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:16:52 np0005535469 nova_compute[254092]: 2025-11-25 17:16:52.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 6.2 KiB/s wr, 34 op/s
Nov 25 12:16:53 np0005535469 nova_compute[254092]: 2025-11-25 17:16:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:16:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:16:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:16:55 np0005535469 nova_compute[254092]: 2025-11-25 17:16:55.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396234751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:16:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:16:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396234751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:16:55 np0005535469 nova_compute[254092]: 2025-11-25 17:16:55.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:56 np0005535469 nova_compute[254092]: 2025-11-25 17:16:56.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:16:57 np0005535469 nova_compute[254092]: 2025-11-25 17:16:57.684 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091002.6830397, cab3a333-1f68-435b-b6cb-a508755c2565 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:16:57 np0005535469 nova_compute[254092]: 2025-11-25 17:16:57.685 254096 INFO nova.compute.manager [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:16:57 np0005535469 nova_compute[254092]: 2025-11-25 17:16:57.713 254096 DEBUG nova.compute.manager [None req-451e0bf5-c53b-448b-8a50-8d2dc3cdfad8 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:16:57 np0005535469 nova_compute[254092]: 2025-11-25 17:16:57.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:16:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:16:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:01 np0005535469 nova_compute[254092]: 2025-11-25 17:17:01.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:02 np0005535469 nova_compute[254092]: 2025-11-25 17:17:02.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:03 np0005535469 podman[410012]: 2025-11-25 17:17:03.644395021 +0000 UTC m=+0.054331600 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:17:03 np0005535469 podman[410011]: 2025-11-25 17:17:03.646673862 +0000 UTC m=+0.062412969 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 12:17:03 np0005535469 podman[410013]: 2025-11-25 17:17:03.677730078 +0000 UTC m=+0.081554691 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 12:17:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:06 np0005535469 nova_compute[254092]: 2025-11-25 17:17:06.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:17:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:07 np0005535469 nova_compute[254092]: 2025-11-25 17:17:07.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:17:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.123999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029124028, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1471, "num_deletes": 250, "total_data_size": 2338260, "memory_usage": 2371624, "flush_reason": "Manual Compaction"}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029133476, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1360802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57102, "largest_seqno": 58572, "table_properties": {"data_size": 1355684, "index_size": 2385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13394, "raw_average_key_size": 20, "raw_value_size": 1344520, "raw_average_value_size": 2081, "num_data_blocks": 109, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090875, "oldest_key_time": 1764090875, "file_creation_time": 1764091029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 9511 microseconds, and 3714 cpu microseconds.
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.133510) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1360802 bytes OK
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.133524) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136194) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136207) EVENT_LOG_v1 {"time_micros": 1764091029136202, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2331793, prev total WAL file size 2331793, number of live WAL files 2.
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323630' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1328KB)], [131(10176KB)]
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029136917, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11781872, "oldest_snapshot_seqno": -1}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7858 keys, 9413788 bytes, temperature: kUnknown
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029192026, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9413788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9364473, "index_size": 28594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 205346, "raw_average_key_size": 26, "raw_value_size": 9227258, "raw_average_value_size": 1174, "num_data_blocks": 1116, "num_entries": 7858, "num_filter_entries": 7858, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.192273) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9413788 bytes
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.193614) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.9 rd, 170.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.9 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.6) write-amplify(6.9) OK, records in: 8299, records dropped: 441 output_compression: NoCompression
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.193631) EVENT_LOG_v1 {"time_micros": 1764091029193623, "job": 80, "event": "compaction_finished", "compaction_time_micros": 55085, "compaction_time_cpu_micros": 23941, "output_level": 6, "num_output_files": 1, "total_output_size": 9413788, "num_input_records": 8299, "num_output_records": 7858, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029194190, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029196314, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f3a34970-2565-4173-872f-3be1704826b5 does not exist
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 191f69bf-3462-4d21-a7b9-d2652e845f5b does not exist
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 96fd0942-8196-4dda-a999-435ccf4dd088 does not exist
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:17:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:17:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:11 np0005535469 nova_compute[254092]: 2025-11-25 17:17:11.092 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:17:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:17:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:17:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.220460777 +0000 UTC m=+0.054343790 container create 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:17:11 np0005535469 systemd[1]: Started libpod-conmon-4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a.scope.
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.191152709 +0000 UTC m=+0.025035742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:17:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.334578583 +0000 UTC m=+0.168461596 container init 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.345214552 +0000 UTC m=+0.179097565 container start 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.349098048 +0000 UTC m=+0.182981181 container attach 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:17:11 np0005535469 clever_easley[410362]: 167 167
Nov 25 12:17:11 np0005535469 systemd[1]: libpod-4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a.scope: Deactivated successfully.
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.353186109 +0000 UTC m=+0.187069132 container died 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 12:17:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4098c339792f0097d66fee199086caf1118f264fb5bf58ff80fb64baef081632-merged.mount: Deactivated successfully.
Nov 25 12:17:11 np0005535469 podman[410346]: 2025-11-25 17:17:11.402111912 +0000 UTC m=+0.235994925 container remove 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:17:11 np0005535469 systemd[1]: libpod-conmon-4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a.scope: Deactivated successfully.
Nov 25 12:17:11 np0005535469 podman[410385]: 2025-11-25 17:17:11.624577797 +0000 UTC m=+0.060087967 container create 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:17:11 np0005535469 systemd[1]: Started libpod-conmon-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope.
Nov 25 12:17:11 np0005535469 podman[410385]: 2025-11-25 17:17:11.598909318 +0000 UTC m=+0.034419558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:17:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:11 np0005535469 podman[410385]: 2025-11-25 17:17:11.718148844 +0000 UTC m=+0.153659004 container init 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:17:11 np0005535469 podman[410385]: 2025-11-25 17:17:11.727483368 +0000 UTC m=+0.162993498 container start 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:17:11 np0005535469 podman[410385]: 2025-11-25 17:17:11.731179328 +0000 UTC m=+0.166689458 container attach 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:17:12 np0005535469 nova_compute[254092]: 2025-11-25 17:17:12.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:17:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:12 np0005535469 tender_heisenberg[410401]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:17:12 np0005535469 tender_heisenberg[410401]: --> relative data size: 1.0
Nov 25 12:17:12 np0005535469 tender_heisenberg[410401]: --> All data devices are unavailable
Nov 25 12:17:12 np0005535469 systemd[1]: libpod-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope: Deactivated successfully.
Nov 25 12:17:12 np0005535469 podman[410385]: 2025-11-25 17:17:12.956491611 +0000 UTC m=+1.392001751 container died 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 12:17:12 np0005535469 systemd[1]: libpod-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope: Consumed 1.188s CPU time.
Nov 25 12:17:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f-merged.mount: Deactivated successfully.
Nov 25 12:17:13 np0005535469 podman[410385]: 2025-11-25 17:17:13.522334973 +0000 UTC m=+1.957845103 container remove 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:17:13 np0005535469 systemd[1]: libpod-conmon-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope: Deactivated successfully.
Nov 25 12:17:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:17:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:17:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:17:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.334966012 +0000 UTC m=+0.031448488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.465424932 +0000 UTC m=+0.161907358 container create 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:17:14 np0005535469 systemd[1]: Started libpod-conmon-895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816.scope.
Nov 25 12:17:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.582289164 +0000 UTC m=+0.278771690 container init 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.594060614 +0000 UTC m=+0.290543080 container start 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:17:14 np0005535469 beautiful_brahmagupta[410599]: 167 167
Nov 25 12:17:14 np0005535469 systemd[1]: libpod-895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816.scope: Deactivated successfully.
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.626318562 +0000 UTC m=+0.322801028 container attach 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.626876467 +0000 UTC m=+0.323358903 container died 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:17:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-faa2d5f466162a94993cab67db2b84414dd1e83deba03de099502e750531e56e-merged.mount: Deactivated successfully.
Nov 25 12:17:14 np0005535469 podman[410583]: 2025-11-25 17:17:14.770773094 +0000 UTC m=+0.467255540 container remove 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:17:14 np0005535469 systemd[1]: libpod-conmon-895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816.scope: Deactivated successfully.
Nov 25 12:17:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:15 np0005535469 podman[410626]: 2025-11-25 17:17:15.009671187 +0000 UTC m=+0.097577507 container create 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:17:15 np0005535469 podman[410626]: 2025-11-25 17:17:14.983175756 +0000 UTC m=+0.071082116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:17:15 np0005535469 systemd[1]: Started libpod-conmon-753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41.scope.
Nov 25 12:17:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:15 np0005535469 podman[410626]: 2025-11-25 17:17:15.160832121 +0000 UTC m=+0.248738501 container init 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:17:15 np0005535469 podman[410626]: 2025-11-25 17:17:15.178925464 +0000 UTC m=+0.266831784 container start 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 12:17:15 np0005535469 podman[410626]: 2025-11-25 17:17:15.192066932 +0000 UTC m=+0.279973262 container attach 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]: {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:    "0": [
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:        {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "devices": [
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "/dev/loop3"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            ],
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_name": "ceph_lv0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_size": "21470642176",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "name": "ceph_lv0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "tags": {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cluster_name": "ceph",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.crush_device_class": "",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.encrypted": "0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osd_id": "0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.type": "block",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.vdo": "0"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            },
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "type": "block",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "vg_name": "ceph_vg0"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:        }
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:    ],
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:    "1": [
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:        {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "devices": [
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "/dev/loop4"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            ],
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_name": "ceph_lv1",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_size": "21470642176",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "name": "ceph_lv1",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "tags": {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cluster_name": "ceph",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.crush_device_class": "",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.encrypted": "0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osd_id": "1",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.type": "block",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.vdo": "0"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            },
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "type": "block",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "vg_name": "ceph_vg1"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:        }
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:    ],
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:    "2": [
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:        {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "devices": [
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "/dev/loop5"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            ],
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_name": "ceph_lv2",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_size": "21470642176",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "name": "ceph_lv2",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "tags": {
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.cluster_name": "ceph",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.crush_device_class": "",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.encrypted": "0",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osd_id": "2",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.type": "block",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:                "ceph.vdo": "0"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            },
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "type": "block",
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:            "vg_name": "ceph_vg2"
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:        }
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]:    ]
Nov 25 12:17:15 np0005535469 jovial_yalow[410643]: }
Nov 25 12:17:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:15.999 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:01:70 2001:db8:0:1:f816:3eff:fe96:170 2001:db8::f816:3eff:fe96:170'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe96:170/64 2001:db8::f816:3eff:fe96:170/64', 'neutron:device_id': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7624b22b-8369-4a12-940b-9f95890a4040) old=Port_Binding(mac=['fa:16:3e:96:01:70 2001:db8::f816:3eff:fe96:170'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe96:170/64', 'neutron:device_id': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:17:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:16.003 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7624b22b-8369-4a12-940b-9f95890a4040 in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 updated#033[00m
Nov 25 12:17:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:16.004 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:17:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:16.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc6abeb-72f0-453c-8ac4-4c08a062d4cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:16 np0005535469 systemd[1]: libpod-753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41.scope: Deactivated successfully.
Nov 25 12:17:16 np0005535469 podman[410626]: 2025-11-25 17:17:16.01823283 +0000 UTC m=+1.106139150 container died 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:17:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f-merged.mount: Deactivated successfully.
Nov 25 12:17:16 np0005535469 podman[410626]: 2025-11-25 17:17:16.067369686 +0000 UTC m=+1.155276006 container remove 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:17:16 np0005535469 systemd[1]: libpod-conmon-753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41.scope: Deactivated successfully.
Nov 25 12:17:16 np0005535469 nova_compute[254092]: 2025-11-25 17:17:16.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:16 np0005535469 podman[410808]: 2025-11-25 17:17:16.690007294 +0000 UTC m=+0.025219988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:17:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:17 np0005535469 podman[410808]: 2025-11-25 17:17:17.002592253 +0000 UTC m=+0.337804837 container create 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:17:17 np0005535469 systemd[1]: Started libpod-conmon-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope.
Nov 25 12:17:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:17 np0005535469 podman[410808]: 2025-11-25 17:17:17.213295689 +0000 UTC m=+0.548508293 container init 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 12:17:17 np0005535469 podman[410808]: 2025-11-25 17:17:17.220425263 +0000 UTC m=+0.555637847 container start 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:17:17 np0005535469 zen_dhawan[410824]: 167 167
Nov 25 12:17:17 np0005535469 systemd[1]: libpod-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope: Deactivated successfully.
Nov 25 12:17:17 np0005535469 podman[410808]: 2025-11-25 17:17:17.227765742 +0000 UTC m=+0.562978396 container attach 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:17:17 np0005535469 conmon[410824]: conmon 1992988ad09e758b6e8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope/container/memory.events
Nov 25 12:17:17 np0005535469 podman[410808]: 2025-11-25 17:17:17.22878265 +0000 UTC m=+0.563995234 container died 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:17:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-84455048f18bf1fbf698ff4f0edef84cff8eafa06f670baee7ec0568107e85cd-merged.mount: Deactivated successfully.
Nov 25 12:17:17 np0005535469 podman[410808]: 2025-11-25 17:17:17.263890585 +0000 UTC m=+0.599103169 container remove 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 12:17:17 np0005535469 systemd[1]: libpod-conmon-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope: Deactivated successfully.
Nov 25 12:17:17 np0005535469 podman[410847]: 2025-11-25 17:17:17.443835673 +0000 UTC m=+0.051613806 container create f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:17:17 np0005535469 systemd[1]: Started libpod-conmon-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope.
Nov 25 12:17:17 np0005535469 podman[410847]: 2025-11-25 17:17:17.417717102 +0000 UTC m=+0.025495255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:17:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:17 np0005535469 podman[410847]: 2025-11-25 17:17:17.559450881 +0000 UTC m=+0.167229034 container init f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:17:17 np0005535469 podman[410847]: 2025-11-25 17:17:17.571119208 +0000 UTC m=+0.178897351 container start f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:17:17 np0005535469 podman[410847]: 2025-11-25 17:17:17.574542261 +0000 UTC m=+0.182320404 container attach f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:17:17 np0005535469 nova_compute[254092]: 2025-11-25 17:17:17.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]: {
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "osd_id": 1,
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "type": "bluestore"
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:    },
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "osd_id": 2,
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "type": "bluestore"
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:    },
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "osd_id": 0,
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:        "type": "bluestore"
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]:    }
Nov 25 12:17:18 np0005535469 wonderful_greider[410863]: }
Nov 25 12:17:18 np0005535469 systemd[1]: libpod-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope: Deactivated successfully.
Nov 25 12:17:18 np0005535469 systemd[1]: libpod-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope: Consumed 1.194s CPU time.
Nov 25 12:17:18 np0005535469 podman[410847]: 2025-11-25 17:17:18.757991474 +0000 UTC m=+1.365769657 container died f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:17:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413-merged.mount: Deactivated successfully.
Nov 25 12:17:18 np0005535469 podman[410847]: 2025-11-25 17:17:18.831817573 +0000 UTC m=+1.439595716 container remove f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:17:18 np0005535469 systemd[1]: libpod-conmon-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope: Deactivated successfully.
Nov 25 12:17:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:17:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:17:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:17:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:17:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5e4bb086-ac9c-486b-97de-49e5da7579cd does not exist
Nov 25 12:17:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1cf3d535-71df-4b06-819c-0b1738041a9a does not exist
Nov 25 12:17:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:17:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:17:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:21 np0005535469 nova_compute[254092]: 2025-11-25 17:17:21.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:22 np0005535469 nova_compute[254092]: 2025-11-25 17:17:22.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.368 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.368 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.400 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.483 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.483 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.493 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.493 254096 INFO nova.compute.claims [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:17:24 np0005535469 nova_compute[254092]: 2025-11-25 17:17:24.644 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:17:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:17:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545645781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.067 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.074 254096 DEBUG nova.compute.provider_tree [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.099 254096 DEBUG nova.scheduler.client.report [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.134 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.136 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.225 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.226 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.249 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.273 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.391 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.394 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.394 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Creating image(s)#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.424 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.448 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.477 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.482 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.559 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.561 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.561 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.562 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.586 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.590 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.936 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:25 np0005535469 nova_compute[254092]: 2025-11-25 17:17:25.970 254096 DEBUG nova.policy [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.013 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.125 254096 DEBUG nova.objects.instance [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.140 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.141 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Ensure instance console log exists: /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.141 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.142 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:26 np0005535469 nova_compute[254092]: 2025-11-25 17:17:26.142 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 57 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 845 KiB/s wr, 1 op/s
Nov 25 12:17:27 np0005535469 nova_compute[254092]: 2025-11-25 17:17:27.200 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully created port: a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:17:27 np0005535469 nova_compute[254092]: 2025-11-25 17:17:27.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 57 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 845 KiB/s wr, 1 op/s
Nov 25 12:17:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:29 np0005535469 nova_compute[254092]: 2025-11-25 17:17:29.538 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully created port: fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:17:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 88 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.128 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully updated port: a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.253 254096 DEBUG nova.compute.manager [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.253 254096 DEBUG nova.compute.manager [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.253 254096 DEBUG oslo_concurrency.lockutils [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.254 254096 DEBUG oslo_concurrency.lockutils [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.254 254096 DEBUG nova.network.neutron [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.443 254096 DEBUG nova.network.neutron [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.913 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully updated port: fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.931 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:17:31 np0005535469 nova_compute[254092]: 2025-11-25 17:17:31.998 254096 DEBUG nova.network.neutron [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:17:32 np0005535469 nova_compute[254092]: 2025-11-25 17:17:32.024 254096 DEBUG oslo_concurrency.lockutils [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:17:32 np0005535469 nova_compute[254092]: 2025-11-25 17:17:32.026 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:17:32 np0005535469 nova_compute[254092]: 2025-11-25 17:17:32.026 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:17:32 np0005535469 nova_compute[254092]: 2025-11-25 17:17:32.174 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:17:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:17:32 np0005535469 nova_compute[254092]: 2025-11-25 17:17:32.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:33 np0005535469 nova_compute[254092]: 2025-11-25 17:17:33.358 254096 DEBUG nova.compute.manager [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:33 np0005535469 nova_compute[254092]: 2025-11-25 17:17:33.359 254096 DEBUG nova.compute.manager [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:17:33 np0005535469 nova_compute[254092]: 2025-11-25 17:17:33.359 254096 DEBUG oslo_concurrency.lockutils [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:17:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:33Z|01495|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 25 12:17:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.452 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.468 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.468 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance network_info: |[{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.469 254096 DEBUG oslo_concurrency.lockutils [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.469 254096 DEBUG nova.network.neutron [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.474 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start _get_guest_xml network_info=[{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.480 254096 WARNING nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.489 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.490 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.493 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.494 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.494 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.494 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.495 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.495 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.497 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.497 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.497 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.498 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:17:34 np0005535469 nova_compute[254092]: 2025-11-25 17:17:34.501 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:34 np0005535469 podman[411150]: 2025-11-25 17:17:34.679784562 +0000 UTC m=+0.085249731 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:17:34 np0005535469 podman[411151]: 2025-11-25 17:17:34.687021889 +0000 UTC m=+0.080124682 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 12:17:34 np0005535469 podman[411152]: 2025-11-25 17:17:34.715386161 +0000 UTC m=+0.111522007 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 12:17:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:17:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:17:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260520596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.001 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.036 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.040 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:17:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/886301348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.488 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.491 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.491 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.492 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.494 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.494 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.495 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.496 254096 DEBUG nova.objects.instance [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.521 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <uuid>e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac</uuid>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <name>instance-0000008d</name>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1474914346</nova:name>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:17:34</nova:creationTime>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:port uuid="a3c9174e-c8c3-4b9f-b87f-4d6244324c9b">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <nova:port uuid="fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feac:9cd4" ipVersion="6"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feac:9cd4" ipVersion="6"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <entry name="serial">e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac</entry>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <entry name="uuid">e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac</entry>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ef:d8:27"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <target dev="tapa3c9174e-c8"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:ac:9c:d4"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <target dev="tapfdd7f4f6-80"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/console.log" append="off"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:17:35 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:17:35 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:17:35 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:17:35 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.524 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Preparing to wait for external event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.524 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Preparing to wait for external event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.526 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.526 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.526 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.527 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.528 254096 DEBUG os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.533 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c9174e-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.533 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3c9174e-c8, col_values=(('external_ids', {'iface-id': 'a3c9174e-c8c3-4b9f-b87f-4d6244324c9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:d8:27', 'vm-uuid': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 NetworkManager[48891]: <info>  [1764091055.5370] manager: (tapa3c9174e-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.545 254096 INFO os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8')#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.547 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.547 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.549 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.549 254096 DEBUG os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.551 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.551 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.555 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdd7f4f6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.556 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdd7f4f6-80, col_values=(('external_ids', {'iface-id': 'fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:9c:d4', 'vm-uuid': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:35 np0005535469 NetworkManager[48891]: <info>  [1764091055.5591] manager: (tapfdd7f4f6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.566 254096 INFO os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80')#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.619 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.620 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.620 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:ef:d8:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.620 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:ac:9c:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.621 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Using config drive#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.642 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.923 254096 DEBUG nova.network.neutron [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated VIF entry in instance network info cache for port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.924 254096 DEBUG nova.network.neutron [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:17:35 np0005535469 nova_compute[254092]: 2025-11-25 17:17:35.947 254096 DEBUG oslo_concurrency.lockutils [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.336 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Creating config drive at /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.341 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80o3bp9w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.485 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80o3bp9w" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.511 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.515 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.560 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.760 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.761 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deleting local config drive /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config because it was imported into RBD.#033[00m
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.8185] manager: (tapa3c9174e-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Nov 25 12:17:36 np0005535469 kernel: tapa3c9174e-c8: entered promiscuous mode
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01496|binding|INFO|Claiming lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for this chassis.
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01497|binding|INFO|a3c9174e-c8c3-4b9f-b87f-4d6244324c9b: Claiming fa:16:3e:ef:d8:27 10.100.0.8
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.8354] manager: (tapfdd7f4f6-80): new Tun device (/org/freedesktop/NetworkManager/Devices/618)
Nov 25 12:17:36 np0005535469 kernel: tapfdd7f4f6-80: entered promiscuous mode
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.840 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:d8:27 10.100.0.8'], port_security=['fa:16:3e:ef:d8:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.842 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e bound to our chassis#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.843 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a702335-301a-4b90-b82e-e616a31e5b3e#033[00m
Nov 25 12:17:36 np0005535469 systemd-udevd[411352]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:17:36 np0005535469 systemd-udevd[411351]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.856 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99d34fe8-07fc-4f62-8509-7d55daf49fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.857 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a702335-31 in ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.860 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a702335-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.860 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60e208dd-806c-4834-be66-41d9d70254dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.8622] device (tapa3c9174e-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.861 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f25fcd4e-2408-4009-b986-90967642153f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.8633] device (tapfdd7f4f6-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.8640] device (tapa3c9174e-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.8644] device (tapfdd7f4f6-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.876 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[80a66ade-8862-494f-87d7-05f4055a9d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 systemd-machined[216343]: New machine qemu-175-instance-0000008d.
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.903 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce872b4-9446-45a2-8f1a-0db8ea0e2a84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01498|binding|INFO|Claiming lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for this chassis.
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01499|binding|INFO|fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d: Claiming fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:36 np0005535469 systemd[1]: Started Virtual Machine qemu-175-instance-0000008d.
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.924 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], port_security=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feac:9cd4/64 2001:db8::f816:3eff:feac:9cd4/64', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01500|binding|INFO|Setting lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b ovn-installed in OVS
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01501|binding|INFO|Setting lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b up in Southbound
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.934 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a07f4319-1792-4a22-a86c-af53d36d1b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01502|binding|INFO|Setting lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d ovn-installed in OVS
Nov 25 12:17:36 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:36Z|01503|binding|INFO|Setting lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d up in Southbound
Nov 25 12:17:36 np0005535469 nova_compute[254092]: 2025-11-25 17:17:36.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:36 np0005535469 NetworkManager[48891]: <info>  [1764091056.9435] manager: (tap4a702335-30): new Veth device (/org/freedesktop/NetworkManager/Devices/619)
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a45092-f6eb-4d13-90ef-9e486e63b902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.986 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de7cbe0b-fba0-40d3-8108-866680f20da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.989 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1af36660-a00f-4881-bbab-a624dcc6a1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 NetworkManager[48891]: <info>  [1764091057.0131] device (tap4a702335-30): carrier: link connected
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.024 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e8903e-cdd2-4104-8419-0374b4f4adb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c19be6d-8948-4273-8ef1-29b2a6c13e0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411388, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.059 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7749bd8d-4803-48ce-b165-2ca820e65281]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:4bf0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751460, 'tstamp': 751460}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411389, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[753a4ccb-2f28-43ad-a147-79d8f533db6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411390, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28dbcc2e-ca68-4c52-bcb4-ecb90a1966b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9b40c6-392b-4ecd-abdb-15d5a87b6dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a702335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:37 np0005535469 NetworkManager[48891]: <info>  [1764091057.1982] manager: (tap4a702335-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Nov 25 12:17:37 np0005535469 kernel: tap4a702335-30: entered promiscuous mode
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.200 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a702335-30, col_values=(('external_ids', {'iface-id': '1cef18fb-8ae8-44d1-93c6-659b405ed9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:37Z|01504|binding|INFO|Releasing lport 1cef18fb-8ae8-44d1-93c6-659b405ed9b8 from this chassis (sb_readonly=0)
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.217 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a702335-301a-4b90-b82e-e616a31e5b3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a702335-301a-4b90-b82e-e616a31e5b3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.219 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[21f88d87-bbe2-411f-aa1c-4c8569ecee4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.221 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/4a702335-301a-4b90-b82e-e616a31e5b3e.pid.haproxy
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.222 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'env', 'PROCESS_TAG=haproxy-4a702335-301a-4b90-b82e-e616a31e5b3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a702335-301a-4b90-b82e-e616a31e5b3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.315 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091057.3145525, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.315 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Started (Lifecycle Event)#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.338 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.343 254096 DEBUG nova.compute.manager [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.343 254096 DEBUG oslo_concurrency.lockutils [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.344 254096 DEBUG oslo_concurrency.lockutils [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.344 254096 DEBUG oslo_concurrency.lockutils [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.344 254096 DEBUG nova.compute.manager [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Processing event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.348 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091057.3153772, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.349 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.373 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.377 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:17:37 np0005535469 nova_compute[254092]: 2025-11-25 17:17:37.398 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:17:37 np0005535469 podman[411464]: 2025-11-25 17:17:37.631619099 +0000 UTC m=+0.067100118 container create 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:17:37 np0005535469 systemd[1]: Started libpod-conmon-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0.scope.
Nov 25 12:17:37 np0005535469 podman[411464]: 2025-11-25 17:17:37.595036293 +0000 UTC m=+0.030517412 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:17:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda96b984478ce4719be5239caee54b3e00b48097bfc3855698629f519ccf4eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:37 np0005535469 podman[411464]: 2025-11-25 17:17:37.743073183 +0000 UTC m=+0.178554232 container init 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 12:17:37 np0005535469 podman[411464]: 2025-11-25 17:17:37.754527284 +0000 UTC m=+0.190008313 container start 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:17:37 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : New worker (411485) forked
Nov 25 12:17:37 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : Loading success.
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.819 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.821 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.835 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[001ee9be-2a10-40bb-9dc4-7586b892f038]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.836 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51f47401-e1 in ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.839 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51f47401-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a9b6b6-8073-412e-9f42-9bd55e154632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.840 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5ab139-6e04-4010-8e4c-d3187fd24e69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.856 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcc5bb5-ed7d-4d60-8f41-bbc85a21f51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.873 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbc9929-1973-4947-b832-661eea3d770e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.905 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7f61d342-967f-431f-bdc2-263b91340768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.911 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9153f449-4ec8-4949-be64-07267f6a714e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 NetworkManager[48891]: <info>  [1764091057.9133] manager: (tap51f47401-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/621)
Nov 25 12:17:37 np0005535469 systemd-udevd[411377]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.949 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ecb5cb-0de9-4de4-8e0b-6e76d0bd8f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.953 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ad7c82-01f4-4c18-ba02-a1cb37d6ef96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:37 np0005535469 NetworkManager[48891]: <info>  [1764091057.9803] device (tap51f47401-e0): carrier: link connected
Nov 25 12:17:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.988 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8d80b9d3-1905-4acd-90ed-19274693b534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce18e651-8c9e-451f-a428-5b6370ca21e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411504, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.030 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[416f9f31-7a4e-4964-bfd9-148dc83046a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:170'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751557, 'tstamp': 751557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411505, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[888a1bb1-a3c0-425e-962a-a919cd7d75af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411506, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa118529-5608-4f9c-805c-f36acc70f6e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.130 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b223b204-0930-4811-91ef-9f7f0bfb3ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.133 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.133 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.134 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f47401-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:38 np0005535469 nova_compute[254092]: 2025-11-25 17:17:38.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:38 np0005535469 kernel: tap51f47401-e0: entered promiscuous mode
Nov 25 12:17:38 np0005535469 NetworkManager[48891]: <info>  [1764091058.1384] manager: (tap51f47401-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Nov 25 12:17:38 np0005535469 nova_compute[254092]: 2025-11-25 17:17:38.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.143 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f47401-e0, col_values=(('external_ids', {'iface-id': '7624b22b-8369-4a12-940b-9f95890a4040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:38 np0005535469 nova_compute[254092]: 2025-11-25 17:17:38.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:38Z|01505|binding|INFO|Releasing lport 7624b22b-8369-4a12-940b-9f95890a4040 from this chassis (sb_readonly=0)
Nov 25 12:17:38 np0005535469 nova_compute[254092]: 2025-11-25 17:17:38.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.146 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51f47401-ed2b-45e2-aea1-5cbbd48e5245.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51f47401-ed2b-45e2-aea1-5cbbd48e5245.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.147 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3ae38f-01a3-41d2-919e-3947c8e124d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.148 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/51f47401-ed2b-45e2-aea1-5cbbd48e5245.pid.haproxy
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:17:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.149 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'env', 'PROCESS_TAG=haproxy-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51f47401-ed2b-45e2-aea1-5cbbd48e5245.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:17:38 np0005535469 nova_compute[254092]: 2025-11-25 17:17:38.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:38 np0005535469 podman[411537]: 2025-11-25 17:17:38.536956571 +0000 UTC m=+0.049497908 container create cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 12:17:38 np0005535469 systemd[1]: Started libpod-conmon-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0.scope.
Nov 25 12:17:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:17:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f061dc6daf905b054161e2e14d141c86f73e0689fee8c1b141f9292e1f077fc2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:17:38 np0005535469 podman[411537]: 2025-11-25 17:17:38.608336744 +0000 UTC m=+0.120878111 container init cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 12:17:38 np0005535469 podman[411537]: 2025-11-25 17:17:38.511410656 +0000 UTC m=+0.023952003 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:17:38 np0005535469 podman[411537]: 2025-11-25 17:17:38.614607795 +0000 UTC m=+0.127149142 container start cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 12:17:38 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : New worker (411558) forked
Nov 25 12:17:38 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : Loading success.
Nov 25 12:17:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 970 KiB/s wr, 25 op/s
Nov 25 12:17:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.588 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.589 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.590 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.591 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.591 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No event matching network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b in dict_keys([('network-vif-plugged', 'fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.592 254096 WARNING nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.592 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.593 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.594 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.594 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.594 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Processing event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.595 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.595 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.595 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.596 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.596 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.596 254096 WARNING nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.599 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.603 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091059.603369, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.604 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.607 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.615 254096 INFO nova.virt.libvirt.driver [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance spawned successfully.#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.617 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.620 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.625 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.639 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.640 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.640 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.641 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.642 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.642 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.648 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.698 254096 INFO nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 14.31 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.699 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.757 254096 INFO nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 15.31 seconds to build instance.#033[00m
Nov 25 12:17:39 np0005535469 nova_compute[254092]: 2025-11-25 17:17:39.770 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:17:40
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:17:40 np0005535469 nova_compute[254092]: 2025-11-25 17:17:40.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:17:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 982 KiB/s wr, 34 op/s
Nov 25 12:17:41 np0005535469 nova_compute[254092]: 2025-11-25 17:17:41.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 12 KiB/s wr, 37 op/s
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.524 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:43.949 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:43.952 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:17:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:17:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182407296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:17:43 np0005535469 nova_compute[254092]: 2025-11-25 17:17:43.979 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.051 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:17:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.306 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.307 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.308 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.308 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.383 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.384 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.384 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.452 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:17:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 12 KiB/s wr, 36 op/s
Nov 25 12:17:44 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:44Z|01506|binding|INFO|Releasing lport 1cef18fb-8ae8-44d1-93c6-659b405ed9b8 from this chassis (sb_readonly=0)
Nov 25 12:17:44 np0005535469 NetworkManager[48891]: <info>  [1764091064.9232] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Nov 25 12:17:44 np0005535469 NetworkManager[48891]: <info>  [1764091064.9244] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Nov 25 12:17:44 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:44Z|01507|binding|INFO|Releasing lport 7624b22b-8369-4a12-940b-9f95890a4040 from this chassis (sb_readonly=0)
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:44 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:44Z|01508|binding|INFO|Releasing lport 1cef18fb-8ae8-44d1-93c6-659b405ed9b8 from this chassis (sb_readonly=0)
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:44 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:44Z|01509|binding|INFO|Releasing lport 7624b22b-8369-4a12-940b-9f95890a4040 from this chassis (sb_readonly=0)
Nov 25 12:17:44 np0005535469 nova_compute[254092]: 2025-11-25 17:17:44.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:17:44 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460267635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.013 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.020 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.035 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.050 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.051 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.287 254096 DEBUG nova.compute.manager [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.288 254096 DEBUG nova.compute.manager [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.288 254096 DEBUG oslo_concurrency.lockutils [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.289 254096 DEBUG oslo_concurrency.lockutils [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.289 254096 DEBUG nova.network.neutron [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:17:45 np0005535469 nova_compute[254092]: 2025-11-25 17:17:45.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:46 np0005535469 nova_compute[254092]: 2025-11-25 17:17:46.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:17:46 np0005535469 nova_compute[254092]: 2025-11-25 17:17:46.940 254096 DEBUG nova.network.neutron [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated VIF entry in instance network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:17:46 np0005535469 nova_compute[254092]: 2025-11-25 17:17:46.941 254096 DEBUG nova.network.neutron [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:17:46 np0005535469 nova_compute[254092]: 2025-11-25 17:17:46.960 254096 DEBUG oslo_concurrency.lockutils [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:17:47 np0005535469 nova_compute[254092]: 2025-11-25 17:17:47.046 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:47 np0005535469 nova_compute[254092]: 2025-11-25 17:17:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:17:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.877 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.878 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.879 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:17:49 np0005535469 nova_compute[254092]: 2025-11-25 17:17:49.879 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:17:49 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:17:49.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.302816) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070302870, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 604, "num_deletes": 251, "total_data_size": 659683, "memory_usage": 672072, "flush_reason": "Manual Compaction"}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070311269, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 653559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58573, "largest_seqno": 59176, "table_properties": {"data_size": 650267, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7643, "raw_average_key_size": 19, "raw_value_size": 643675, "raw_average_value_size": 1629, "num_data_blocks": 53, "num_entries": 395, "num_filter_entries": 395, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091029, "oldest_key_time": 1764091029, "file_creation_time": 1764091070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 8876 microseconds, and 5461 cpu microseconds.
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.311688) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 653559 bytes OK
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.311898) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.313719) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.313746) EVENT_LOG_v1 {"time_micros": 1764091070313738, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.313773) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 656380, prev total WAL file size 656380, number of live WAL files 2.
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.315492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(638KB)], [134(9193KB)]
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070315537, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 10067347, "oldest_snapshot_seqno": -1}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7739 keys, 8389251 bytes, temperature: kUnknown
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070379589, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 8389251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8341631, "index_size": 27155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 203562, "raw_average_key_size": 26, "raw_value_size": 8207398, "raw_average_value_size": 1060, "num_data_blocks": 1047, "num_entries": 7739, "num_filter_entries": 7739, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.379934) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 8389251 bytes
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.381355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 130.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(28.2) write-amplify(12.8) OK, records in: 8253, records dropped: 514 output_compression: NoCompression
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.381380) EVENT_LOG_v1 {"time_micros": 1764091070381368, "job": 82, "event": "compaction_finished", "compaction_time_micros": 64182, "compaction_time_cpu_micros": 22198, "output_level": 6, "num_output_files": 1, "total_output_size": 8389251, "num_input_records": 8253, "num_output_records": 7739, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070381666, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070383445, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.315355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:17:50 np0005535469 nova_compute[254092]: 2025-11-25 17:17:50.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:17:51 np0005535469 nova_compute[254092]: 2025-11-25 17:17:51.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:51 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:17:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:17:52 np0005535469 nova_compute[254092]: 2025-11-25 17:17:52.580 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:17:52 np0005535469 nova_compute[254092]: 2025-11-25 17:17:52.606 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:17:52 np0005535469 nova_compute[254092]: 2025-11-25 17:17:52.607 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:17:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 25 12:17:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:54Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:d8:27 10.100.0.8
Nov 25 12:17:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:17:54Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:d8:27 10.100.0.8
Nov 25 12:17:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:17:54 np0005535469 nova_compute[254092]: 2025-11-25 17:17:54.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 37 op/s
Nov 25 12:17:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:17:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129759022' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:17:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:17:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129759022' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:17:55 np0005535469 nova_compute[254092]: 2025-11-25 17:17:55.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:17:55 np0005535469 nova_compute[254092]: 2025-11-25 17:17:55.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:56 np0005535469 nova_compute[254092]: 2025-11-25 17:17:56.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:17:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 25 12:17:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:17:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:00 np0005535469 nova_compute[254092]: 2025-11-25 17:18:00.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:18:01 np0005535469 nova_compute[254092]: 2025-11-25 17:18:01.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:18:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.451 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.452 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.480 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.561 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.561 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.574 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.574 254096 INFO nova.compute.claims [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:18:04 np0005535469 nova_compute[254092]: 2025-11-25 17:18:04.696 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:18:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:18:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492810995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.204 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.210 254096 DEBUG nova.compute.provider_tree [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.224 254096 DEBUG nova.scheduler.client.report [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.252 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.253 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.296 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.297 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.318 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.342 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.428 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.429 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.430 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Creating image(s)#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.463 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.499 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.533 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.538 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.643 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.644 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.645 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.645 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:05 np0005535469 podman[411692]: 2025-11-25 17:18:05.666140047 +0000 UTC m=+0.068059943 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.682 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.687 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:05 np0005535469 podman[411691]: 2025-11-25 17:18:05.691245231 +0000 UTC m=+0.086301761 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:18:05 np0005535469 podman[411693]: 2025-11-25 17:18:05.712958042 +0000 UTC m=+0.107538088 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:18:05 np0005535469 nova_compute[254092]: 2025-11-25 17:18:05.994 254096 DEBUG nova.policy [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.003 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.076 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.235 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.244 254096 DEBUG nova.objects.instance [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ce224dc-5e5e-4105-bc00-9953c57babd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.255 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.256 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Ensure instance console log exists: /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.256 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.256 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.257 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:06 np0005535469 nova_compute[254092]: 2025-11-25 17:18:06.704 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully created port: 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:18:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 126 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Nov 25 12:18:07 np0005535469 nova_compute[254092]: 2025-11-25 17:18:07.934 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully created port: 479e8c0a-f171-45a0-b7de-778cf1b728bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:18:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 126 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 403 KiB/s wr, 1 op/s
Nov 25 12:18:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.470 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully updated port: 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.587 254096 DEBUG nova.compute.manager [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.588 254096 DEBUG nova.compute.manager [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.589 254096 DEBUG oslo_concurrency.lockutils [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.589 254096 DEBUG oslo_concurrency.lockutils [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.590 254096 DEBUG nova.network.neutron [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:18:09 np0005535469 nova_compute[254092]: 2025-11-25 17:18:09.859 254096 DEBUG nova.network.neutron [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.433 254096 DEBUG nova.network.neutron [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.454 254096 DEBUG oslo_concurrency.lockutils [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.609 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully updated port: 479e8c0a-f171-45a0-b7de-778cf1b728bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.627 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.628 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.628 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:18:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 163 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 27 op/s
Nov 25 12:18:10 np0005535469 nova_compute[254092]: 2025-11-25 17:18:10.987 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:18:11 np0005535469 nova_compute[254092]: 2025-11-25 17:18:11.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:11 np0005535469 nova_compute[254092]: 2025-11-25 17:18:11.704 254096 DEBUG nova.compute.manager [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:11 np0005535469 nova_compute[254092]: 2025-11-25 17:18:11.704 254096 DEBUG nova.compute.manager [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-479e8c0a-f171-45a0-b7de-778cf1b728bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:18:11 np0005535469 nova_compute[254092]: 2025-11-25 17:18:11.705 254096 DEBUG oslo_concurrency.lockutils [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:18:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.235 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.255 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.255 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance network_info: |[{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.256 254096 DEBUG oslo_concurrency.lockutils [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.256 254096 DEBUG nova.network.neutron [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 479e8c0a-f171-45a0-b7de-778cf1b728bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.261 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start _get_guest_xml network_info=[{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.268 254096 WARNING nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.282 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.283 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.288 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.289 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.290 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.291 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.292 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.293 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.294 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.294 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.295 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.296 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.296 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.297 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.298 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.298 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.305 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:13.655 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:18:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916719126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.818 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.848 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:13 np0005535469 nova_compute[254092]: 2025-11-25 17:18:13.853 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:18:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40798532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.335 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.338 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.339 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.341 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.342 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.343 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.344 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.346 254096 DEBUG nova.objects.instance [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ce224dc-5e5e-4105-bc00-9953c57babd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.520 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <uuid>1ce224dc-5e5e-4105-bc00-9953c57babd7</uuid>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <name>instance-0000008e</name>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-775791465</nova:name>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:18:13</nova:creationTime>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:port uuid="81fef3aa-29c9-47a1-8cba-c758c43f8e45">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <nova:port uuid="479e8c0a-f171-45a0-b7de-778cf1b728bb">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe52:2771" ipVersion="6"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe52:2771" ipVersion="6"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <entry name="serial">1ce224dc-5e5e-4105-bc00-9953c57babd7</entry>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <entry name="uuid">1ce224dc-5e5e-4105-bc00-9953c57babd7</entry>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ce224dc-5e5e-4105-bc00-9953c57babd7_disk">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:68:79:bf"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <target dev="tap81fef3aa-29"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:52:27:71"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <target dev="tap479e8c0a-f1"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/console.log" append="off"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:18:14 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:18:14 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:18:14 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:18:14 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.521 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Preparing to wait for external event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.522 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.522 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.522 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.523 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Preparing to wait for external event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.523 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.523 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.524 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.525 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.525 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.526 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.527 254096 DEBUG os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.530 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.534 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81fef3aa-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.535 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81fef3aa-29, col_values=(('external_ids', {'iface-id': '81fef3aa-29c9-47a1-8cba-c758c43f8e45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:79:bf', 'vm-uuid': '1ce224dc-5e5e-4105-bc00-9953c57babd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:14 np0005535469 NetworkManager[48891]: <info>  [1764091094.5393] manager: (tap81fef3aa-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.551 254096 INFO os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29')#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.552 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.553 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.554 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.554 254096 DEBUG os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.555 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.556 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.558 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap479e8c0a-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.559 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap479e8c0a-f1, col_values=(('external_ids', {'iface-id': '479e8c0a-f171-45a0-b7de-778cf1b728bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:27:71', 'vm-uuid': '1ce224dc-5e5e-4105-bc00-9953c57babd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:14 np0005535469 NetworkManager[48891]: <info>  [1764091094.5617] manager: (tap479e8c0a-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.568 254096 INFO os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1')#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.608 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.609 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.609 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:68:79:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.609 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:52:27:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.610 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Using config drive#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.636 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.907 254096 DEBUG nova.network.neutron [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updated VIF entry in instance network info cache for port 479e8c0a-f171-45a0-b7de-778cf1b728bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.907 254096 DEBUG nova.network.neutron [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:14 np0005535469 nova_compute[254092]: 2025-11-25 17:18:14.925 254096 DEBUG oslo_concurrency.lockutils [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:18:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.366 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Creating config drive at /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.372 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmj7x_e5n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.543 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmj7x_e5n" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.581 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.588 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.795 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.796 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deleting local config drive /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config because it was imported into RBD.#033[00m
Nov 25 12:18:16 np0005535469 NetworkManager[48891]: <info>  [1764091096.8629] manager: (tap81fef3aa-29): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Nov 25 12:18:16 np0005535469 kernel: tap81fef3aa-29: entered promiscuous mode
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01510|binding|INFO|Claiming lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 for this chassis.
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01511|binding|INFO|81fef3aa-29c9-47a1-8cba-c758c43f8e45: Claiming fa:16:3e:68:79:bf 10.100.0.6
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.888 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:79:bf 10.100.0.6'], port_security=['fa:16:3e:68:79:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=81fef3aa-29c9-47a1-8cba-c758c43f8e45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:16 np0005535469 NetworkManager[48891]: <info>  [1764091096.8914] manager: (tap479e8c0a-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.890 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e bound to our chassis#033[00m
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.892 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a702335-301a-4b90-b82e-e616a31e5b3e#033[00m
Nov 25 12:18:16 np0005535469 kernel: tap479e8c0a-f1: entered promiscuous mode
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01512|binding|INFO|Setting lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 ovn-installed in OVS
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01513|binding|INFO|Setting lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 up in Southbound
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01514|if_status|INFO|Not updating pb chassis for 479e8c0a-f171-45a0-b7de-778cf1b728bb now as sb is readonly
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01515|binding|INFO|Claiming lport 479e8c0a-f171-45a0-b7de-778cf1b728bb for this chassis.
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01516|binding|INFO|479e8c0a-f171-45a0-b7de-778cf1b728bb: Claiming fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.908 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], port_security=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe52:2771/64 2001:db8::f816:3eff:fe52:2771/64', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479e8c0a-f171-45a0-b7de-778cf1b728bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:16 np0005535469 systemd-udevd[412000]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:18:16 np0005535469 systemd-udevd[412001]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3044930-72a0-46aa-ba5f-fa12a22ed788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01517|binding|INFO|Setting lport 479e8c0a-f171-45a0-b7de-778cf1b728bb up in Southbound
Nov 25 12:18:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:16Z|01518|binding|INFO|Setting lport 479e8c0a-f171-45a0-b7de-778cf1b728bb ovn-installed in OVS
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:16 np0005535469 nova_compute[254092]: 2025-11-25 17:18:16.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:16 np0005535469 NetworkManager[48891]: <info>  [1764091096.9299] device (tap81fef3aa-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:18:16 np0005535469 NetworkManager[48891]: <info>  [1764091096.9310] device (tap81fef3aa-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:18:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:18:16 np0005535469 NetworkManager[48891]: <info>  [1764091096.9384] device (tap479e8c0a-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:18:16 np0005535469 NetworkManager[48891]: <info>  [1764091096.9394] device (tap479e8c0a-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:18:16 np0005535469 systemd-machined[216343]: New machine qemu-176-instance-0000008e.
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.960 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[437beadc-bcef-46c0-9276-32a9f7602719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:16 np0005535469 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Nov 25 12:18:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.964 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2b29a49b-27fb-4d64-bc12-62b3eb45dbd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.002 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8701b2-bc9e-4f48-b38b-d3f52e4fedf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.022 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aeab5db7-e43f-4c0e-9fbf-d0e91d48e241]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412015, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13041d0c-b6a1-4b85-9b58-19fa459076ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751474, 'tstamp': 751474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412018, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751477, 'tstamp': 751477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412018, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.046 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a702335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.051 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a702335-30, col_values=(('external_ids', {'iface-id': '1cef18fb-8ae8-44d1-93c6-659b405ed9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.052 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.053 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479e8c0a-f171-45a0-b7de-778cf1b728bb in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.055 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.074 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05dfebe8-b397-45c3-9b9c-7f53fd0e7f99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.121 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28902694-42de-4742-9488-308312e3505c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.127 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1463651-9df5-45d0-bce0-c43a2b8b3a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.165 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6643c14-1933-4042-857c-53dd73402d2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[640b0c92-9018-4243-bff9-4197a599ff35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412025, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.214 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae0d58f-87cf-4ede-915d-7a25e2a97f5e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51f47401-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751571, 'tstamp': 751571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412026, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.216 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.219 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f47401-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.219 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f47401-e0, col_values=(('external_ids', {'iface-id': '7624b22b-8369-4a12-940b-9f95890a4040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.366 254096 DEBUG nova.compute.manager [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.367 254096 DEBUG oslo_concurrency.lockutils [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.368 254096 DEBUG oslo_concurrency.lockutils [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.368 254096 DEBUG oslo_concurrency.lockutils [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.368 254096 DEBUG nova.compute.manager [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Processing event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.407 254096 DEBUG nova.compute.manager [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.407 254096 DEBUG oslo_concurrency.lockutils [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.408 254096 DEBUG oslo_concurrency.lockutils [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.408 254096 DEBUG oslo_concurrency.lockutils [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.409 254096 DEBUG nova.compute.manager [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Processing event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.651 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091097.650954, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.652 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Started (Lifecycle Event)#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.655 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.660 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.665 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance spawned successfully.#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.665 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.676 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.692 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.693 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.693 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.693 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.694 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.694 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.698 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.699 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091097.6514113, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.699 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Paused (Lifecycle Event)
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.728 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.732 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091097.6585634, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.733 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Resumed (Lifecycle Event)
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.754 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.761 254096 INFO nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 12.33 seconds to spawn the instance on the hypervisor.
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.762 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.763 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.780 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.837 254096 INFO nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 13.31 seconds to build instance.
Nov 25 12:18:17 np0005535469 nova_compute[254092]: 2025-11-25 17:18:17.853 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:18:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Nov 25 12:18:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.489 254096 DEBUG nova.compute.manager [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.490 254096 DEBUG oslo_concurrency.lockutils [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.491 254096 DEBUG oslo_concurrency.lockutils [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.491 254096 DEBUG oslo_concurrency.lockutils [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.492 254096 DEBUG nova.compute.manager [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.492 254096 WARNING nova.compute.manager [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb for instance with vm_state active and task_state None.
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.523 254096 DEBUG nova.compute.manager [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.524 254096 DEBUG oslo_concurrency.lockutils [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.524 254096 DEBUG oslo_concurrency.lockutils [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.524 254096 DEBUG oslo_concurrency.lockutils [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.525 254096 DEBUG nova.compute.manager [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.525 254096 WARNING nova.compute.manager [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 for instance with vm_state active and task_state None.
Nov 25 12:18:19 np0005535469 nova_compute[254092]: 2025-11-25 17:18:19.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:18:20 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c787c164-d501-4cf8-b6b0-6433377924da does not exist
Nov 25 12:18:20 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 182631f9-589b-4df6-8082-41017ed6c6f8 does not exist
Nov 25 12:18:20 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e8d90fc7-7fc4-44c8-b3e5-f1da4b10b943 does not exist
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:18:20 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:18:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.045479952 +0000 UTC m=+0.060241211 container create 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:18:21 np0005535469 systemd[1]: Started libpod-conmon-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope.
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.024725307 +0000 UTC m=+0.039486596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:18:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.145769402 +0000 UTC m=+0.160530681 container init 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.160264456 +0000 UTC m=+0.175025715 container start 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.164184902 +0000 UTC m=+0.178946201 container attach 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:18:21 np0005535469 determined_wilbur[412356]: 167 167
Nov 25 12:18:21 np0005535469 systemd[1]: libpod-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope: Deactivated successfully.
Nov 25 12:18:21 np0005535469 conmon[412356]: conmon 444e11c0e23d7009a63b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope/container/memory.events
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.172858559 +0000 UTC m=+0.187619828 container died 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:18:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9a1d8c6a3a8cf66d49859a2bb9da41a9f3b427fc1ffeb9544cc9dd2f2d985d18-merged.mount: Deactivated successfully.
Nov 25 12:18:21 np0005535469 nova_compute[254092]: 2025-11-25 17:18:21.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:18:21 np0005535469 podman[412342]: 2025-11-25 17:18:21.233831638 +0000 UTC m=+0.248592927 container remove 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:18:21 np0005535469 systemd[1]: libpod-conmon-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope: Deactivated successfully.
Nov 25 12:18:21 np0005535469 podman[412380]: 2025-11-25 17:18:21.493869266 +0000 UTC m=+0.058130903 container create 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:18:21 np0005535469 systemd[1]: Started libpod-conmon-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope.
Nov 25 12:18:21 np0005535469 podman[412380]: 2025-11-25 17:18:21.470446709 +0000 UTC m=+0.034708366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:18:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:18:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:21 np0005535469 podman[412380]: 2025-11-25 17:18:21.614715936 +0000 UTC m=+0.178977613 container init 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:18:21 np0005535469 podman[412380]: 2025-11-25 17:18:21.62184483 +0000 UTC m=+0.186106467 container start 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:18:21 np0005535469 podman[412380]: 2025-11-25 17:18:21.625074568 +0000 UTC m=+0.189336225 container attach 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 12:18:22 np0005535469 epic_heisenberg[412396]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:18:22 np0005535469 epic_heisenberg[412396]: --> relative data size: 1.0
Nov 25 12:18:22 np0005535469 epic_heisenberg[412396]: --> All data devices are unavailable
Nov 25 12:18:22 np0005535469 systemd[1]: libpod-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope: Deactivated successfully.
Nov 25 12:18:22 np0005535469 systemd[1]: libpod-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope: Consumed 1.134s CPU time.
Nov 25 12:18:22 np0005535469 podman[412380]: 2025-11-25 17:18:22.8147506 +0000 UTC m=+1.379012297 container died 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:18:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee-merged.mount: Deactivated successfully.
Nov 25 12:18:22 np0005535469 podman[412380]: 2025-11-25 17:18:22.908093611 +0000 UTC m=+1.472355268 container remove 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:18:22 np0005535469 systemd[1]: libpod-conmon-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope: Deactivated successfully.
Nov 25 12:18:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 355 KiB/s wr, 74 op/s
Nov 25 12:18:23 np0005535469 podman[412576]: 2025-11-25 17:18:23.825139412 +0000 UTC m=+0.052535111 container create 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:18:23 np0005535469 systemd[1]: Started libpod-conmon-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope.
Nov 25 12:18:23 np0005535469 podman[412576]: 2025-11-25 17:18:23.801614862 +0000 UTC m=+0.029010571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:18:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:18:23 np0005535469 podman[412576]: 2025-11-25 17:18:23.922205724 +0000 UTC m=+0.149601473 container init 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 12:18:23 np0005535469 podman[412576]: 2025-11-25 17:18:23.934146159 +0000 UTC m=+0.161541858 container start 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:18:23 np0005535469 podman[412576]: 2025-11-25 17:18:23.938253971 +0000 UTC m=+0.165649680 container attach 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 12:18:23 np0005535469 lucid_shockley[412592]: 167 167
Nov 25 12:18:23 np0005535469 systemd[1]: libpod-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope: Deactivated successfully.
Nov 25 12:18:23 np0005535469 conmon[412592]: conmon 1f14765a5cb76ee78855 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope/container/memory.events
Nov 25 12:18:23 np0005535469 podman[412576]: 2025-11-25 17:18:23.948736606 +0000 UTC m=+0.176132295 container died 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 25 12:18:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-62cf843cb59d7560c11422677473c5324720294182d3884d89ed2b597286396a-merged.mount: Deactivated successfully.
Nov 25 12:18:24 np0005535469 podman[412576]: 2025-11-25 17:18:24.011479284 +0000 UTC m=+0.238875023 container remove 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:18:24 np0005535469 systemd[1]: libpod-conmon-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope: Deactivated successfully.
Nov 25 12:18:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:24 np0005535469 podman[412615]: 2025-11-25 17:18:24.282997155 +0000 UTC m=+0.069771911 container create a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:18:24 np0005535469 systemd[1]: Started libpod-conmon-a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc.scope.
Nov 25 12:18:24 np0005535469 podman[412615]: 2025-11-25 17:18:24.251841567 +0000 UTC m=+0.038616383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:18:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:18:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:24 np0005535469 podman[412615]: 2025-11-25 17:18:24.405029066 +0000 UTC m=+0.191803832 container init a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:18:24 np0005535469 podman[412615]: 2025-11-25 17:18:24.413706982 +0000 UTC m=+0.200481728 container start a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:18:24 np0005535469 podman[412615]: 2025-11-25 17:18:24.417856205 +0000 UTC m=+0.204630941 container attach a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 12:18:24 np0005535469 nova_compute[254092]: 2025-11-25 17:18:24.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 12:18:24 np0005535469 nova_compute[254092]: 2025-11-25 17:18:24.968 254096 DEBUG nova.compute.manager [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:24 np0005535469 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG nova.compute.manager [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:18:24 np0005535469 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG oslo_concurrency.lockutils [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:18:24 np0005535469 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG oslo_concurrency.lockutils [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:18:24 np0005535469 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG nova.network.neutron [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]: {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:    "0": [
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:        {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "devices": [
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "/dev/loop3"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            ],
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_name": "ceph_lv0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_size": "21470642176",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "name": "ceph_lv0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "tags": {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cluster_name": "ceph",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.crush_device_class": "",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.encrypted": "0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osd_id": "0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.type": "block",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.vdo": "0"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            },
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "type": "block",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "vg_name": "ceph_vg0"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:        }
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:    ],
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:    "1": [
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:        {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "devices": [
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "/dev/loop4"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            ],
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_name": "ceph_lv1",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_size": "21470642176",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "name": "ceph_lv1",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "tags": {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cluster_name": "ceph",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.crush_device_class": "",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.encrypted": "0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osd_id": "1",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.type": "block",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.vdo": "0"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            },
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "type": "block",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "vg_name": "ceph_vg1"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:        }
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:    ],
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:    "2": [
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:        {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "devices": [
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "/dev/loop5"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            ],
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_name": "ceph_lv2",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_size": "21470642176",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "name": "ceph_lv2",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "tags": {
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.cluster_name": "ceph",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.crush_device_class": "",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.encrypted": "0",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osd_id": "2",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.type": "block",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:                "ceph.vdo": "0"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            },
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "type": "block",
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:            "vg_name": "ceph_vg2"
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:        }
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]:    ]
Nov 25 12:18:25 np0005535469 eager_bardeen[412630]: }
Nov 25 12:18:25 np0005535469 systemd[1]: libpod-a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc.scope: Deactivated successfully.
Nov 25 12:18:25 np0005535469 podman[412615]: 2025-11-25 17:18:25.24465938 +0000 UTC m=+1.031434156 container died a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:18:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8-merged.mount: Deactivated successfully.
Nov 25 12:18:25 np0005535469 podman[412615]: 2025-11-25 17:18:25.325607654 +0000 UTC m=+1.112382380 container remove a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:18:25 np0005535469 systemd[1]: libpod-conmon-a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc.scope: Deactivated successfully.
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.157678352 +0000 UTC m=+0.043925897 container create 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:18:26 np0005535469 systemd[1]: Started libpod-conmon-3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e.scope.
Nov 25 12:18:26 np0005535469 nova_compute[254092]: 2025-11-25 17:18:26.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.140160955 +0000 UTC m=+0.026408520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.249331797 +0000 UTC m=+0.135579472 container init 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.257073397 +0000 UTC m=+0.143320942 container start 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.260506911 +0000 UTC m=+0.146754466 container attach 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:18:26 np0005535469 sweet_joliot[412814]: 167 167
Nov 25 12:18:26 np0005535469 systemd[1]: libpod-3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e.scope: Deactivated successfully.
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.265234499 +0000 UTC m=+0.151482054 container died 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:18:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-829113fe87ec38ebdbd82d8f8d3dc7dd8606cdd333336d5633ff0c5aca92f2c8-merged.mount: Deactivated successfully.
Nov 25 12:18:26 np0005535469 podman[412796]: 2025-11-25 17:18:26.312858135 +0000 UTC m=+0.199105670 container remove 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:18:26 np0005535469 systemd[1]: libpod-conmon-3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e.scope: Deactivated successfully.
Nov 25 12:18:26 np0005535469 podman[412837]: 2025-11-25 17:18:26.531927379 +0000 UTC m=+0.046620480 container create 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:18:26 np0005535469 systemd[1]: Started libpod-conmon-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope.
Nov 25 12:18:26 np0005535469 podman[412837]: 2025-11-25 17:18:26.514200286 +0000 UTC m=+0.028893397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:18:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:18:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:18:26 np0005535469 podman[412837]: 2025-11-25 17:18:26.644314818 +0000 UTC m=+0.159008019 container init 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:18:26 np0005535469 podman[412837]: 2025-11-25 17:18:26.657908238 +0000 UTC m=+0.172601329 container start 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:18:26 np0005535469 podman[412837]: 2025-11-25 17:18:26.662600705 +0000 UTC m=+0.177293906 container attach 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:18:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 12:18:27 np0005535469 nova_compute[254092]: 2025-11-25 17:18:27.032 254096 DEBUG nova.network.neutron [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updated VIF entry in instance network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:18:27 np0005535469 nova_compute[254092]: 2025-11-25 17:18:27.035 254096 DEBUG nova.network.neutron [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:27 np0005535469 nova_compute[254092]: 2025-11-25 17:18:27.053 254096 DEBUG oslo_concurrency.lockutils [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]: {
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "osd_id": 1,
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "type": "bluestore"
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:    },
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "osd_id": 2,
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "type": "bluestore"
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:    },
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "osd_id": 0,
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:        "type": "bluestore"
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]:    }
Nov 25 12:18:27 np0005535469 wonderful_goodall[412854]: }
Nov 25 12:18:27 np0005535469 systemd[1]: libpod-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope: Deactivated successfully.
Nov 25 12:18:27 np0005535469 systemd[1]: libpod-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope: Consumed 1.167s CPU time.
Nov 25 12:18:27 np0005535469 podman[412887]: 2025-11-25 17:18:27.885701507 +0000 UTC m=+0.030122231 container died 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:18:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:18:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804-merged.mount: Deactivated successfully.
Nov 25 12:18:29 np0005535469 podman[412887]: 2025-11-25 17:18:29.507598804 +0000 UTC m=+1.652019468 container remove 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:18:29 np0005535469 systemd[1]: libpod-conmon-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope: Deactivated successfully.
Nov 25 12:18:29 np0005535469 nova_compute[254092]: 2025-11-25 17:18:29.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:18:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:18:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:18:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:18:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e2635490-f48b-4fad-9c41-380c8377f53d does not exist
Nov 25 12:18:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 53858401-390e-4b65-9bf2-56d2dfabff16 does not exist
Nov 25 12:18:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:18:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:18:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:18:31 np0005535469 nova_compute[254092]: 2025-11-25 17:18:31.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 756 KiB/s rd, 98 KiB/s wr, 30 op/s
Nov 25 12:18:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:33Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:79:bf 10.100.0.6
Nov 25 12:18:33 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:33Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:79:bf 10.100.0.6
Nov 25 12:18:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:34 np0005535469 nova_compute[254092]: 2025-11-25 17:18:34.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s wr, 0 op/s
Nov 25 12:18:36 np0005535469 nova_compute[254092]: 2025-11-25 17:18:36.231 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:36 np0005535469 podman[412953]: 2025-11-25 17:18:36.675769335 +0000 UTC m=+0.079884946 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 12:18:36 np0005535469 podman[412952]: 2025-11-25 17:18:36.707975401 +0000 UTC m=+0.113569531 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 12:18:36 np0005535469 podman[412954]: 2025-11-25 17:18:36.740793835 +0000 UTC m=+0.133752752 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:18:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:18:38 np0005535469 nova_compute[254092]: 2025-11-25 17:18:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 12:18:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:39 np0005535469 nova_compute[254092]: 2025-11-25 17:18:39.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:18:40
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data']
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:18:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:18:41 np0005535469 nova_compute[254092]: 2025-11-25 17:18:41.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:41 np0005535469 nova_compute[254092]: 2025-11-25 17:18:41.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:18:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:44.716 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:44 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:44.718 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.953 254096 DEBUG nova.compute.manager [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.954 254096 DEBUG nova.compute.manager [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.954 254096 DEBUG oslo_concurrency.lockutils [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.954 254096 DEBUG oslo_concurrency.lockutils [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:18:44 np0005535469 nova_compute[254092]: 2025-11-25 17:18:44.955 254096 DEBUG nova.network.neutron [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.011 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.012 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.012 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.015 254096 INFO nova.compute.manager [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Terminating instance#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.017 254096 DEBUG nova.compute.manager [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:18:45 np0005535469 kernel: tap81fef3aa-29 (unregistering): left promiscuous mode
Nov 25 12:18:45 np0005535469 NetworkManager[48891]: <info>  [1764091125.0799] device (tap81fef3aa-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:45Z|01519|binding|INFO|Releasing lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 from this chassis (sb_readonly=0)
Nov 25 12:18:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:45Z|01520|binding|INFO|Setting lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 down in Southbound
Nov 25 12:18:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:45Z|01521|binding|INFO|Removing iface tap81fef3aa-29 ovn-installed in OVS
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.097 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:79:bf 10.100.0.6'], port_security=['fa:16:3e:68:79:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=81fef3aa-29c9-47a1-8cba-c758c43f8e45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e unbound from our chassis#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.101 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a702335-301a-4b90-b82e-e616a31e5b3e#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 kernel: tap479e8c0a-f1 (unregistering): left promiscuous mode
Nov 25 12:18:45 np0005535469 NetworkManager[48891]: <info>  [1764091125.1176] device (tap479e8c0a-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:18:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:45Z|01522|binding|INFO|Releasing lport 479e8c0a-f171-45a0-b7de-778cf1b728bb from this chassis (sb_readonly=0)
Nov 25 12:18:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:45Z|01523|binding|INFO|Setting lport 479e8c0a-f171-45a0-b7de-778cf1b728bb down in Southbound
Nov 25 12:18:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:45Z|01524|binding|INFO|Removing iface tap479e8c0a-f1 ovn-installed in OVS
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.133 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6170a23a-023d-49e1-acc0-506a72352fca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.137 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], port_security=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe52:2771/64 2001:db8::f816:3eff:fe52:2771/64', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479e8c0a-f171-45a0-b7de-778cf1b728bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 25 12:18:45 np0005535469 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 15.841s CPU time.
Nov 25 12:18:45 np0005535469 systemd-machined[216343]: Machine qemu-176-instance-0000008e terminated.
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.189 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e4fa9d-cf84-4adc-8200-78b0b8577bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.195 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb2f1a4-11a5-4976-8017-f2329438a8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.235 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1711ad-950f-4a8c-93cd-1cf8829fc7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 NetworkManager[48891]: <info>  [1764091125.2626] manager: (tap479e8c0a-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.265 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb96257-6afc-4ea5-b7df-04e8ae3ee972]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413033, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.285 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance destroyed successfully.#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.286 254096 DEBUG nova.objects.instance [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 1ce224dc-5e5e-4105-bc00-9953c57babd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.298 254096 DEBUG nova.virt.libvirt.vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:18:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:18:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.298 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff8fc0b-c2c0-41c6-bc32-739a562140ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751474, 'tstamp': 751474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413048, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751477, 'tstamp': 751477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413048, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.299 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.299 254096 DEBUG os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.300 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.301 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81fef3aa-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.311 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a702335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.312 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.313 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a702335-30, col_values=(('external_ids', {'iface-id': '1cef18fb-8ae8-44d1-93c6-659b405ed9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.313 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.315 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479e8c0a-f171-45a0-b7de-778cf1b728bb in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.317 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.318 254096 INFO os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29')#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.319 254096 DEBUG nova.virt.libvirt.vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:18:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:18:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.320 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.321 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.321 254096 DEBUG os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.323 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap479e8c0a-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.328 254096 INFO os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1')#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.344 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0861b0-829f-4306-822d-eadd94be66a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.385 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e64b4dbc-eedc-4739-b145-4a7becde26f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.391 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2418bc78-92b3-4619-b1e1-4dc66e375a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.448 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16a947-d04c-483d-8c5d-718106d0a451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.484 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abf309d2-f420-4f42-8790-a88424d0732c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413078, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7daa8bfb-746d-4c71-b545-e63c105ffaa7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51f47401-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751571, 'tstamp': 751571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413079, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.530 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.537 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f47401-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.538 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.539 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f47401-e0, col_values=(('external_ids', {'iface-id': '7624b22b-8369-4a12-940b-9f95890a4040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.540 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.803 254096 INFO nova.virt.libvirt.driver [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deleting instance files /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7_del#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.804 254096 INFO nova.virt.libvirt.driver [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deletion of /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7_del complete#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.858 254096 INFO nova.compute.manager [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.859 254096 DEBUG oslo.service.loopingcall [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.859 254096 DEBUG nova.compute.manager [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:18:45 np0005535469 nova_compute[254092]: 2025-11-25 17:18:45.859 254096 DEBUG nova.network.neutron [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:18:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:18:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2667142588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.085 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.175 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.175 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.431 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.432 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3463MB free_disk=59.89716720581055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.432 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.433 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1ce224dc-5e5e-4105-bc00-9953c57babd7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.553 254096 DEBUG nova.compute.manager [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG oslo_concurrency.lockutils [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG oslo_concurrency.lockutils [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG oslo_concurrency.lockutils [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG nova.compute.manager [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-unplugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG nova.compute.manager [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:18:46 np0005535469 nova_compute[254092]: 2025-11-25 17:18:46.577 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Nov 25 12:18:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:18:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487486548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.055 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.062 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-unplugged-479e8c0a-f171-45a0-b7de-778cf1b728bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-479e8c0a-f171-45a0-b7de-778cf1b728bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.075 254096 WARNING nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.078 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.109 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.109 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.192 254096 DEBUG nova.network.neutron [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updated VIF entry in instance network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.192 254096 DEBUG nova.network.neutron [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.213 254096 DEBUG oslo_concurrency.lockutils [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.472 254096 DEBUG nova.network.neutron [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.496 254096 INFO nova.compute.manager [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 1.64 seconds to deallocate network for instance.#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.550 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.551 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:47 np0005535469 nova_compute[254092]: 2025-11-25 17:18:47.628 254096 DEBUG oslo_concurrency.processutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:18:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523132620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.118 254096 DEBUG oslo_concurrency.processutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.128 254096 DEBUG nova.compute.provider_tree [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.146 254096 DEBUG nova.scheduler.client.report [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.176 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.217 254096 INFO nova.scheduler.client.report [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 1ce224dc-5e5e-4105-bc00-9953c57babd7#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.269 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.661 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.662 254096 DEBUG oslo_concurrency.lockutils [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.662 254096 DEBUG oslo_concurrency.lockutils [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.662 254096 DEBUG oslo_concurrency.lockutils [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 WARNING nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-deleted-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:48 np0005535469 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-deleted-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 11 op/s
Nov 25 12:18:49 np0005535469 nova_compute[254092]: 2025-11-25 17:18:49.110 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.395 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.396 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.396 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.396 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.397 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.398 254096 INFO nova.compute.manager [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Terminating instance#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.400 254096 DEBUG nova.compute.manager [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:18:50 np0005535469 kernel: tapa3c9174e-c8 (unregistering): left promiscuous mode
Nov 25 12:18:50 np0005535469 NetworkManager[48891]: <info>  [1764091130.4674] device (tapa3c9174e-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:18:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:50Z|01525|binding|INFO|Releasing lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b from this chassis (sb_readonly=0)
Nov 25 12:18:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:50Z|01526|binding|INFO|Setting lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b down in Southbound
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:50Z|01527|binding|INFO|Removing iface tapa3c9174e-c8 ovn-installed in OVS
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.490 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:d8:27 10.100.0.8'], port_security=['fa:16:3e:ef:d8:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.491 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e unbound from our chassis#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.492 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a702335-301a-4b90-b82e-e616a31e5b3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.493 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab80ab09-d9d4-4b3a-9490-1f67b0f2c4cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.493 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e namespace which is not needed anymore#033[00m
Nov 25 12:18:50 np0005535469 kernel: tapfdd7f4f6-80 (unregistering): left promiscuous mode
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 NetworkManager[48891]: <info>  [1764091130.5113] device (tapfdd7f4f6-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:50Z|01528|binding|INFO|Releasing lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d from this chassis (sb_readonly=0)
Nov 25 12:18:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:50Z|01529|binding|INFO|Setting lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d down in Southbound
Nov 25 12:18:50 np0005535469 ovn_controller[153477]: 2025-11-25T17:18:50Z|01530|binding|INFO|Removing iface tapfdd7f4f6-80 ovn-installed in OVS
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.550 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], port_security=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feac:9cd4/64 2001:db8::f816:3eff:feac:9cd4/64', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.568 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 25 12:18:50 np0005535469 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Consumed 16.831s CPU time.
Nov 25 12:18:50 np0005535469 systemd-machined[216343]: Machine qemu-175-instance-0000008d terminated.
Nov 25 12:18:50 np0005535469 NetworkManager[48891]: <info>  [1764091130.6446] manager: (tapfdd7f4f6-80): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.659 254096 INFO nova.virt.libvirt.driver [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance destroyed successfully.#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.660 254096 DEBUG nova.objects.instance [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.682 254096 DEBUG nova.virt.libvirt.vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:17:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:17:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.683 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.684 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.684 254096 DEBUG os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.687 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c9174e-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.704 254096 INFO os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8')#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.704 254096 DEBUG nova.virt.libvirt.vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:17:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:17:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.705 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.706 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.706 254096 DEBUG os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.708 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdd7f4f6-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.715 254096 INFO os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80')#033[00m
Nov 25 12:18:50 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : haproxy version is 2.8.14-c23fe91
Nov 25 12:18:50 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : path to executable is /usr/sbin/haproxy
Nov 25 12:18:50 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [WARNING]  (411483) : Exiting Master process...
Nov 25 12:18:50 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [WARNING]  (411483) : Exiting Master process...
Nov 25 12:18:50 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [ALERT]    (411483) : Current worker (411485) exited with code 143 (Terminated)
Nov 25 12:18:50 np0005535469 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [WARNING]  (411483) : All workers exited. Exiting... (0)
Nov 25 12:18:50 np0005535469 systemd[1]: libpod-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0.scope: Deactivated successfully.
Nov 25 12:18:50 np0005535469 podman[413191]: 2025-11-25 17:18:50.751798512 +0000 UTC m=+0.074174520 container died 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.778 254096 DEBUG nova.compute.manager [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.779 254096 DEBUG oslo_concurrency.lockutils [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.780 254096 DEBUG oslo_concurrency.lockutils [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.780 254096 DEBUG oslo_concurrency.lockutils [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.782 254096 DEBUG nova.compute.manager [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-unplugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.782 254096 DEBUG nova.compute.manager [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:18:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0-userdata-shm.mount: Deactivated successfully.
Nov 25 12:18:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cda96b984478ce4719be5239caee54b3e00b48097bfc3855698629f519ccf4eb-merged.mount: Deactivated successfully.
Nov 25 12:18:50 np0005535469 podman[413191]: 2025-11-25 17:18:50.801994348 +0000 UTC m=+0.124370346 container cleanup 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 12:18:50 np0005535469 systemd[1]: libpod-conmon-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0.scope: Deactivated successfully.
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.862 254096 DEBUG nova.compute.manager [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.863 254096 DEBUG nova.compute.manager [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.863 254096 DEBUG oslo_concurrency.lockutils [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.864 254096 DEBUG oslo_concurrency.lockutils [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.864 254096 DEBUG nova.network.neutron [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:18:50 np0005535469 podman[413246]: 2025-11-25 17:18:50.894899497 +0000 UTC m=+0.059140681 container remove 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a70e6b6-14e3-4b61-8975-dc30594f0b71]: (4, ('Tue Nov 25 05:18:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e (405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0)\n405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0\nTue Nov 25 05:18:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e (405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0)\n405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31d04b17-c4cf-4299-9d56-0ae239410419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.906 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.908 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 kernel: tap4a702335-30: left promiscuous mode
Nov 25 12:18:50 np0005535469 nova_compute[254092]: 2025-11-25 17:18:50.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef13ac5c-e098-4633-96e8-f39a5fa47987]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0446432b-fc0e-4c0a-a907-99a5b9908546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.948 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b35a4a90-3c50-46d5-a21a-19b08a921771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 18 KiB/s wr, 31 op/s
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.970 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1972d516-dd16-4a73-9ad7-17d1748702a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751452, 'reachable_time': 15175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413262, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 systemd[1]: run-netns-ovnmeta\x2d4a702335\x2d301a\x2d4b90\x2db82e\x2de616a31e5b3e.mount: Deactivated successfully.
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.978 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.979 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8419bfef-035e-4305-a4ac-17a36a716756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.980 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.981 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3272c4c-20d1-4262-be2b-9605f8beb182]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:50 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.982 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 namespace which is not needed anymore#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.139 254096 INFO nova.virt.libvirt.driver [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deleting instance files /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_del#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.140 254096 INFO nova.virt.libvirt.driver [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deletion of /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_del complete#033[00m
Nov 25 12:18:51 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : haproxy version is 2.8.14-c23fe91
Nov 25 12:18:51 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : path to executable is /usr/sbin/haproxy
Nov 25 12:18:51 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [WARNING]  (411556) : Exiting Master process...
Nov 25 12:18:51 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [ALERT]    (411556) : Current worker (411558) exited with code 143 (Terminated)
Nov 25 12:18:51 np0005535469 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [WARNING]  (411556) : All workers exited. Exiting... (0)
Nov 25 12:18:51 np0005535469 systemd[1]: libpod-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0.scope: Deactivated successfully.
Nov 25 12:18:51 np0005535469 podman[413280]: 2025-11-25 17:18:51.157478634 +0000 UTC m=+0.060341273 container died cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:18:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0-userdata-shm.mount: Deactivated successfully.
Nov 25 12:18:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f061dc6daf905b054161e2e14d141c86f73e0689fee8c1b141f9292e1f077fc2-merged.mount: Deactivated successfully.
Nov 25 12:18:51 np0005535469 podman[413280]: 2025-11-25 17:18:51.199005325 +0000 UTC m=+0.101867964 container cleanup cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.216 254096 INFO nova.compute.manager [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.217 254096 DEBUG oslo.service.loopingcall [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.217 254096 DEBUG nova.compute.manager [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.217 254096 DEBUG nova.network.neutron [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:18:51 np0005535469 systemd[1]: libpod-conmon-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0.scope: Deactivated successfully.
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:51 np0005535469 podman[413309]: 2025-11-25 17:18:51.275538838 +0000 UTC m=+0.051091872 container remove cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.284 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f601f41-59be-4643-b364-1589a51af05d]: (4, ('Tue Nov 25 05:18:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 (cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0)\ncdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0\nTue Nov 25 05:18:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 (cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0)\ncdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.286 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a93cb4ac-d684-4b67-807b-8ad22a0803f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:51 np0005535469 kernel: tap51f47401-e0: left promiscuous mode
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.295 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d24506-f903-4d00-8ad8-6b759bc88e33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.312 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdc37f2-e5ed-442e-9ea1-a97ccc6289d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6676708c-1909-4049-ab8b-604f6c7aa52c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a65b4a0c-3594-4c4a-9c34-b1ca28934d81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751549, 'reachable_time': 22070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413325, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.340 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:18:51 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.341 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[18f3cd86-afdf-4517-bd11-75f7e5029e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 25 12:18:51 np0005535469 nova_compute[254092]: 2025-11-25 17:18:51.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005685645186234291 of space, bias 1.0, pg target 0.1705693555870287 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:18:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:18:51 np0005535469 systemd[1]: run-netns-ovnmeta\x2d51f47401\x2ded2b\x2d45e2\x2daea1\x2d5cbbd48e5245.mount: Deactivated successfully.
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.884 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.885 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.886 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.886 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.887 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.887 254096 WARNING nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.887 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.888 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.888 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.889 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.889 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-unplugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.889 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.890 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.890 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.890 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.891 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.891 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:18:52 np0005535469 nova_compute[254092]: 2025-11-25 17:18:52.891 254096 WARNING nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:18:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 7.0 KiB/s wr, 31 op/s
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.250 254096 DEBUG nova.compute.manager [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-deleted-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.250 254096 INFO nova.compute.manager [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Neutron deleted interface a3c9174e-c8c3-4b9f-b87f-4d6244324c9b; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.251 254096 DEBUG nova.network.neutron [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.271 254096 DEBUG nova.compute.manager [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Detach interface failed, port_id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b, reason: Instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.290 254096 DEBUG nova.network.neutron [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated VIF entry in instance network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.291 254096 DEBUG nova.network.neutron [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:53 np0005535469 nova_compute[254092]: 2025-11-25 17:18:53.434 254096 DEBUG oslo_concurrency.lockutils [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.296 254096 DEBUG nova.network.neutron [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.318 254096 INFO nova.compute.manager [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 3.10 seconds to deallocate network for instance.#033[00m
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.362 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.363 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.425 254096 DEBUG oslo_concurrency.processutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:18:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:18:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:18:54.720 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:18:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:18:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224687489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:18:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.0 KiB/s wr, 30 op/s
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.961 254096 DEBUG oslo_concurrency.processutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:18:54 np0005535469 nova_compute[254092]: 2025-11-25 17:18:54.971 254096 DEBUG nova.compute.provider_tree [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:18:55 np0005535469 nova_compute[254092]: 2025-11-25 17:18:55.001 254096 DEBUG nova.scheduler.client.report [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:18:55 np0005535469 nova_compute[254092]: 2025-11-25 17:18:55.033 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:55 np0005535469 nova_compute[254092]: 2025-11-25 17:18:55.068 254096 INFO nova.scheduler.client.report [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac#033[00m
Nov 25 12:18:55 np0005535469 nova_compute[254092]: 2025-11-25 17:18:55.154 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:18:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:18:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166770898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:18:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:18:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166770898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:18:55 np0005535469 nova_compute[254092]: 2025-11-25 17:18:55.398 254096 DEBUG nova.compute.manager [req-2f76273b-7af5-442b-a10b-9e441c574181 req-ee1de235-8666-4e0a-89dd-f4b1453d3738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-deleted-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:18:55 np0005535469 nova_compute[254092]: 2025-11-25 17:18:55.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:56 np0005535469 nova_compute[254092]: 2025-11-25 17:18:56.267 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:18:56 np0005535469 nova_compute[254092]: 2025-11-25 17:18:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 7.2 KiB/s wr, 57 op/s
Nov 25 12:18:57 np0005535469 nova_compute[254092]: 2025-11-25 17:18:57.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:18:57 np0005535469 nova_compute[254092]: 2025-11-25 17:18:57.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:18:57 np0005535469 nova_compute[254092]: 2025-11-25 17:18:57.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:18:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 7.1 KiB/s wr, 47 op/s
Nov 25 12:18:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:00 np0005535469 nova_compute[254092]: 2025-11-25 17:19:00.283 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091125.282258, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:19:00 np0005535469 nova_compute[254092]: 2025-11-25 17:19:00.284 254096 INFO nova.compute.manager [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:19:00 np0005535469 nova_compute[254092]: 2025-11-25 17:19:00.301 254096 DEBUG nova.compute.manager [None req-d63e886b-8eb9-4935-8444-4adb40858532 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:19:00 np0005535469 nova_compute[254092]: 2025-11-25 17:19:00.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 7.1 KiB/s wr, 47 op/s
Nov 25 12:19:01 np0005535469 nova_compute[254092]: 2025-11-25 17:19:01.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:01 np0005535469 nova_compute[254092]: 2025-11-25 17:19:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:02 np0005535469 nova_compute[254092]: 2025-11-25 17:19:02.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:02 np0005535469 nova_compute[254092]: 2025-11-25 17:19:02.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 12:19:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 12:19:05 np0005535469 nova_compute[254092]: 2025-11-25 17:19:05.656 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091130.655965, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:19:05 np0005535469 nova_compute[254092]: 2025-11-25 17:19:05.657 254096 INFO nova.compute.manager [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:19:05 np0005535469 nova_compute[254092]: 2025-11-25 17:19:05.678 254096 DEBUG nova.compute.manager [None req-478d371e-f9c9-4339-9c29-912b00c2f278 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:19:05 np0005535469 nova_compute[254092]: 2025-11-25 17:19:05.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:06 np0005535469 nova_compute[254092]: 2025-11-25 17:19:06.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 12:19:07 np0005535469 nova_compute[254092]: 2025-11-25 17:19:07.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:07 np0005535469 nova_compute[254092]: 2025-11-25 17:19:07.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:19:07 np0005535469 podman[413351]: 2025-11-25 17:19:07.698535869 +0000 UTC m=+0.101350750 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 12:19:07 np0005535469 podman[413350]: 2025-11-25 17:19:07.712935531 +0000 UTC m=+0.114968360 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:19:07 np0005535469 podman[413352]: 2025-11-25 17:19:07.747999845 +0000 UTC m=+0.137510744 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:19:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:19:10 np0005535469 nova_compute[254092]: 2025-11-25 17:19:10.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:11 np0005535469 nova_compute[254092]: 2025-11-25 17:19:11.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:15 np0005535469 nova_compute[254092]: 2025-11-25 17:19:15.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:16 np0005535469 nova_compute[254092]: 2025-11-25 17:19:16.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:20 np0005535469 nova_compute[254092]: 2025-11-25 17:19:20.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:21 np0005535469 nova_compute[254092]: 2025-11-25 17:19:21.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.587 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.588 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.604 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.722 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.723 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.732 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.732 254096 INFO nova.compute.claims [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:19:24 np0005535469 nova_compute[254092]: 2025-11-25 17:19:24.827 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:19:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:19:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065312171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.315 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.322 254096 DEBUG nova.compute.provider_tree [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.340 254096 DEBUG nova.scheduler.client.report [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.358 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.359 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.409 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.411 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.432 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.448 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.546 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.548 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.549 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Creating image(s)#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.587 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.611 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.639 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.645 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.772 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.773 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.774 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.775 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.804 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:25 np0005535469 nova_compute[254092]: 2025-11-25 17:19:25.810 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 392384a1-1741-4504-b2c2-557420bbbbd0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.037 254096 DEBUG nova.policy [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.184 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 392384a1-1741-4504-b2c2-557420bbbbd0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.260 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.380 254096 DEBUG nova.objects.instance [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.396 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.397 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Ensure instance console log exists: /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.397 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.398 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:26 np0005535469 nova_compute[254092]: 2025-11-25 17:19:26.398 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 12:19:28 np0005535469 nova_compute[254092]: 2025-11-25 17:19:28.596 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully created port: b342a143-48a8-46f1-90fc-229fadeb167e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:19:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 12:19:29 np0005535469 nova_compute[254092]: 2025-11-25 17:19:29.016 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully created port: f6eeae44-ea00-4543-a1e0-9ce45fbc399f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:19:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:30 np0005535469 nova_compute[254092]: 2025-11-25 17:19:30.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:30 np0005535469 podman[413774]: 2025-11-25 17:19:30.847399424 +0000 UTC m=+0.104329490 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:19:30 np0005535469 podman[413774]: 2025-11-25 17:19:30.953237955 +0000 UTC m=+0.210168061 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:19:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.145 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully updated port: b342a143-48a8-46f1-90fc-229fadeb167e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.246 254096 DEBUG nova.compute.manager [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.247 254096 DEBUG nova.compute.manager [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.247 254096 DEBUG oslo_concurrency.lockutils [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.247 254096 DEBUG oslo_concurrency.lockutils [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.248 254096 DEBUG nova.network.neutron [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:19:31 np0005535469 nova_compute[254092]: 2025-11-25 17:19:31.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:19:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:19:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:32 np0005535469 nova_compute[254092]: 2025-11-25 17:19:32.026 254096 DEBUG nova.network.neutron [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f54bc50b-34ef-454b-ae48-cd2cf2fe07ed does not exist
Nov 25 12:19:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f9ae48dd-75f6-4a01-bae4-8db29768eab1 does not exist
Nov 25 12:19:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3191994b-b815-47fc-9102-e195d1380f44 does not exist
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:19:32 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:19:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:19:32 np0005535469 nova_compute[254092]: 2025-11-25 17:19:32.885 254096 DEBUG nova.network.neutron [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:19:32 np0005535469 nova_compute[254092]: 2025-11-25 17:19:32.897 254096 DEBUG oslo_concurrency.lockutils [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:19:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:19:32 np0005535469 nova_compute[254092]: 2025-11-25 17:19:32.984 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully updated port: f6eeae44-ea00-4543-a1e0-9ce45fbc399f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.003 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.003 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.003 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.116 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:19:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:19:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.338 254096 DEBUG nova.compute.manager [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.338 254096 DEBUG nova.compute.manager [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-f6eeae44-ea00-4543-a1e0-9ce45fbc399f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:19:33 np0005535469 nova_compute[254092]: 2025-11-25 17:19:33.338 254096 DEBUG oslo_concurrency.lockutils [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.502530066 +0000 UTC m=+0.050546327 container create 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:19:33 np0005535469 systemd[1]: Started libpod-conmon-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope.
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.480247069 +0000 UTC m=+0.028263340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:19:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.618588544 +0000 UTC m=+0.166604845 container init 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.632852643 +0000 UTC m=+0.180868944 container start 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.637821278 +0000 UTC m=+0.185837549 container attach 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 12:19:33 np0005535469 competent_curran[414223]: 167 167
Nov 25 12:19:33 np0005535469 systemd[1]: libpod-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope: Deactivated successfully.
Nov 25 12:19:33 np0005535469 conmon[414223]: conmon 482c3de457a644e5d31c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope/container/memory.events
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.64561339 +0000 UTC m=+0.193629661 container died 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:19:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f4d01be5cbaf65ba39ec30ba508c3bf34e3b046276aa25039140b71d9c625c1a-merged.mount: Deactivated successfully.
Nov 25 12:19:33 np0005535469 podman[414207]: 2025-11-25 17:19:33.691630643 +0000 UTC m=+0.239646914 container remove 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:19:33 np0005535469 systemd[1]: libpod-conmon-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope: Deactivated successfully.
Nov 25 12:19:33 np0005535469 podman[414246]: 2025-11-25 17:19:33.902264627 +0000 UTC m=+0.067909351 container create 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:19:33 np0005535469 systemd[1]: Started libpod-conmon-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope.
Nov 25 12:19:33 np0005535469 podman[414246]: 2025-11-25 17:19:33.876877285 +0000 UTC m=+0.042522019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:19:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:34 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:34 np0005535469 podman[414246]: 2025-11-25 17:19:34.034913897 +0000 UTC m=+0.200558611 container init 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:19:34 np0005535469 podman[414246]: 2025-11-25 17:19:34.048802905 +0000 UTC m=+0.214447639 container start 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:19:34 np0005535469 podman[414246]: 2025-11-25 17:19:34.053286177 +0000 UTC m=+0.218930871 container attach 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:19:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:19:35 np0005535469 crazy_goodall[414263]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:19:35 np0005535469 crazy_goodall[414263]: --> relative data size: 1.0
Nov 25 12:19:35 np0005535469 crazy_goodall[414263]: --> All data devices are unavailable
Nov 25 12:19:35 np0005535469 systemd[1]: libpod-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope: Deactivated successfully.
Nov 25 12:19:35 np0005535469 systemd[1]: libpod-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope: Consumed 1.149s CPU time.
Nov 25 12:19:35 np0005535469 podman[414246]: 2025-11-25 17:19:35.239057173 +0000 UTC m=+1.404701857 container died 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:19:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7-merged.mount: Deactivated successfully.
Nov 25 12:19:35 np0005535469 podman[414246]: 2025-11-25 17:19:35.314439035 +0000 UTC m=+1.480083729 container remove 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 12:19:35 np0005535469 systemd[1]: libpod-conmon-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope: Deactivated successfully.
Nov 25 12:19:35 np0005535469 nova_compute[254092]: 2025-11-25 17:19:35.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.232551464 +0000 UTC m=+0.060212049 container create 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:19:36 np0005535469 systemd[1]: Started libpod-conmon-17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e.scope.
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.203683759 +0000 UTC m=+0.031344364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.382968429 +0000 UTC m=+0.210629004 container init 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.391419339 +0000 UTC m=+0.219079884 container start 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:19:36 np0005535469 distracted_cohen[414464]: 167 167
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.396329913 +0000 UTC m=+0.223990458 container attach 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:19:36 np0005535469 systemd[1]: libpod-17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e.scope: Deactivated successfully.
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.397619518 +0000 UTC m=+0.225280063 container died 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:19:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3ade7895a896e6a7e7bfdcb3e72b13a16988ee8a0c0937391c3a0427c0b21d71-merged.mount: Deactivated successfully.
Nov 25 12:19:36 np0005535469 podman[414448]: 2025-11-25 17:19:36.453001085 +0000 UTC m=+0.280661660 container remove 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 12:19:36 np0005535469 systemd[1]: libpod-conmon-17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e.scope: Deactivated successfully.
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.483 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.507 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.507 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance network_info: |[{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.508 254096 DEBUG oslo_concurrency.lockutils [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.508 254096 DEBUG nova.network.neutron [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port f6eeae44-ea00-4543-a1e0-9ce45fbc399f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.511 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start _get_guest_xml network_info=[{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.517 254096 WARNING nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.523 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.524 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.532 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.532 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.533 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.533 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.533 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:19:36 np0005535469 nova_compute[254092]: 2025-11-25 17:19:36.538 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:36 np0005535469 podman[414489]: 2025-11-25 17:19:36.693090251 +0000 UTC m=+0.065617098 container create a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:19:36 np0005535469 podman[414489]: 2025-11-25 17:19:36.663717201 +0000 UTC m=+0.036244098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:19:36 np0005535469 systemd[1]: Started libpod-conmon-a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9.scope.
Nov 25 12:19:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:36 np0005535469 podman[414489]: 2025-11-25 17:19:36.824255361 +0000 UTC m=+0.196782188 container init a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:19:36 np0005535469 podman[414489]: 2025-11-25 17:19:36.835952429 +0000 UTC m=+0.208479236 container start a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:19:36 np0005535469 podman[414489]: 2025-11-25 17:19:36.839198787 +0000 UTC m=+0.211725594 container attach a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:19:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:19:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:19:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143290325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.034 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.066 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.072 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:19:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478714737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.594 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.598 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.598 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.599 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.600 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.600 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.600 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.602 254096 DEBUG nova.objects.instance [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.620 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <uuid>392384a1-1741-4504-b2c2-557420bbbbd0</uuid>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <name>instance-0000008f</name>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-473687599</nova:name>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:19:36</nova:creationTime>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:port uuid="b342a143-48a8-46f1-90fc-229fadeb167e">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <nova:port uuid="f6eeae44-ea00-4543-a1e0-9ce45fbc399f">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0f:88e0" ipVersion="6"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <entry name="serial">392384a1-1741-4504-b2c2-557420bbbbd0</entry>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <entry name="uuid">392384a1-1741-4504-b2c2-557420bbbbd0</entry>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/392384a1-1741-4504-b2c2-557420bbbbd0_disk">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/392384a1-1741-4504-b2c2-557420bbbbd0_disk.config">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:48:d2:35"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <target dev="tapb342a143-48"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:0f:88:e0"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <target dev="tapf6eeae44-ea"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/console.log" append="off"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:19:37 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:19:37 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:19:37 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:19:37 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.620 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Preparing to wait for external event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Preparing to wait for external event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.623 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.623 254096 DEBUG os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 kind_banach[414524]: {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:    "0": [
Nov 25 12:19:37 np0005535469 kind_banach[414524]:        {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "devices": [
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "/dev/loop3"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            ],
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_name": "ceph_lv0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_size": "21470642176",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "name": "ceph_lv0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "tags": {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cluster_name": "ceph",
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.624 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.crush_device_class": "",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.encrypted": "0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osd_id": "0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.type": "block",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.vdo": "0"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            },
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "type": "block",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "vg_name": "ceph_vg0"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:        }
Nov 25 12:19:37 np0005535469 kind_banach[414524]:    ],
Nov 25 12:19:37 np0005535469 kind_banach[414524]:    "1": [
Nov 25 12:19:37 np0005535469 kind_banach[414524]:        {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "devices": [
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "/dev/loop4"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            ],
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_name": "ceph_lv1",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_size": "21470642176",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "name": "ceph_lv1",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "tags": {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cluster_name": "ceph",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.crush_device_class": "",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.encrypted": "0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osd_id": "1",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.type": "block",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.vdo": "0"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            },
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "type": "block",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "vg_name": "ceph_vg1"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:        }
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.625 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:19:37 np0005535469 kind_banach[414524]:    ],
Nov 25 12:19:37 np0005535469 kind_banach[414524]:    "2": [
Nov 25 12:19:37 np0005535469 kind_banach[414524]:        {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "devices": [
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "/dev/loop5"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            ],
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_name": "ceph_lv2",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_size": "21470642176",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "name": "ceph_lv2",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "tags": {
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.cluster_name": "ceph",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.crush_device_class": "",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.encrypted": "0",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osd_id": "2",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.type": "block",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:                "ceph.vdo": "0"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            },
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "type": "block",
Nov 25 12:19:37 np0005535469 kind_banach[414524]:            "vg_name": "ceph_vg2"
Nov 25 12:19:37 np0005535469 kind_banach[414524]:        }
Nov 25 12:19:37 np0005535469 kind_banach[414524]:    ]
Nov 25 12:19:37 np0005535469 kind_banach[414524]: }
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.629 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.630 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb342a143-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb342a143-48, col_values=(('external_ids', {'iface-id': 'b342a143-48a8-46f1-90fc-229fadeb167e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:d2:35', 'vm-uuid': '392384a1-1741-4504-b2c2-557420bbbbd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:37 np0005535469 NetworkManager[48891]: <info>  [1764091177.6980] manager: (tapb342a143-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.697 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.707 254096 INFO os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48')#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.708 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.709 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.709 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.710 254096 DEBUG os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.711 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.711 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.713 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6eeae44-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.714 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6eeae44-ea, col_values=(('external_ids', {'iface-id': 'f6eeae44-ea00-4543-a1e0-9ce45fbc399f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:88:e0', 'vm-uuid': '392384a1-1741-4504-b2c2-557420bbbbd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 NetworkManager[48891]: <info>  [1764091177.7162] manager: (tapf6eeae44-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.724 254096 INFO os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea')#033[00m
Nov 25 12:19:37 np0005535469 systemd[1]: libpod-a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9.scope: Deactivated successfully.
Nov 25 12:19:37 np0005535469 podman[414489]: 2025-11-25 17:19:37.729326698 +0000 UTC m=+1.101853505 container died a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:19:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165-merged.mount: Deactivated successfully.
Nov 25 12:19:37 np0005535469 podman[414489]: 2025-11-25 17:19:37.804377008 +0000 UTC m=+1.176903825 container remove a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:19:37 np0005535469 systemd[1]: libpod-conmon-a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9.scope: Deactivated successfully.
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.819 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.821 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.821 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:48:d2:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.822 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:0f:88:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.822 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Using config drive#033[00m
Nov 25 12:19:37 np0005535469 podman[414579]: 2025-11-25 17:19:37.857275678 +0000 UTC m=+0.096329642 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:19:37 np0005535469 podman[414583]: 2025-11-25 17:19:37.86871635 +0000 UTC m=+0.091329957 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:19:37 np0005535469 nova_compute[254092]: 2025-11-25 17:19:37.871 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:37 np0005535469 podman[414599]: 2025-11-25 17:19:37.929432423 +0000 UTC m=+0.136093536 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.173 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Creating config drive at /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.178 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjfu4qcn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.259 254096 DEBUG nova.network.neutron [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated VIF entry in instance network info cache for port f6eeae44-ea00-4543-a1e0-9ce45fbc399f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.260 254096 DEBUG nova.network.neutron [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.275 254096 DEBUG oslo_concurrency.lockutils [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.342 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjfu4qcn" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.385 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.391 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.580 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.581 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deleting local config drive /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config because it was imported into RBD.#033[00m
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.600347745 +0000 UTC m=+0.056076447 container create c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:19:38 np0005535469 systemd[1]: Started libpod-conmon-c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc.scope.
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.6590] manager: (tapb342a143-48): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Nov 25 12:19:38 np0005535469 kernel: tapb342a143-48: entered promiscuous mode
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01531|binding|INFO|Claiming lport b342a143-48a8-46f1-90fc-229fadeb167e for this chassis.
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01532|binding|INFO|b342a143-48a8-46f1-90fc-229fadeb167e: Claiming fa:16:3e:48:d2:35 10.100.0.12
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.578199902 +0000 UTC m=+0.033928614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:38 np0005535469 kernel: tapf6eeae44-ea: entered promiscuous mode
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.6898] manager: (tapf6eeae44-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/634)
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.692 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:d2:35 10.100.0.12'], port_security=['fa:16:3e:48:d2:35 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b342a143-48a8-46f1-90fc-229fadeb167e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.693 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b342a143-48a8-46f1-90fc-229fadeb167e in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 bound to our chassis#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77970d23-547a-4e3a-bddf-f4770a15bf81#033[00m
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.703464461 +0000 UTC m=+0.159193173 container init c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:19:38 np0005535469 systemd-udevd[414891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:19:38 np0005535469 systemd-udevd[414890]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0df1bc1e-3e26-4b8f-b5d1-89e7c5cfc187]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.708 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77970d23-51 in ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.711 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77970d23-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf3fa46-4953-41fd-984a-933a9eb487c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.713 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac069815-3c97-43d4-9644-1bbcc22f5eb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.7221] device (tapf6eeae44-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.7239] device (tapf6eeae44-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.7294] device (tapb342a143-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.7309] device (tapb342a143-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.731150035 +0000 UTC m=+0.186878707 container start c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:19:38 np0005535469 wizardly_meninsky[414875]: 167 167
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.730 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9986febd-b0b3-4173-a0d2-f95795378847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 systemd[1]: libpod-c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc.scope: Deactivated successfully.
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.735657778 +0000 UTC m=+0.191386480 container attach c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.73687438 +0000 UTC m=+0.192603052 container died c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:19:38 np0005535469 systemd-machined[216343]: New machine qemu-177-instance-0000008f.
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c85702f0-df2e-468d-8a75-c71d5d3e1788]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01533|binding|INFO|Setting lport b342a143-48a8-46f1-90fc-229fadeb167e ovn-installed in OVS
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01534|binding|INFO|Setting lport b342a143-48a8-46f1-90fc-229fadeb167e up in Southbound
Nov 25 12:19:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d101ee9a56448ffcb8b461e034b2ed6e61c3eaf5823968b5eaa6726218d05e2a-merged.mount: Deactivated successfully.
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01535|binding|INFO|Claiming lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f for this chassis.
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01536|binding|INFO|f6eeae44-ea00-4543-a1e0-9ce45fbc399f: Claiming fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.816 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[34d1a0e9-8f06-482e-bb95-cf47a2b37703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.821 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], port_security=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe0f:88e0/64', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f6eeae44-ea00-4543-a1e0-9ce45fbc399f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01537|binding|INFO|Setting lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f ovn-installed in OVS
Nov 25 12:19:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:38Z|01538|binding|INFO|Setting lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f up in Southbound
Nov 25 12:19:38 np0005535469 nova_compute[254092]: 2025-11-25 17:19:38.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.8310] manager: (tap77970d23-50): new Veth device (/org/freedesktop/NetworkManager/Devices/635)
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53c11709-a71c-499e-b959-486fc484d253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 podman[414851]: 2025-11-25 17:19:38.831235179 +0000 UTC m=+0.286963841 container remove c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:19:38 np0005535469 systemd[1]: libpod-conmon-c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc.scope: Deactivated successfully.
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.870 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a087b47e-c5a1-496e-8cec-9717ad80bb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.875 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f402471e-0296-4dfb-a3c9-be98354e7e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 NetworkManager[48891]: <info>  [1764091178.9031] device (tap77970d23-50): carrier: link connected
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2cc56b-cc94-4580-8074-b922b88a21ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.933 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8bacdc-77bf-4e0c-a6fb-095b97c2333e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414941, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46cc7785-e529-46fc-8b8b-8e8833df7e70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:6c82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763649, 'tstamp': 763649}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414942, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.968 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a01cd9d9-8f45-469c-8a19-9e483010494e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414944, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[352f47af-29bb-4de6-adaf-2661b7faa53e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 podman[414949]: 2025-11-25 17:19:39.042391236 +0000 UTC m=+0.054100833 container create 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:19:39 np0005535469 systemd[1]: Started libpod-conmon-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope.
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.112 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0758d18b-131c-478c-8931-595fec44427a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.114 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.114 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.114 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77970d23-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:39 np0005535469 kernel: tap77970d23-50: entered promiscuous mode
Nov 25 12:19:39 np0005535469 NetworkManager[48891]: <info>  [1764091179.1174] manager: (tap77970d23-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.119 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77970d23-50, col_values=(('external_ids', {'iface-id': '56e5945a-607b-4b8d-baa7-f3eab82e874d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:39 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:39Z|01539|binding|INFO|Releasing lport 56e5945a-607b-4b8d-baa7-f3eab82e874d from this chassis (sb_readonly=0)
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:39 np0005535469 podman[414949]: 2025-11-25 17:19:39.025186348 +0000 UTC m=+0.036895965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.137 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77970d23-547a-4e3a-bddf-f4770a15bf81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77970d23-547a-4e3a-bddf-f4770a15bf81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:19:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5458c32b-e25d-41a5-8261-37ae8e2cbe9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.139 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/77970d23-547a-4e3a-bddf-f4770a15bf81.pid.haproxy
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.139 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'env', 'PROCESS_TAG=haproxy-77970d23-547a-4e3a-bddf-f4770a15bf81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77970d23-547a-4e3a-bddf-f4770a15bf81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:19:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:39 np0005535469 podman[414949]: 2025-11-25 17:19:39.161096478 +0000 UTC m=+0.172806075 container init 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:19:39 np0005535469 podman[414949]: 2025-11-25 17:19:39.17444891 +0000 UTC m=+0.186158507 container start 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:19:39 np0005535469 podman[414949]: 2025-11-25 17:19:39.177839964 +0000 UTC m=+0.189549611 container attach 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.233 254096 DEBUG nova.compute.manager [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.234 254096 DEBUG oslo_concurrency.lockutils [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.237 254096 DEBUG oslo_concurrency.lockutils [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.237 254096 DEBUG oslo_concurrency.lockutils [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.237 254096 DEBUG nova.compute.manager [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Processing event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.249 254096 DEBUG nova.compute.manager [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.249 254096 DEBUG oslo_concurrency.lockutils [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.250 254096 DEBUG oslo_concurrency.lockutils [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.250 254096 DEBUG oslo_concurrency.lockutils [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.250 254096 DEBUG nova.compute.manager [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Processing event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.290 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.291 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091179.289546, 392384a1-1741-4504-b2c2-557420bbbbd0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.292 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Started (Lifecycle Event)#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.299 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.302 254096 INFO nova.virt.libvirt.driver [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance spawned successfully.#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.303 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.353 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.353 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.354 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.354 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.355 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.355 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.359 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.363 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.395 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091179.2904263, 392384a1-1741-4504-b2c2-557420bbbbd0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.395 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.414 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.417 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091179.2966878, 392384a1-1741-4504-b2c2-557420bbbbd0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.418 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.427 254096 INFO nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 13.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.428 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.434 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:19:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.462 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.495 254096 INFO nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 14.81 seconds to build instance.#033[00m
Nov 25 12:19:39 np0005535469 nova_compute[254092]: 2025-11-25 17:19:39.511 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:39 np0005535469 podman[415043]: 2025-11-25 17:19:39.633225538 +0000 UTC m=+0.093000912 container create e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 12:19:39 np0005535469 podman[415043]: 2025-11-25 17:19:39.57928737 +0000 UTC m=+0.039062774 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:19:39 np0005535469 systemd[1]: Started libpod-conmon-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope.
Nov 25 12:19:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e3d52ff50bf53cf2fb737f96eefa352cab2347302bd90a6108c9336c866d1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:39 np0005535469 podman[415043]: 2025-11-25 17:19:39.756824072 +0000 UTC m=+0.216599456 container init e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 12:19:39 np0005535469 podman[415043]: 2025-11-25 17:19:39.762127177 +0000 UTC m=+0.221902531 container start e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:19:39 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : New worker (415065) forked
Nov 25 12:19:39 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : Loading success.
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.826 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f6eeae44-ea00-4543-a1e0-9ce45fbc399f in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad3526c9-ce3b-41ed-ae27-775dca6a1319#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.840 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[251e1c59-c2e1-42c2-b9ff-59bfc4cdecd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.841 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad3526c9-c1 in ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.842 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad3526c9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[542e35a9-7b15-4331-8310-6e626892962f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.843 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25ce0788-6a7e-494c-abea-213ae5f837e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.857 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[83968ab1-05fe-42b7-aa2b-e30ba359badf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.882 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3e402245-d78e-4896-948e-fb9432e1e263]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.921 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ea783e-18a3-4da9-8d54-ee89b6e44e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.929 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3361739-faad-4f82-83ab-d322014e317c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 NetworkManager[48891]: <info>  [1764091179.9305] manager: (tapad3526c9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/637)
Nov 25 12:19:39 np0005535469 systemd-udevd[414919]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.974 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8a160e0a-9d54-4743-9794-fefbdecd43b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.977 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a67e047f-d498-4ed5-ae34-cb43dbe5b5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 NetworkManager[48891]: <info>  [1764091180.0037] device (tapad3526c9-c0): carrier: link connected
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.014 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a33a1e99-7b6f-4fa5-bcd1-0fe686064041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.033 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[83061ffa-d3dd-4740-b116-98ed30886670]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415099, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.051 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6759224f-a31e-41f7-8750-2943205c9b9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:2348'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763759, 'tstamp': 763759}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415101, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50a99359-1c07-4c96-a729-9a80b10ed85b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415105, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[acf6dd7b-01a1-4ee7-90d0-9bc7ce1cc19f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[245ef586-7176-47c7-b555-98e04b115be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.137 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad3526c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:40 np0005535469 NetworkManager[48891]: <info>  [1764091180.1412] manager: (tapad3526c9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Nov 25 12:19:40 np0005535469 kernel: tapad3526c9-c0: entered promiscuous mode
Nov 25 12:19:40 np0005535469 nova_compute[254092]: 2025-11-25 17:19:40.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.145 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad3526c9-c0, col_values=(('external_ids', {'iface-id': '824643fe-f8f7-44ad-9711-a38080a49171'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:40Z|01540|binding|INFO|Releasing lport 824643fe-f8f7-44ad-9711-a38080a49171 from this chassis (sb_readonly=0)
Nov 25 12:19:40 np0005535469 nova_compute[254092]: 2025-11-25 17:19:40.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.148 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad3526c9-ce3b-41ed-ae27-775dca6a1319.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad3526c9-ce3b-41ed-ae27-775dca6a1319.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60833f83-3831-43b1-981f-44a536613267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.150 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/ad3526c9-ce3b-41ed-ae27-775dca6a1319.pid.haproxy
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:19:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.151 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'env', 'PROCESS_TAG=haproxy-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad3526c9-ce3b-41ed-ae27-775dca6a1319.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:19:40 np0005535469 nova_compute[254092]: 2025-11-25 17:19:40.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]: {
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "osd_id": 1,
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "type": "bluestore"
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:    },
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "osd_id": 2,
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "type": "bluestore"
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:    },
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "osd_id": 0,
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:        "type": "bluestore"
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]:    }
Nov 25 12:19:40 np0005535469 trusting_mayer[414990]: }
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:19:40
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:19:40 np0005535469 systemd[1]: libpod-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope: Deactivated successfully.
Nov 25 12:19:40 np0005535469 systemd[1]: libpod-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope: Consumed 1.002s CPU time.
Nov 25 12:19:40 np0005535469 podman[414949]: 2025-11-25 17:19:40.221240034 +0000 UTC m=+1.232949631 container died 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:19:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae-merged.mount: Deactivated successfully.
Nov 25 12:19:40 np0005535469 podman[414949]: 2025-11-25 17:19:40.285336698 +0000 UTC m=+1.297046295 container remove 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:19:40 np0005535469 systemd[1]: libpod-conmon-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope: Deactivated successfully.
Nov 25 12:19:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:19:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:19:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4d0bf10f-91c6-4d3b-b055-2f24d7a74275 does not exist
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 100e28f3-6f00-44a0-949e-e9098e4b98d3 does not exist
Nov 25 12:19:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:19:40 np0005535469 podman[415204]: 2025-11-25 17:19:40.545670685 +0000 UTC m=+0.052555902 container create 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:19:40 np0005535469 systemd[1]: Started libpod-conmon-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6.scope.
Nov 25 12:19:40 np0005535469 podman[415204]: 2025-11-25 17:19:40.519144003 +0000 UTC m=+0.026029240 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:19:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:19:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0883bcdd26bb7ed72d7d0ceebbe0bf6761be99f4ffb7fcfd366aae7f03f2ca46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:19:40 np0005535469 podman[415204]: 2025-11-25 17:19:40.640723482 +0000 UTC m=+0.147608719 container init 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:19:40 np0005535469 podman[415204]: 2025-11-25 17:19:40.649123271 +0000 UTC m=+0.156008488 container start 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 12:19:40 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : New worker (415226) forked
Nov 25 12:19:40 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : Loading success.
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:19:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 468 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.332 254096 DEBUG nova.compute.manager [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG oslo_concurrency.lockutils [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG oslo_concurrency.lockutils [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG oslo_concurrency.lockutils [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG nova.compute.manager [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.334 254096 WARNING nova.compute.manager [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e for instance with vm_state active and task_state None.#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.399 254096 DEBUG nova.compute.manager [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG oslo_concurrency.lockutils [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG oslo_concurrency.lockutils [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG oslo_concurrency.lockutils [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG nova.compute.manager [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:19:41 np0005535469 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 WARNING nova.compute.manager [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f for instance with vm_state active and task_state None.#033[00m
Nov 25 12:19:42 np0005535469 nova_compute[254092]: 2025-11-25 17:19:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 25 12:19:43 np0005535469 nova_compute[254092]: 2025-11-25 17:19:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:19:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277260093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:19:45 np0005535469 nova_compute[254092]: 2025-11-25 17:19:45.985 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.049 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.206 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.207 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.207 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.207 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 392384a1-1741-4504-b2c2-557420bbbbd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.279 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.279 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.320 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.357 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:19:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050400313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.757 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.767 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.784 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.811 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:19:46 np0005535469 nova_compute[254092]: 2025-11-25 17:19:46.811 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:19:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:19:47 np0005535469 NetworkManager[48891]: <info>  [1764091187.3749] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Nov 25 12:19:47 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:47Z|01541|binding|INFO|Releasing lport 824643fe-f8f7-44ad-9711-a38080a49171 from this chassis (sb_readonly=0)
Nov 25 12:19:47 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:47Z|01542|binding|INFO|Releasing lport 56e5945a-607b-4b8d-baa7-f3eab82e874d from this chassis (sb_readonly=0)
Nov 25 12:19:47 np0005535469 NetworkManager[48891]: <info>  [1764091187.3759] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:47 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:47Z|01543|binding|INFO|Releasing lport 824643fe-f8f7-44ad-9711-a38080a49171 from this chassis (sb_readonly=0)
Nov 25 12:19:47 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:47Z|01544|binding|INFO|Releasing lport 56e5945a-607b-4b8d-baa7-f3eab82e874d from this chassis (sb_readonly=0)
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.405 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:47.745 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:19:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:47.747 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.956 254096 DEBUG nova.compute.manager [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.957 254096 DEBUG nova.compute.manager [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.958 254096 DEBUG oslo_concurrency.lockutils [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.958 254096 DEBUG oslo_concurrency.lockutils [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:19:47 np0005535469 nova_compute[254092]: 2025-11-25 17:19:47.959 254096 DEBUG nova.network.neutron [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:19:48 np0005535469 nova_compute[254092]: 2025-11-25 17:19:48.808 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:48 np0005535469 nova_compute[254092]: 2025-11-25 17:19:48.808 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:19:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:49 np0005535469 nova_compute[254092]: 2025-11-25 17:19:49.909 254096 DEBUG nova.network.neutron [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated VIF entry in instance network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:19:49 np0005535469 nova_compute[254092]: 2025-11-25 17:19:49.910 254096 DEBUG nova.network.neutron [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:19:49 np0005535469 nova_compute[254092]: 2025-11-25 17:19:49.929 254096 DEBUG oslo_concurrency.lockutils [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:19:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 12:19:51 np0005535469 nova_compute[254092]: 2025-11-25 17:19:51.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:19:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:19:52 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:52Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:d2:35 10.100.0.12
Nov 25 12:19:52 np0005535469 ovn_controller[153477]: 2025-11-25T17:19:52Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:d2:35 10.100.0.12
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:19:52 np0005535469 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:19:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Nov 25 12:19:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:19:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Nov 25 12:19:55 np0005535469 nova_compute[254092]: 2025-11-25 17:19:55.269 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:19:55 np0005535469 nova_compute[254092]: 2025-11-25 17:19:55.284 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:19:55 np0005535469 nova_compute[254092]: 2025-11-25 17:19:55.285 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:19:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:19:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568605804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:19:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:19:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568605804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:19:56 np0005535469 nova_compute[254092]: 2025-11-25 17:19:56.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 12:19:57 np0005535469 nova_compute[254092]: 2025-11-25 17:19:57.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:57 np0005535469 nova_compute[254092]: 2025-11-25 17:19:57.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:19:57 np0005535469 nova_compute[254092]: 2025-11-25 17:19:57.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:19:57 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:19:57.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:19:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:19:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:20:01 np0005535469 nova_compute[254092]: 2025-11-25 17:20:01.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:02 np0005535469 nova_compute[254092]: 2025-11-25 17:20:02.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:20:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:20:06 np0005535469 nova_compute[254092]: 2025-11-25 17:20:06.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:20:07 np0005535469 nova_compute[254092]: 2025-11-25 17:20:07.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.196 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.196 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.212 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.273 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.273 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.281 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.281 254096 INFO nova.compute.claims [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.446 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:08 np0005535469 podman[415284]: 2025-11-25 17:20:08.656426151 +0000 UTC m=+0.069627966 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 25 12:20:08 np0005535469 podman[415283]: 2025-11-25 17:20:08.669285731 +0000 UTC m=+0.077953753 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 12:20:08 np0005535469 podman[415285]: 2025-11-25 17:20:08.70086993 +0000 UTC m=+0.109242794 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:20:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:20:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247401621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:20:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.992 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:08 np0005535469 nova_compute[254092]: 2025-11-25 17:20:08.999 254096 DEBUG nova.compute.provider_tree [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.009 254096 DEBUG nova.scheduler.client.report [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.025 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.025 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.065 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.065 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.079 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.094 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.182 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.183 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.184 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Creating image(s)#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.206 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.229 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.252 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.255 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.307 254096 DEBUG nova.policy [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.353 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.355 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.356 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.356 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.376 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.380 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.721 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.781 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.890 254096 DEBUG nova.objects.instance [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 9be9cbb4-878e-4fce-be7c-44b49480ff0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.901 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.901 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Ensure instance console log exists: /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.901 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.902 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:09 np0005535469 nova_compute[254092]: 2025-11-25 17:20:09.902 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:10 np0005535469 nova_compute[254092]: 2025-11-25 17:20:10.024 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully created port: 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:20:10 np0005535469 nova_compute[254092]: 2025-11-25 17:20:10.508 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully created port: 0d7b29be-145f-4598-af6d-8fec1624b66c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:20:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.173 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully updated port: 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.249 254096 DEBUG nova.compute.manager [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.249 254096 DEBUG nova.compute.manager [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.250 254096 DEBUG oslo_concurrency.lockutils [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.250 254096 DEBUG oslo_concurrency.lockutils [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.251 254096 DEBUG nova.network.neutron [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.419 254096 DEBUG nova.network.neutron [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:20:11 np0005535469 nova_compute[254092]: 2025-11-25 17:20:11.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.005 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully updated port: 0d7b29be-145f-4598-af6d-8fec1624b66c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.007 254096 DEBUG nova.network.neutron [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.016 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.017 254096 DEBUG oslo_concurrency.lockutils [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.017 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.153 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:20:12 np0005535469 nova_compute[254092]: 2025-11-25 17:20:12.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:20:13 np0005535469 nova_compute[254092]: 2025-11-25 17:20:13.349 254096 DEBUG nova.compute.manager [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:13 np0005535469 nova_compute[254092]: 2025-11-25 17:20:13.350 254096 DEBUG nova.compute.manager [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-0d7b29be-145f-4598-af6d-8fec1624b66c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:20:13 np0005535469 nova_compute[254092]: 2025-11-25 17:20:13.350 254096 DEBUG oslo_concurrency.lockutils [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:20:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:13.658 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.942 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.968 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.969 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance network_info: |[{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.969 254096 DEBUG oslo_concurrency.lockutils [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.969 254096 DEBUG nova.network.neutron [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 0d7b29be-145f-4598-af6d-8fec1624b66c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.975 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start _get_guest_xml network_info=[{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.982 254096 WARNING nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:20:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.995 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:20:14 np0005535469 nova_compute[254092]: 2025-11-25 17:20:14.997 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.002 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.003 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.004 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.004 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.005 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.006 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.006 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.006 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.008 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.008 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.011 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:20:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769234872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.484 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.515 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.520 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:20:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527014632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.995 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.997 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.998 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:15 np0005535469 nova_compute[254092]: 2025-11-25 17:20:15.999 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.000 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.000 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.000 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.001 254096 DEBUG nova.objects.instance [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9be9cbb4-878e-4fce-be7c-44b49480ff0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.015 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <uuid>9be9cbb4-878e-4fce-be7c-44b49480ff0e</uuid>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <name>instance-00000090</name>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-247846949</nova:name>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:20:14</nova:creationTime>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:port uuid="67a238a8-a6f3-4b0f-b4da-7800dcf79375">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <nova:port uuid="0d7b29be-145f-4598-af6d-8fec1624b66c">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe53:c71b" ipVersion="6"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <entry name="serial">9be9cbb4-878e-4fce-be7c-44b49480ff0e</entry>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <entry name="uuid">9be9cbb4-878e-4fce-be7c-44b49480ff0e</entry>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:dd:c2:78"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <target dev="tap67a238a8-a6"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:53:c7:1b"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <target dev="tap0d7b29be-14"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/console.log" append="off"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:20:16 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:20:16 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:20:16 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:20:16 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.016 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Preparing to wait for external event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Preparing to wait for external event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.019 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.020 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.020 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.021 254096 DEBUG os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.022 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.022 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.026 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a238a8-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.027 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67a238a8-a6, col_values=(('external_ids', {'iface-id': '67a238a8-a6f3-4b0f-b4da-7800dcf79375', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:c2:78', 'vm-uuid': '9be9cbb4-878e-4fce-be7c-44b49480ff0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 NetworkManager[48891]: <info>  [1764091216.0298] manager: (tap67a238a8-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.037 254096 INFO os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6')#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.038 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.039 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.040 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.040 254096 DEBUG os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.041 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.041 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.044 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d7b29be-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.045 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d7b29be-14, col_values=(('external_ids', {'iface-id': '0d7b29be-145f-4598-af6d-8fec1624b66c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:c7:1b', 'vm-uuid': '9be9cbb4-878e-4fce-be7c-44b49480ff0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 NetworkManager[48891]: <info>  [1764091216.0467] manager: (tap0d7b29be-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.054 254096 INFO os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14')#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.108 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.108 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.109 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:dd:c2:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.109 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:53:c7:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.109 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Using config drive#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.129 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.490 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Creating config drive at /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.495 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuh1f5v0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.641 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuh1f5v0z" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.680 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.685 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.880 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.881 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deleting local config drive /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config because it was imported into RBD.#033[00m
Nov 25 12:20:16 np0005535469 NetworkManager[48891]: <info>  [1764091216.9514] manager: (tap67a238a8-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Nov 25 12:20:16 np0005535469 kernel: tap67a238a8-a6: entered promiscuous mode
Nov 25 12:20:16 np0005535469 nova_compute[254092]: 2025-11-25 17:20:16.996 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:16 np0005535469 systemd-udevd[415669]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:16Z|01545|binding|INFO|Claiming lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 for this chassis.
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:16Z|01546|binding|INFO|67a238a8-a6f3-4b0f-b4da-7800dcf79375: Claiming fa:16:3e:dd:c2:78 10.100.0.3
Nov 25 12:20:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:20:17 np0005535469 NetworkManager[48891]: <info>  [1764091217.0025] manager: (tap0d7b29be-14): new Tun device (/org/freedesktop/NetworkManager/Devices/644)
Nov 25 12:20:17 np0005535469 systemd-udevd[415674]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.010 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:c2:78 10.100.0.3'], port_security=['fa:16:3e:dd:c2:78 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=67a238a8-a6f3-4b0f-b4da-7800dcf79375) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:17 np0005535469 kernel: tap0d7b29be-14: entered promiscuous mode
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01547|binding|INFO|Setting lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 ovn-installed in OVS
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01548|binding|INFO|Setting lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 up in Southbound
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.012 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 bound to our chassis#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.014 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77970d23-547a-4e3a-bddf-f4770a15bf81#033[00m
Nov 25 12:20:17 np0005535469 NetworkManager[48891]: <info>  [1764091217.0211] device (tap67a238a8-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01549|if_status|INFO|Not updating pb chassis for 0d7b29be-145f-4598-af6d-8fec1624b66c now as sb is readonly
Nov 25 12:20:17 np0005535469 NetworkManager[48891]: <info>  [1764091217.0240] device (tap0d7b29be-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 NetworkManager[48891]: <info>  [1764091217.0258] device (tap67a238a8-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:20:17 np0005535469 NetworkManager[48891]: <info>  [1764091217.0264] device (tap0d7b29be-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.038 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03ddce97-5b25-4a65-b4ea-d4fbf0b65c11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc65fb0e-f369-4d22-9ea0-8e219af82cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.073 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[06ba63c6-629b-46a3-902a-e362405f37e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01550|binding|INFO|Claiming lport 0d7b29be-145f-4598-af6d-8fec1624b66c for this chassis.
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01551|binding|INFO|0d7b29be-145f-4598-af6d-8fec1624b66c: Claiming fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01552|binding|INFO|Setting lport 0d7b29be-145f-4598-af6d-8fec1624b66c ovn-installed in OVS
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:17Z|01553|binding|INFO|Setting lport 0d7b29be-145f-4598-af6d-8fec1624b66c up in Southbound
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.096 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], port_security=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe53:c71b/64', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d7b29be-145f-4598-af6d-8fec1624b66c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:17 np0005535469 systemd-machined[216343]: New machine qemu-178-instance-00000090.
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.107 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4624b368-add0-4bfb-9ea7-95dfe3ab0e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.129 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c23d59-2280-4550-b05c-3fd647017908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415686, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a79bf11d-79aa-4689-a7c6-36e269df5af2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763664, 'tstamp': 763664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415689, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763669, 'tstamp': 763669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415689, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.153 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77970d23-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77970d23-50, col_values=(('external_ids', {'iface-id': '56e5945a-607b-4b8d-baa7-f3eab82e874d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.157 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.158 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d7b29be-145f-4598-af6d-8fec1624b66c in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.159 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad3526c9-ce3b-41ed-ae27-775dca6a1319#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.178 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e36fa5-6abb-4c6f-810e-df86e4fcb2f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.209 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41af4d10-ab04-4443-90a8-fb31a57b3d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.213 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[055d3549-f21a-4fa8-880a-c6c73b9252cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.245 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[655264e1-e316-4f18-a20d-94028054f83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.253 254096 DEBUG nova.network.neutron [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updated VIF entry in instance network info cache for port 0d7b29be-145f-4598-af6d-8fec1624b66c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.254 254096 DEBUG nova.network.neutron [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.265 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88286e49-7d94-496a-9220-fd4baf1ca2df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415699, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.266 254096 DEBUG oslo_concurrency.lockutils [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.282 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7de25d3f-4f50-4aa9-9447-fc96fe3632d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad3526c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763771, 'tstamp': 763771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415700, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.284 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.287 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.288 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad3526c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.288 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad3526c9-c0, col_values=(('external_ids', {'iface-id': '824643fe-f8f7-44ad-9711-a38080a49171'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:17 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.616 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091217.6164224, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.617 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Started (Lifecycle Event)#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.636 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.640 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091217.6166012, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.641 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.666 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.670 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.688 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.907 254096 DEBUG nova.compute.manager [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.908 254096 DEBUG oslo_concurrency.lockutils [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.908 254096 DEBUG oslo_concurrency.lockutils [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.909 254096 DEBUG oslo_concurrency.lockutils [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:17 np0005535469 nova_compute[254092]: 2025-11-25 17:20:17.910 254096 DEBUG nova.compute.manager [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Processing event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:20:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:20:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No event matching network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 in dict_keys([('network-vif-plugged', '0d7b29be-145f-4598-af6d-8fec1624b66c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 WARNING nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Processing event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:20:19 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 WARNING nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c for instance with vm_state building and task_state spawning.#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.001 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.005 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091220.0049632, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.005 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.008 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.011 254096 INFO nova.virt.libvirt.driver [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance spawned successfully.#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.011 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.023 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.028 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.033 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.034 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.034 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.034 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.035 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.035 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.061 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.094 254096 INFO nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 10.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.094 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.149 254096 INFO nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 11.90 seconds to build instance.#033[00m
Nov 25 12:20:20 np0005535469 nova_compute[254092]: 2025-11-25 17:20:20.163 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 25 12:20:21 np0005535469 nova_compute[254092]: 2025-11-25 17:20:21.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:21 np0005535469 nova_compute[254092]: 2025-11-25 17:20:21.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 25 12:20:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:24 np0005535469 nova_compute[254092]: 2025-11-25 17:20:24.658 254096 DEBUG nova.compute.manager [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:24 np0005535469 nova_compute[254092]: 2025-11-25 17:20:24.659 254096 DEBUG nova.compute.manager [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:20:24 np0005535469 nova_compute[254092]: 2025-11-25 17:20:24.660 254096 DEBUG oslo_concurrency.lockutils [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:20:24 np0005535469 nova_compute[254092]: 2025-11-25 17:20:24.660 254096 DEBUG oslo_concurrency.lockutils [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:20:24 np0005535469 nova_compute[254092]: 2025-11-25 17:20:24.660 254096 DEBUG nova.network.neutron [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:20:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 25 12:20:26 np0005535469 nova_compute[254092]: 2025-11-25 17:20:26.032 254096 DEBUG nova.network.neutron [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updated VIF entry in instance network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:20:26 np0005535469 nova_compute[254092]: 2025-11-25 17:20:26.034 254096 DEBUG nova.network.neutron [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:26 np0005535469 nova_compute[254092]: 2025-11-25 17:20:26.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:26 np0005535469 nova_compute[254092]: 2025-11-25 17:20:26.052 254096 DEBUG oslo_concurrency.lockutils [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:20:26 np0005535469 nova_compute[254092]: 2025-11-25 17:20:26.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:20:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 12:20:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 25 12:20:31 np0005535469 nova_compute[254092]: 2025-11-25 17:20:31.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:31 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 12:20:31 np0005535469 nova_compute[254092]: 2025-11-25 17:20:31.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:32 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:32Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:c2:78 10.100.0.3
Nov 25 12:20:32 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:32Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:c2:78 10.100.0.3
Nov 25 12:20:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 25 12:20:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 25 12:20:36 np0005535469 nova_compute[254092]: 2025-11-25 17:20:36.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:36 np0005535469 nova_compute[254092]: 2025-11-25 17:20:36.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 25 12:20:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:20:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:39 np0005535469 nova_compute[254092]: 2025-11-25 17:20:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:39 np0005535469 podman[415745]: 2025-11-25 17:20:39.693888325 +0000 UTC m=+0.100577069 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:20:39 np0005535469 podman[415744]: 2025-11-25 17:20:39.702716915 +0000 UTC m=+0.109300626 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:20:39 np0005535469 podman[415746]: 2025-11-25 17:20:39.764854076 +0000 UTC m=+0.159730768 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:20:40
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.control']
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:20:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:20:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:20:41 np0005535469 nova_compute[254092]: 2025-11-25 17:20:41.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:41 np0005535469 nova_compute[254092]: 2025-11-25 17:20:41.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:20:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 77d31682-b1a7-4317-9a65-3557fea64c97 does not exist
Nov 25 12:20:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11267756-94fc-450f-aa6c-59f4c5753433 does not exist
Nov 25 12:20:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0bfa43a3-e8ec-429d-9a4d-96f9c3fe2a3e does not exist
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:20:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.264235808 +0000 UTC m=+0.065410261 container create 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:20:42 np0005535469 systemd[1]: Started libpod-conmon-8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b.scope.
Nov 25 12:20:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:20:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:20:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.230701345 +0000 UTC m=+0.031875888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:20:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.360437637 +0000 UTC m=+0.161612090 container init 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.368876276 +0000 UTC m=+0.170050729 container start 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.372364981 +0000 UTC m=+0.173539474 container attach 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:20:42 np0005535469 elegant_jones[416095]: 167 167
Nov 25 12:20:42 np0005535469 systemd[1]: libpod-8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b.scope: Deactivated successfully.
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.377357877 +0000 UTC m=+0.178532340 container died 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:20:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-42527bf5a8ed86ecd9dfdaecca4537f1f84fd4ecd82f34e5ecb91ed8c3d0abc2-merged.mount: Deactivated successfully.
Nov 25 12:20:42 np0005535469 podman[416078]: 2025-11-25 17:20:42.427064481 +0000 UTC m=+0.228238934 container remove 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:20:42 np0005535469 systemd[1]: libpod-conmon-8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b.scope: Deactivated successfully.
Nov 25 12:20:42 np0005535469 podman[416120]: 2025-11-25 17:20:42.68683226 +0000 UTC m=+0.072469063 container create fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:20:42 np0005535469 systemd[1]: Started libpod-conmon-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope.
Nov 25 12:20:42 np0005535469 podman[416120]: 2025-11-25 17:20:42.661513351 +0000 UTC m=+0.047150194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:20:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:20:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:42 np0005535469 podman[416120]: 2025-11-25 17:20:42.811880295 +0000 UTC m=+0.197517108 container init fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:20:42 np0005535469 podman[416120]: 2025-11-25 17:20:42.827949442 +0000 UTC m=+0.213586275 container start fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:20:42 np0005535469 podman[416120]: 2025-11-25 17:20:42.833838132 +0000 UTC m=+0.219474965 container attach fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:20:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 12:20:43 np0005535469 nova_compute[254092]: 2025-11-25 17:20:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:43 np0005535469 sad_hypatia[416138]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:20:43 np0005535469 sad_hypatia[416138]: --> relative data size: 1.0
Nov 25 12:20:43 np0005535469 sad_hypatia[416138]: --> All data devices are unavailable
Nov 25 12:20:43 np0005535469 systemd[1]: libpod-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope: Deactivated successfully.
Nov 25 12:20:43 np0005535469 systemd[1]: libpod-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope: Consumed 1.094s CPU time.
Nov 25 12:20:43 np0005535469 conmon[416138]: conmon fdfed6496f870c34d850 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope/container/memory.events
Nov 25 12:20:44 np0005535469 podman[416167]: 2025-11-25 17:20:44.040862407 +0000 UTC m=+0.029298609 container died fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:20:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c-merged.mount: Deactivated successfully.
Nov 25 12:20:44 np0005535469 podman[416167]: 2025-11-25 17:20:44.104259282 +0000 UTC m=+0.092695484 container remove fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:20:44 np0005535469 systemd[1]: libpod-conmon-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope: Deactivated successfully.
Nov 25 12:20:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:44 np0005535469 podman[416323]: 2025-11-25 17:20:44.879061462 +0000 UTC m=+0.059872072 container create 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:20:44 np0005535469 podman[416323]: 2025-11-25 17:20:44.850300669 +0000 UTC m=+0.031111329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:20:44 np0005535469 systemd[1]: Started libpod-conmon-93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38.scope.
Nov 25 12:20:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:20:45 np0005535469 podman[416323]: 2025-11-25 17:20:45.00392206 +0000 UTC m=+0.184732660 container init 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:20:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 12:20:45 np0005535469 podman[416323]: 2025-11-25 17:20:45.019663548 +0000 UTC m=+0.200474118 container start 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:20:45 np0005535469 podman[416323]: 2025-11-25 17:20:45.023530433 +0000 UTC m=+0.204341013 container attach 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:20:45 np0005535469 sweet_bassi[416339]: 167 167
Nov 25 12:20:45 np0005535469 systemd[1]: libpod-93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38.scope: Deactivated successfully.
Nov 25 12:20:45 np0005535469 podman[416323]: 2025-11-25 17:20:45.031391128 +0000 UTC m=+0.212201698 container died 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:20:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b6b95701d1e55919532f7d2a6b348e17171362395eae2a0f46c5e0f8a7ffb3c2-merged.mount: Deactivated successfully.
Nov 25 12:20:45 np0005535469 podman[416323]: 2025-11-25 17:20:45.090724352 +0000 UTC m=+0.271534962 container remove 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:20:45 np0005535469 systemd[1]: libpod-conmon-93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38.scope: Deactivated successfully.
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.282 254096 DEBUG nova.compute.manager [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.285 254096 DEBUG nova.compute.manager [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.285 254096 DEBUG oslo_concurrency.lockutils [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.286 254096 DEBUG oslo_concurrency.lockutils [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.286 254096 DEBUG nova.network.neutron [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.328 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.328 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.328 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.329 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.329 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.330 254096 INFO nova.compute.manager [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Terminating instance#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.331 254096 DEBUG nova.compute.manager [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:20:45 np0005535469 podman[416362]: 2025-11-25 17:20:45.332874594 +0000 UTC m=+0.066705427 container create a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:20:45 np0005535469 systemd[1]: Started libpod-conmon-a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5.scope.
Nov 25 12:20:45 np0005535469 kernel: tap67a238a8-a6 (unregistering): left promiscuous mode
Nov 25 12:20:45 np0005535469 podman[416362]: 2025-11-25 17:20:45.302517928 +0000 UTC m=+0.036348631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:20:45 np0005535469 NetworkManager[48891]: <info>  [1764091245.3973] device (tap67a238a8-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:20:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:45Z|01554|binding|INFO|Releasing lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 from this chassis (sb_readonly=0)
Nov 25 12:20:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:45Z|01555|binding|INFO|Setting lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 down in Southbound
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:45Z|01556|binding|INFO|Removing iface tap67a238a8-a6 ovn-installed in OVS
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.419 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:c2:78 10.100.0.3'], port_security=['fa:16:3e:dd:c2:78 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=67a238a8-a6f3-4b0f-b4da-7800dcf79375) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.420 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 unbound from our chassis#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.421 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77970d23-547a-4e3a-bddf-f4770a15bf81#033[00m
Nov 25 12:20:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:45 np0005535469 kernel: tap0d7b29be-14 (unregistering): left promiscuous mode
Nov 25 12:20:45 np0005535469 podman[416362]: 2025-11-25 17:20:45.4495884 +0000 UTC m=+0.183419033 container init a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 12:20:45 np0005535469 NetworkManager[48891]: <info>  [1764091245.4515] device (tap0d7b29be-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.456 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5c3339-e076-46f7-89fd-d55136acda05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:45Z|01557|binding|INFO|Releasing lport 0d7b29be-145f-4598-af6d-8fec1624b66c from this chassis (sb_readonly=0)
Nov 25 12:20:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:45Z|01558|binding|INFO|Setting lport 0d7b29be-145f-4598-af6d-8fec1624b66c down in Southbound
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:45Z|01559|binding|INFO|Removing iface tap0d7b29be-14 ovn-installed in OVS
Nov 25 12:20:45 np0005535469 podman[416362]: 2025-11-25 17:20:45.483205005 +0000 UTC m=+0.217035608 container start a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:20:45 np0005535469 podman[416362]: 2025-11-25 17:20:45.486961388 +0000 UTC m=+0.220792021 container attach a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.489 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], port_security=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe53:c71b/64', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d7b29be-145f-4598-af6d-8fec1624b66c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.495 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[61f1c348-1459-4a9c-b740-31737a2d176f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.498 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24b2ec05-1f96-4c62-b6cc-31afb34144c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.531 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[11fbca66-dc82-435b-88df-3fc25195072a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 25 12:20:45 np0005535469 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 13.969s CPU time.
Nov 25 12:20:45 np0005535469 systemd-machined[216343]: Machine qemu-178-instance-00000090 terminated.
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.561 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a61b3b-34f2-45d0-8023-190a3005f5e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416400, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 NetworkManager[48891]: <info>  [1764091245.5783] manager: (tap0d7b29be-14): new Tun device (/org/freedesktop/NetworkManager/Devices/645)
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b163837e-5620-4cdb-b4fa-3064676de2f1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763664, 'tstamp': 763664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416409, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763669, 'tstamp': 763669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416409, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.598 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.612 254096 INFO nova.virt.libvirt.driver [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance destroyed successfully.#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.613 254096 DEBUG nova.objects.instance [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 9be9cbb4-878e-4fce-be7c-44b49480ff0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.612 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77970d23-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77970d23-50, col_values=(('external_ids', {'iface-id': '56e5945a-607b-4b8d-baa7-f3eab82e874d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.614 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d7b29be-145f-4598-af6d-8fec1624b66c in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.615 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad3526c9-ce3b-41ed-ae27-775dca6a1319#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.630 254096 DEBUG nova.virt.libvirt.vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:20:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:20:20Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.631 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.632 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.633 254096 DEBUG os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.633 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf169a03-a012-4042-9012-1052a16cfaf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.636 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a238a8-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.652 254096 INFO os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6')#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.653 254096 DEBUG nova.virt.libvirt.vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:20:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:20:20Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.654 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.654 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.655 254096 DEBUG os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.656 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d7b29be-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.663 254096 INFO os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14')#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.670 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0775b075-5077-4379-9f59-3b37b14a2fc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.675 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5966ea30-be46-4073-a704-e78a7112480b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.718 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[79280046-56ba-4cf0-908c-63f793302a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba75f6c-1275-4149-9c00-b92e7e93ed45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416467, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.756 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3aa770-8b3c-4f32-9541-1ec830c44444]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad3526c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763771, 'tstamp': 763771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416468, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.758 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 nova_compute[254092]: 2025-11-25 17:20:45.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.761 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad3526c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.762 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.762 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad3526c9-c0, col_values=(('external_ids', {'iface-id': '824643fe-f8f7-44ad-9711-a38080a49171'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.762 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:20:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:20:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2249891955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.088 254096 INFO nova.virt.libvirt.driver [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deleting instance files /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e_del#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.088 254096 INFO nova.virt.libvirt.driver [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deletion of /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e_del complete#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.105 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.134 254096 INFO nova.compute.manager [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.135 254096 DEBUG oslo.service.loopingcall [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.135 254096 DEBUG nova.compute.manager [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.135 254096 DEBUG nova.network.neutron [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.173 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.174 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.334 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.335 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3406MB free_disk=59.897212982177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.335 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.335 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]: {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:    "0": [
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:        {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "devices": [
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "/dev/loop3"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            ],
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_name": "ceph_lv0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_size": "21470642176",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "name": "ceph_lv0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "tags": {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cluster_name": "ceph",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.crush_device_class": "",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.encrypted": "0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osd_id": "0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.type": "block",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.vdo": "0"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            },
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "type": "block",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "vg_name": "ceph_vg0"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:        }
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:    ],
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:    "1": [
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:        {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "devices": [
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "/dev/loop4"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            ],
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_name": "ceph_lv1",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_size": "21470642176",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "name": "ceph_lv1",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "tags": {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cluster_name": "ceph",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.crush_device_class": "",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.encrypted": "0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osd_id": "1",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.type": "block",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.vdo": "0"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            },
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "type": "block",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "vg_name": "ceph_vg1"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:        }
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:    ],
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:    "2": [
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:        {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "devices": [
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "/dev/loop5"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            ],
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_name": "ceph_lv2",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_size": "21470642176",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "name": "ceph_lv2",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "tags": {
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.cluster_name": "ceph",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.crush_device_class": "",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.encrypted": "0",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osd_id": "2",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.type": "block",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:                "ceph.vdo": "0"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            },
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "type": "block",
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:            "vg_name": "ceph_vg2"
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:        }
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]:    ]
Nov 25 12:20:46 np0005535469 gallant_joliot[416378]: }
Nov 25 12:20:46 np0005535469 systemd[1]: libpod-a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5.scope: Deactivated successfully.
Nov 25 12:20:46 np0005535469 podman[416362]: 2025-11-25 17:20:46.403518927 +0000 UTC m=+1.137349530 container died a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:20:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d-merged.mount: Deactivated successfully.
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:46 np0005535469 podman[416362]: 2025-11-25 17:20:46.461618178 +0000 UTC m=+1.195448781 container remove a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:20:46 np0005535469 systemd[1]: libpod-conmon-a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5.scope: Deactivated successfully.
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.507 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 392384a1-1741-4504-b2c2-557420bbbbd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.508 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 9be9cbb4-878e-4fce-be7c-44b49480ff0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.509 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.509 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.574 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.621 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.622 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.641 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.661 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:20:46 np0005535469 nova_compute[254092]: 2025-11-25 17:20:46.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 25 12:20:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:20:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165480102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.174990535 +0000 UTC m=+0.045209041 container create 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.181 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.188 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.205 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:20:47 np0005535469 systemd[1]: Started libpod-conmon-17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82.scope.
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.230 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.230 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.158065745 +0000 UTC m=+0.028284241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:20:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.272853929 +0000 UTC m=+0.143072425 container init 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.287041975 +0000 UTC m=+0.157260491 container start 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.292193126 +0000 UTC m=+0.162411652 container attach 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:20:47 np0005535469 focused_hofstadter[416667]: 167 167
Nov 25 12:20:47 np0005535469 systemd[1]: libpod-17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82.scope: Deactivated successfully.
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.293129942 +0000 UTC m=+0.163348418 container died 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:20:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7eef56225a18b4c2d1c6a8e11865821a46bc3c7d4229e6fe9f31c3efdf8a478b-merged.mount: Deactivated successfully.
Nov 25 12:20:47 np0005535469 podman[416649]: 2025-11-25 17:20:47.34050838 +0000 UTC m=+0.210726856 container remove 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:20:47 np0005535469 systemd[1]: libpod-conmon-17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82.scope: Deactivated successfully.
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.411 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.412 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.412 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.412 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.413 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-unplugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.413 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.413 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.414 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.414 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.414 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 WARNING nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.416 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.416 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.416 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-unplugged-0d7b29be-145f-4598-af6d-8fec1624b66c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-0d7b29be-145f-4598-af6d-8fec1624b66c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.418 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.418 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.420 254096 WARNING nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.543 254096 DEBUG nova.network.neutron [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updated VIF entry in instance network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.544 254096 DEBUG nova.network.neutron [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.571 254096 DEBUG oslo_concurrency.lockutils [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:20:47 np0005535469 podman[416692]: 2025-11-25 17:20:47.599081079 +0000 UTC m=+0.070186451 container create 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:20:47 np0005535469 systemd[1]: Started libpod-conmon-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope.
Nov 25 12:20:47 np0005535469 podman[416692]: 2025-11-25 17:20:47.579884786 +0000 UTC m=+0.050990178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:20:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:20:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:20:47 np0005535469 podman[416692]: 2025-11-25 17:20:47.693777357 +0000 UTC m=+0.164882739 container init 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:20:47 np0005535469 podman[416692]: 2025-11-25 17:20:47.699332278 +0000 UTC m=+0.170437640 container start 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:20:47 np0005535469 podman[416692]: 2025-11-25 17:20:47.702822713 +0000 UTC m=+0.173928105 container attach 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:20:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:47.889 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:47 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:47.891 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.914 254096 DEBUG nova.network.neutron [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.926 254096 INFO nova.compute.manager [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 1.79 seconds to deallocate network for instance.#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.963 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:47 np0005535469 nova_compute[254092]: 2025-11-25 17:20:47.964 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.037 254096 DEBUG oslo_concurrency.processutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:20:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529710750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.483 254096 DEBUG oslo_concurrency.processutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.492 254096 DEBUG nova.compute.provider_tree [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.515 254096 DEBUG nova.scheduler.client.report [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.540 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.577 254096 INFO nova.scheduler.client.report [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 9be9cbb4-878e-4fce-be7c-44b49480ff0e#033[00m
Nov 25 12:20:48 np0005535469 nova_compute[254092]: 2025-11-25 17:20:48.643 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:48 np0005535469 cranky_booth[416707]: {
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "osd_id": 1,
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "type": "bluestore"
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:    },
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "osd_id": 2,
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "type": "bluestore"
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:    },
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "osd_id": 0,
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:        "type": "bluestore"
Nov 25 12:20:48 np0005535469 cranky_booth[416707]:    }
Nov 25 12:20:48 np0005535469 cranky_booth[416707]: }
Nov 25 12:20:48 np0005535469 systemd[1]: libpod-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope: Deactivated successfully.
Nov 25 12:20:48 np0005535469 systemd[1]: libpod-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope: Consumed 1.106s CPU time.
Nov 25 12:20:48 np0005535469 podman[416692]: 2025-11-25 17:20:48.803748679 +0000 UTC m=+1.274854041 container died 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:20:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699-merged.mount: Deactivated successfully.
Nov 25 12:20:48 np0005535469 podman[416692]: 2025-11-25 17:20:48.853275197 +0000 UTC m=+1.324380559 container remove 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:20:48 np0005535469 systemd[1]: libpod-conmon-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope: Deactivated successfully.
Nov 25 12:20:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:20:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:20:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:20:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:20:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d68ee3f2-6583-4ceb-a270-37a6e457aad0 does not exist
Nov 25 12:20:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a9ede9d2-45df-4942-9960-9968761e3c39 does not exist
Nov 25 12:20:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 6 op/s
Nov 25 12:20:49 np0005535469 nova_compute[254092]: 2025-11-25 17:20:49.230 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:49 np0005535469 nova_compute[254092]: 2025-11-25 17:20:49.231 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:20:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:20:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:20:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:49 np0005535469 nova_compute[254092]: 2025-11-25 17:20:49.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:49 np0005535469 nova_compute[254092]: 2025-11-25 17:20:49.522 254096 DEBUG nova.compute.manager [req-b50f1c42-ed9f-4ecf-a361-0b978ecba198 req-099a841f-9daf-4e3c-814c-71ef74b79285 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-deleted-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:49 np0005535469 nova_compute[254092]: 2025-11-25 17:20:49.523 254096 DEBUG nova.compute.manager [req-b50f1c42-ed9f-4ecf-a361-0b978ecba198 req-099a841f-9daf-4e3c-814c-71ef74b79285 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-deleted-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:50 np0005535469 nova_compute[254092]: 2025-11-25 17:20:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:50 np0005535469 nova_compute[254092]: 2025-11-25 17:20:50.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 25 12:20:51 np0005535469 nova_compute[254092]: 2025-11-25 17:20:51.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007602269119945302 of space, bias 1.0, pg target 0.22806807359835904 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:20:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:20:52 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:52.894 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.053 254096 DEBUG nova.compute.manager [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG nova.compute.manager [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG oslo_concurrency.lockutils [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG oslo_concurrency.lockutils [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG nova.network.neutron [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.117 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.118 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.118 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.118 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.119 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.120 254096 INFO nova.compute.manager [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Terminating instance#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.120 254096 DEBUG nova.compute.manager [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:20:53 np0005535469 kernel: tapb342a143-48 (unregistering): left promiscuous mode
Nov 25 12:20:53 np0005535469 NetworkManager[48891]: <info>  [1764091253.1817] device (tapb342a143-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:20:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:53Z|01560|binding|INFO|Releasing lport b342a143-48a8-46f1-90fc-229fadeb167e from this chassis (sb_readonly=0)
Nov 25 12:20:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:53Z|01561|binding|INFO|Setting lport b342a143-48a8-46f1-90fc-229fadeb167e down in Southbound
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:53Z|01562|binding|INFO|Removing iface tapb342a143-48 ovn-installed in OVS
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.255 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:d2:35 10.100.0.12'], port_security=['fa:16:3e:48:d2:35 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b342a143-48a8-46f1-90fc-229fadeb167e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.257 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b342a143-48a8-46f1-90fc-229fadeb167e in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 unbound from our chassis#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.259 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77970d23-547a-4e3a-bddf-f4770a15bf81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.260 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7807719c-f33e-4635-a26b-68ee8618254b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.261 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 namespace which is not needed anymore#033[00m
Nov 25 12:20:53 np0005535469 kernel: tapf6eeae44-ea (unregistering): left promiscuous mode
Nov 25 12:20:53 np0005535469 NetworkManager[48891]: <info>  [1764091253.2694] device (tapf6eeae44-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:53Z|01563|binding|INFO|Releasing lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f from this chassis (sb_readonly=0)
Nov 25 12:20:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:53Z|01564|binding|INFO|Setting lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f down in Southbound
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 ovn_controller[153477]: 2025-11-25T17:20:53Z|01565|binding|INFO|Removing iface tapf6eeae44-ea ovn-installed in OVS
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.291 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], port_security=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe0f:88e0/64', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f6eeae44-ea00-4543-a1e0-9ce45fbc399f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 25 12:20:53 np0005535469 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 16.563s CPU time.
Nov 25 12:20:53 np0005535469 systemd-machined[216343]: Machine qemu-177-instance-0000008f terminated.
Nov 25 12:20:53 np0005535469 NetworkManager[48891]: <info>  [1764091253.3464] manager: (tapb342a143-48): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Nov 25 12:20:53 np0005535469 NetworkManager[48891]: <info>  [1764091253.3577] manager: (tapf6eeae44-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/647)
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.375 254096 INFO nova.virt.libvirt.driver [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance destroyed successfully.#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.376 254096 DEBUG nova.objects.instance [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.388 254096 DEBUG nova.virt.libvirt.vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:19:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.389 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.391 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.391 254096 DEBUG os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.394 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb342a143-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.399 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.407 254096 INFO os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48')#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.409 254096 DEBUG nova.virt.libvirt.vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:19:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.409 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.410 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.411 254096 DEBUG os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.414 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6eeae44-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.422 254096 INFO os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea')#033[00m
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : haproxy version is 2.8.14-c23fe91
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : path to executable is /usr/sbin/haproxy
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [WARNING]  (415063) : Exiting Master process...
Nov 25 12:20:53 np0005535469 systemd[1]: libpod-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope: Deactivated successfully.
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [ALERT]    (415063) : Current worker (415065) exited with code 143 (Terminated)
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [WARNING]  (415063) : All workers exited. Exiting... (0)
Nov 25 12:20:53 np0005535469 conmon[415059]: conmon e9e5d2e4494488d2c7db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope/container/memory.events
Nov 25 12:20:53 np0005535469 podman[416874]: 2025-11-25 17:20:53.456894995 +0000 UTC m=+0.057909528 container died e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.480 254096 DEBUG nova.compute.manager [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-unplugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG oslo_concurrency.lockutils [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG oslo_concurrency.lockutils [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG oslo_concurrency.lockutils [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG nova.compute.manager [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-unplugged-b342a143-48a8-46f1-90fc-229fadeb167e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.482 254096 DEBUG nova.compute.manager [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-unplugged-b342a143-48a8-46f1-90fc-229fadeb167e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:20:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249-userdata-shm.mount: Deactivated successfully.
Nov 25 12:20:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e5e3d52ff50bf53cf2fb737f96eefa352cab2347302bd90a6108c9336c866d1c-merged.mount: Deactivated successfully.
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:20:53 np0005535469 podman[416874]: 2025-11-25 17:20:53.501993673 +0000 UTC m=+0.103008206 container cleanup e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:20:53 np0005535469 systemd[1]: libpod-conmon-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope: Deactivated successfully.
Nov 25 12:20:53 np0005535469 podman[416924]: 2025-11-25 17:20:53.605733936 +0000 UTC m=+0.068261508 container remove e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4b02f0-6a4e-41c9-b14f-3571d9592509]: (4, ('Tue Nov 25 05:20:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 (e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249)\ne9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249\nTue Nov 25 05:20:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 (e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249)\ne9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53ad154f-40d6-4fe0-b731-cfdc25147aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.615 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 kernel: tap77970d23-50: left promiscuous mode
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.636 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f79c9e7b-cab8-478f-9e24-d164ffaa6550]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.654 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[82cdab4a-d972-4712-925b-657246cce9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.656 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fb09ab5d-2eca-48ef-bd43-8a46075009c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.677 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab63046-0abf-449d-96fa-3a2764610d0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763639, 'reachable_time': 22799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416940, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 systemd[1]: run-netns-ovnmeta\x2d77970d23\x2d547a\x2d4e3a\x2dbddf\x2df4770a15bf81.mount: Deactivated successfully.
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.682 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.682 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[df6aeafa-7fdc-4478-b252-618a4798f31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.683 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f6eeae44-ea00-4543-a1e0-9ce45fbc399f in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.684 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad3526c9-ce3b-41ed-ae27-775dca6a1319, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.685 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d36d086-9fa2-49d7-8941-81b500e32d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.685 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 namespace which is not needed anymore#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.829 254096 INFO nova.virt.libvirt.driver [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deleting instance files /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0_del#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.831 254096 INFO nova.virt.libvirt.driver [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deletion of /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0_del complete#033[00m
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : haproxy version is 2.8.14-c23fe91
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : path to executable is /usr/sbin/haproxy
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [WARNING]  (415224) : Exiting Master process...
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [ALERT]    (415224) : Current worker (415226) exited with code 143 (Terminated)
Nov 25 12:20:53 np0005535469 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [WARNING]  (415224) : All workers exited. Exiting... (0)
Nov 25 12:20:53 np0005535469 systemd[1]: libpod-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6.scope: Deactivated successfully.
Nov 25 12:20:53 np0005535469 podman[416958]: 2025-11-25 17:20:53.847019034 +0000 UTC m=+0.065283268 container died 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:20:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6-userdata-shm.mount: Deactivated successfully.
Nov 25 12:20:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0883bcdd26bb7ed72d7d0ceebbe0bf6761be99f4ffb7fcfd366aae7f03f2ca46-merged.mount: Deactivated successfully.
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.890 254096 INFO nova.compute.manager [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.891 254096 DEBUG oslo.service.loopingcall [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.891 254096 DEBUG nova.compute.manager [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:20:53 np0005535469 nova_compute[254092]: 2025-11-25 17:20:53.891 254096 DEBUG nova.network.neutron [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:20:53 np0005535469 podman[416958]: 2025-11-25 17:20:53.895925875 +0000 UTC m=+0.114190109 container cleanup 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:20:53 np0005535469 systemd[1]: libpod-conmon-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6.scope: Deactivated successfully.
Nov 25 12:20:54 np0005535469 podman[416987]: 2025-11-25 17:20:54.00302556 +0000 UTC m=+0.071660622 container remove 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7576db8-aebe-4b58-89b2-a540abc1b68e]: (4, ('Tue Nov 25 05:20:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 (05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6)\n05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6\nTue Nov 25 05:20:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 (05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6)\n05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[375a2b59-f84f-424a-a0aa-3550785cfcde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:20:54 np0005535469 nova_compute[254092]: 2025-11-25 17:20:54.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:54 np0005535469 kernel: tapad3526c9-c0: left promiscuous mode
Nov 25 12:20:54 np0005535469 nova_compute[254092]: 2025-11-25 17:20:54.025 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.029 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc1b319-a1a6-4193-9f08-44eefb464d60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.064 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9866a294-69bd-447e-97ee-96b7ccdd8338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.066 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d664e360-2487-4959-8419-f7b21ae1ceb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.091 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6e7010-e7c3-49b3-953b-00a5d382a668]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763750, 'reachable_time': 17365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417002, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.094 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:20:54 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.094 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[423172fd-25c4-44f9-905d-c780b9d057ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:20:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:54 np0005535469 systemd[1]: run-netns-ovnmeta\x2dad3526c9\x2dce3b\x2d41ed\x2dae27\x2d775dca6a1319.mount: Deactivated successfully.
Nov 25 12:20:54 np0005535469 nova_compute[254092]: 2025-11-25 17:20:54.646 254096 DEBUG nova.network.neutron [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated VIF entry in instance network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:20:54 np0005535469 nova_compute[254092]: 2025-11-25 17:20:54.646 254096 DEBUG nova.network.neutron [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:54 np0005535469 nova_compute[254092]: 2025-11-25 17:20:54.661 254096 DEBUG oslo_concurrency.lockutils [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:20:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.097 254096 DEBUG nova.network.neutron [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.116 254096 INFO nova.compute.manager [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 1.22 seconds to deallocate network for instance.#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.158 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.158 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.187 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-unplugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-unplugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 WARNING nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-unplugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.190 254096 WARNING nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.224 254096 DEBUG oslo_concurrency.processutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:20:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:20:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775007000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:20:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:20:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775007000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.580 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.582 254096 DEBUG oslo_concurrency.lockutils [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.582 254096 DEBUG oslo_concurrency.lockutils [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 DEBUG oslo_concurrency.lockutils [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 WARNING nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-deleted-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.584 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-deleted-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:20:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:20:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2085277808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.728 254096 DEBUG oslo_concurrency.processutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.738 254096 DEBUG nova.compute.provider_tree [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.753 254096 DEBUG nova.scheduler.client.report [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.770 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.801 254096 INFO nova.scheduler.client.report [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 392384a1-1741-4504-b2c2-557420bbbbd0#033[00m
Nov 25 12:20:55 np0005535469 nova_compute[254092]: 2025-11-25 17:20:55.858 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:20:56 np0005535469 nova_compute[254092]: 2025-11-25 17:20:56.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 57 op/s
Nov 25 12:20:58 np0005535469 nova_compute[254092]: 2025-11-25 17:20:58.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:20:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 KiB/s wr, 51 op/s
Nov 25 12:20:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:20:59 np0005535469 nova_compute[254092]: 2025-11-25 17:20:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:00 np0005535469 nova_compute[254092]: 2025-11-25 17:21:00.595 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091245.593738, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:21:00 np0005535469 nova_compute[254092]: 2025-11-25 17:21:00.596 254096 INFO nova.compute.manager [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:21:00 np0005535469 nova_compute[254092]: 2025-11-25 17:21:00.623 254096 DEBUG nova.compute.manager [None req-0ef22437-c9bd-44c2-9bb7-b63dbdd7bede - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:21:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 KiB/s wr, 51 op/s
Nov 25 12:21:01 np0005535469 nova_compute[254092]: 2025-11-25 17:21:01.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:21:03 np0005535469 nova_compute[254092]: 2025-11-25 17:21:03.452 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:04 np0005535469 nova_compute[254092]: 2025-11-25 17:21:04.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:04 np0005535469 nova_compute[254092]: 2025-11-25 17:21:04.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:21:06 np0005535469 nova_compute[254092]: 2025-11-25 17:21:06.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 12:21:08 np0005535469 nova_compute[254092]: 2025-11-25 17:21:08.374 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091253.3736198, 392384a1-1741-4504-b2c2-557420bbbbd0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:21:08 np0005535469 nova_compute[254092]: 2025-11-25 17:21:08.375 254096 INFO nova.compute.manager [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:21:08 np0005535469 nova_compute[254092]: 2025-11-25 17:21:08.401 254096 DEBUG nova.compute.manager [None req-15471841-dae6-40ca-9669-ba9d8703fc9f - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:21:08 np0005535469 nova_compute[254092]: 2025-11-25 17:21:08.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:21:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:21:10 np0005535469 podman[417028]: 2025-11-25 17:21:10.665500262 +0000 UTC m=+0.070528160 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:21:10 np0005535469 podman[417027]: 2025-11-25 17:21:10.666201731 +0000 UTC m=+0.077463260 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:21:10 np0005535469 podman[417029]: 2025-11-25 17:21:10.703617179 +0000 UTC m=+0.111231528 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:21:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:11 np0005535469 nova_compute[254092]: 2025-11-25 17:21:11.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:13 np0005535469 nova_compute[254092]: 2025-11-25 17:21:13.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:13.658 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:13.659 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:13.659 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:16 np0005535469 nova_compute[254092]: 2025-11-25 17:21:16.499 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.383 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2 2001:db8::f816:3eff:fe42:fad8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:fad8/64', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f8afc421-e45b-4911-af18-dd32853c6b8c) old=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:21:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.385 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f8afc421-e45b-4911-af18-dd32853c6b8c in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 updated#033[00m
Nov 25 12:21:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.385 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:21:18 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.386 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dad5144a-5390-460f-b1b5-e63cc14f060d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:18 np0005535469 nova_compute[254092]: 2025-11-25 17:21:18.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:21 np0005535469 nova_compute[254092]: 2025-11-25 17:21:21.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:23 np0005535469 nova_compute[254092]: 2025-11-25 17:21:23.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.224 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2 2001:db8:0:1:f816:3eff:fe42:fad8 2001:db8::f816:3eff:fe42:fad8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe42:fad8/64 2001:db8::f816:3eff:fe42:fad8/64', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f8afc421-e45b-4911-af18-dd32853c6b8c) old=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2 2001:db8::f816:3eff:fe42:fad8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:fad8/64', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:21:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.226 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f8afc421-e45b-4911-af18-dd32853c6b8c in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 updated#033[00m
Nov 25 12:21:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:21:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1925bc6c-9bbe-4c38-b0f4-fed27e4cd590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:26 np0005535469 nova_compute[254092]: 2025-11-25 17:21:26.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:28 np0005535469 nova_compute[254092]: 2025-11-25 17:21:28.509 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:31 np0005535469 nova_compute[254092]: 2025-11-25 17:21:31.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.253 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.254 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.269 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.365 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.366 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.379 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.380 254096 INFO nova.compute.claims [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:33 np0005535469 nova_compute[254092]: 2025-11-25 17:21:33.519 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:21:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517089168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.000 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.007 254096 DEBUG nova.compute.provider_tree [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.026 254096 DEBUG nova.scheduler.client.report [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.052 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.053 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.121 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.122 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.138 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.153 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.231 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.232 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.232 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Creating image(s)#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.258 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.283 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.307 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.312 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.419 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.420 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.421 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.421 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.445 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.449 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.504 254096 DEBUG nova.policy [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.812 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:34 np0005535469 nova_compute[254092]: 2025-11-25 17:21:34.900 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:21:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 75 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 1.2 MiB/s wr, 3 op/s
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.089 254096 DEBUG nova.objects.instance [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.101 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.102 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Ensure instance console log exists: /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.102 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.103 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.103 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:35 np0005535469 nova_compute[254092]: 2025-11-25 17:21:35.167 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Successfully created port: c1ba1b56-3c61-42fa-b23d-44349357a11a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.690 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Successfully updated port: c1ba1b56-3c61-42fa-b23d-44349357a11a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.705 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.706 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.706 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.818 254096 DEBUG nova.compute.manager [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.819 254096 DEBUG nova.compute.manager [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing instance network info cache due to event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.819 254096 DEBUG oslo_concurrency.lockutils [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:21:36 np0005535469 nova_compute[254092]: 2025-11-25 17:21:36.890 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:21:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 12:21:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:21:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 13K writes, 60K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1322 writes, 5759 keys, 1322 commit groups, 1.0 writes per commit group, ingest: 8.57 MB, 0.01 MB/s
Interval WAL: 1322 writes, 1322 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.7      2.20              0.23        41    0.054       0      0       0.0       0.0
  L6      1/0    8.00 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7    115.7     97.3      3.35              0.91        40    0.084    253K    22K       0.0       0.0
 Sum      1/0    8.00 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     69.9     71.3      5.54              1.15        81    0.068    253K    22K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.6    154.7    154.6      0.27              0.11         8    0.034     33K   1991       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    115.7     97.3      3.35              0.91        40    0.084    253K    22K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     32.4      2.15              0.23        40    0.054       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 5400.1 total, 600.0 interval
Flush(GB): cumulative 0.068, interval 0.005
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.39 GB write, 0.07 MB/s write, 0.38 GB read, 0.07 MB/s read, 5.5 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 45.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000589 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2956,43.58 MB,14.3359%) FilterBlock(82,707.05 KB,0.22713%) IndexBlock(82,1.14 MB,0.373805%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.290152) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298290189, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3433366, "memory_usage": 3500712, "flush_reason": "Manual Compaction"}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298329204, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 3367376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59177, "largest_seqno": 61230, "table_properties": {"data_size": 3357947, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18821, "raw_average_key_size": 20, "raw_value_size": 3339374, "raw_average_value_size": 3571, "num_data_blocks": 265, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091071, "oldest_key_time": 1764091071, "file_creation_time": 1764091298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 39107 microseconds, and 10593 cpu microseconds.
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.329256) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 3367376 bytes OK
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.329280) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.331482) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.331499) EVENT_LOG_v1 {"time_micros": 1764091298331493, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.331520) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 3424761, prev total WAL file size 3424761, number of live WAL files 2.
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.332748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(3288KB)], [137(8192KB)]
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298332790, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11756627, "oldest_snapshot_seqno": -1}
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.381 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8160 keys, 10018119 bytes, temperature: kUnknown
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298422860, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 10018119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9966072, "index_size": 30558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 212956, "raw_average_key_size": 26, "raw_value_size": 9822893, "raw_average_value_size": 1203, "num_data_blocks": 1188, "num_entries": 8160, "num_filter_entries": 8160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.423174) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10018119 bytes
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.424534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.4 rd, 111.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8674, records dropped: 514 output_compression: NoCompression
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.424555) EVENT_LOG_v1 {"time_micros": 1764091298424544, "job": 84, "event": "compaction_finished", "compaction_time_micros": 90167, "compaction_time_cpu_micros": 32381, "output_level": 6, "num_output_files": 1, "total_output_size": 10018119, "num_input_records": 8674, "num_output_records": 8160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298425472, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298427424, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.332614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.432 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.433 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance network_info: |[{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.433 254096 DEBUG oslo_concurrency.lockutils [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.433 254096 DEBUG nova.network.neutron [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.436 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start _get_guest_xml network_info=[{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.440 254096 WARNING nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.445 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.445 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.448 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.448 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.451 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.453 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:21:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765263425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.884 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.918 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:38 np0005535469 nova_compute[254092]: 2025-11-25 17:21:38.923 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 12:21:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:21:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854290813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.383 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.385 254096 DEBUG nova.virt.libvirt.vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:21:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1075663045',display_name='tempest-TestGettingAddress-server-1075663045',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1075663045',id=145,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-0xnzfd4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:21:34Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=36b839e5-d6db-406a-ab95-bbdcd48c531d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.385 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.386 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.387 254096 DEBUG nova.objects.instance [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.415 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <uuid>36b839e5-d6db-406a-ab95-bbdcd48c531d</uuid>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <name>instance-00000091</name>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1075663045</nova:name>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:21:38</nova:creationTime>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <nova:port uuid="c1ba1b56-3c61-42fa-b23d-44349357a11a">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe77:8aa5" ipVersion="6"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe77:8aa5" ipVersion="6"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <entry name="serial">36b839e5-d6db-406a-ab95-bbdcd48c531d</entry>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <entry name="uuid">36b839e5-d6db-406a-ab95-bbdcd48c531d</entry>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/36b839e5-d6db-406a-ab95-bbdcd48c531d_disk">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:77:8a:a5"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <target dev="tapc1ba1b56-3c"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/console.log" append="off"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:21:39 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:21:39 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:21:39 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:21:39 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.416 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Preparing to wait for external event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.417 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.417 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.417 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.418 254096 DEBUG nova.virt.libvirt.vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:21:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1075663045',display_name='tempest-TestGettingAddress-server-1075663045',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1075663045',id=145,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-0xnzfd4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:21:34Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=36b839e5-d6db-406a-ab95-bbdcd48c531d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.419 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.420 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.420 254096 DEBUG os_vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.421 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.422 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ba1b56-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ba1b56-3c, col_values=(('external_ids', {'iface-id': 'c1ba1b56-3c61-42fa-b23d-44349357a11a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:8a:a5', 'vm-uuid': '36b839e5-d6db-406a-ab95-bbdcd48c531d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:39 np0005535469 NetworkManager[48891]: <info>  [1764091299.4333] manager: (tapc1ba1b56-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.438 254096 INFO os_vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c')#033[00m
Nov 25 12:21:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.493 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.494 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.495 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:77:8a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.496 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Using config drive#033[00m
Nov 25 12:21:39 np0005535469 nova_compute[254092]: 2025-11-25 17:21:39.525 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.014 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Creating config drive at /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.019 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplg17ba1i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.170 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplg17ba1i" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.194 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.199 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:21:40
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.361 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.362 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deleting local config drive /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config because it was imported into RBD.#033[00m
Nov 25 12:21:40 np0005535469 kernel: tapc1ba1b56-3c: entered promiscuous mode
Nov 25 12:21:40 np0005535469 NetworkManager[48891]: <info>  [1764091300.4433] manager: (tapc1ba1b56-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Nov 25 12:21:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:40Z|01566|binding|INFO|Claiming lport c1ba1b56-3c61-42fa-b23d-44349357a11a for this chassis.
Nov 25 12:21:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:40Z|01567|binding|INFO|c1ba1b56-3c61-42fa-b23d-44349357a11a: Claiming fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.466 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], port_security=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe77:8aa5/64 2001:db8::f816:3eff:fe77:8aa5/64', 'neutron:device_id': '36b839e5-d6db-406a-ab95-bbdcd48c531d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c1ba1b56-3c61-42fa-b23d-44349357a11a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.469 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c1ba1b56-3c61-42fa-b23d-44349357a11a in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 bound to our chassis#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.470 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252#033[00m
Nov 25 12:21:40 np0005535469 systemd-udevd[417415]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.488 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1791ae-3f33-4c70-9573-a24057cb22e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.498 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84058e12-51 in ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:21:40 np0005535469 NetworkManager[48891]: <info>  [1764091300.5000] device (tapc1ba1b56-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:21:40 np0005535469 NetworkManager[48891]: <info>  [1764091300.5009] device (tapc1ba1b56-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:21:40 np0005535469 systemd-machined[216343]: New machine qemu-179-instance-00000091.
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.500 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84058e12-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.500 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bda60cd8-9372-43cb-8561-d5202b9b00a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb44e547-68e9-47eb-bf06-71a4adca7ca3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.519 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a116e07e-a18d-423a-b552-673ed478ff9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:40Z|01568|binding|INFO|Setting lport c1ba1b56-3c61-42fa-b23d-44349357a11a ovn-installed in OVS
Nov 25 12:21:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:40Z|01569|binding|INFO|Setting lport c1ba1b56-3c61-42fa-b23d-44349357a11a up in Southbound
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.549 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b06037ed-d158-488c-bde5-a412564e1e20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.588 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2493ece1-ba7d-4b16-ba32-5d10eba2af02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e160b306-ba4e-4a75-88ff-1de8d4db2b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 NetworkManager[48891]: <info>  [1764091300.5985] manager: (tap84058e12-50): new Veth device (/org/freedesktop/NetworkManager/Devices/650)
Nov 25 12:21:40 np0005535469 systemd-udevd[417421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.639 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[440968ad-cb36-4fd0-9cd4-f49eb30eb369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.642 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f17b8b1-f98c-4f00-8d50-d98b42d755fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 NetworkManager[48891]: <info>  [1764091300.6703] device (tap84058e12-50): carrier: link connected
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.677 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4b00e507-210e-48e0-a845-952d953fbf1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.698 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[865e69e5-402f-4b4d-9979-5b535c82d2e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417453, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.713 254096 DEBUG nova.network.neutron [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated VIF entry in instance network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.714 254096 DEBUG nova.network.neutron [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e540a337-df1b-4b81-98d0-232314ecb719]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:fad8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775826, 'tstamp': 775826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417454, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.730 254096 DEBUG oslo_concurrency.lockutils [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.738 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2956a2-d8eb-4847-9b51-eeffd4bf1eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417455, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e8085b-1bba-4908-a8f0-afbcfce580a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.863 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4c3b4a-1890-4309-9438-9fa9381f0e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.865 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.866 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.867 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84058e12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:21:40 np0005535469 kernel: tap84058e12-50: entered promiscuous mode
Nov 25 12:21:40 np0005535469 NetworkManager[48891]: <info>  [1764091300.8705] manager: (tap84058e12-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84058e12-50, col_values=(('external_ids', {'iface-id': 'f8afc421-e45b-4911-af18-dd32853c6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:21:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:40Z|01570|binding|INFO|Releasing lport f8afc421-e45b-4911-af18-dd32853c6b8c from this chassis (sb_readonly=0)
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.884 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84058e12-5c2c-4ee6-a8bb-052eff4cc252.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84058e12-5c2c-4ee6-a8bb-052eff4cc252.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.885 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab9ec2e-0f69-4acf-9a00-8c63a33cede6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.886 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/84058e12-5c2c-4ee6-a8bb-052eff4cc252.pid.haproxy
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:21:40 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.887 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'env', 'PROCESS_TAG=haproxy-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84058e12-5c2c-4ee6-a8bb-052eff4cc252.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:21:40 np0005535469 nova_compute[254092]: 2025-11-25 17:21:40.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.147 254096 DEBUG nova.compute.manager [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG oslo_concurrency.lockutils [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG oslo_concurrency.lockutils [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG oslo_concurrency.lockutils [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG nova.compute.manager [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Processing event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:21:41 np0005535469 podman[417524]: 2025-11-25 17:21:41.274038323 +0000 UTC m=+0.055976205 container create 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.304 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091301.303422, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.305 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Started (Lifecycle Event)#033[00m
Nov 25 12:21:41 np0005535469 systemd[1]: Started libpod-conmon-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55.scope.
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.310 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.317 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.321 254096 INFO nova.virt.libvirt.driver [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance spawned successfully.#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.321 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.324 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.330 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:21:41 np0005535469 podman[417524]: 2025-11-25 17:21:41.244501758 +0000 UTC m=+0.026439700 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:21:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.340 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.341 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.341 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.342 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.342 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.342 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:21:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a1850d0a22111fadf1ed03693f6973e2c59b4c9475f9c71c2ca4d4e5534c52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.357 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.357 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091301.3035274, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:21:41 np0005535469 podman[417524]: 2025-11-25 17:21:41.365942704 +0000 UTC m=+0.147880606 container init 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:21:41 np0005535469 podman[417524]: 2025-11-25 17:21:41.371726662 +0000 UTC m=+0.153664534 container start 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:21:41 np0005535469 podman[417544]: 2025-11-25 17:21:41.381263141 +0000 UTC m=+0.069612156 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.388 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091301.317223, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.388 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:21:41 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : New worker (417604) forked
Nov 25 12:21:41 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : Loading success.
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.404 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.410 254096 INFO nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 7.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.410 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.411 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.434 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:21:41 np0005535469 podman[417547]: 2025-11-25 17:21:41.466778119 +0000 UTC m=+0.148517254 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.470 254096 INFO nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 8.15 seconds to build instance.#033[00m
Nov 25 12:21:41 np0005535469 podman[417546]: 2025-11-25 17:21:41.472371191 +0000 UTC m=+0.155610677 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.485 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:41 np0005535469 nova_compute[254092]: 2025-11-25 17:21:41.509 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 12:21:43 np0005535469 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG nova.compute.manager [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:21:43 np0005535469 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG oslo_concurrency.lockutils [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:43 np0005535469 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG oslo_concurrency.lockutils [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:43 np0005535469 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG oslo_concurrency.lockutils [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:43 np0005535469 nova_compute[254092]: 2025-11-25 17:21:43.251 254096 DEBUG nova.compute.manager [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] No waiting events found dispatching network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:21:43 np0005535469 nova_compute[254092]: 2025-11-25 17:21:43.251 254096 WARNING nova.compute.manager [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received unexpected event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a for instance with vm_state active and task_state None.#033[00m
Nov 25 12:21:44 np0005535469 nova_compute[254092]: 2025-11-25 17:21:44.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:44 np0005535469 nova_compute[254092]: 2025-11-25 17:21:44.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:21:45 np0005535469 nova_compute[254092]: 2025-11-25 17:21:45.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:21:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385882845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.079 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.176 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.176 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:21:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:46Z|01571|binding|INFO|Releasing lport f8afc421-e45b-4911-af18-dd32853c6b8c from this chassis (sb_readonly=0)
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:46 np0005535469 NetworkManager[48891]: <info>  [1764091306.3315] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Nov 25 12:21:46 np0005535469 NetworkManager[48891]: <info>  [1764091306.3327] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Nov 25 12:21:46 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:46Z|01572|binding|INFO|Releasing lport f8afc421-e45b-4911-af18-dd32853c6b8c from this chassis (sb_readonly=0)
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.365 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.372 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.373 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.374 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.374 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.437 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 36b839e5-d6db-406a-ab95-bbdcd48c531d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.488 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:21:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172299829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.966 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.972 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:21:46 np0005535469 nova_compute[254092]: 2025-11-25 17:21:46.987 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.007 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.008 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:21:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 97 op/s
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.262 254096 DEBUG nova.compute.manager [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.263 254096 DEBUG nova.compute.manager [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing instance network info cache due to event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.264 254096 DEBUG oslo_concurrency.lockutils [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.264 254096 DEBUG oslo_concurrency.lockutils [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:21:47 np0005535469 nova_compute[254092]: 2025-11-25 17:21:47.265 254096 DEBUG nova.network.neutron [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:21:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:21:49 np0005535469 nova_compute[254092]: 2025-11-25 17:21:49.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:49 np0005535469 nova_compute[254092]: 2025-11-25 17:21:49.442 254096 DEBUG nova.network.neutron [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated VIF entry in instance network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:21:49 np0005535469 nova_compute[254092]: 2025-11-25 17:21:49.443 254096 DEBUG nova.network.neutron [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:21:49 np0005535469 nova_compute[254092]: 2025-11-25 17:21:49.463 254096 DEBUG oslo_concurrency.lockutils [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:21:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:21:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0add22ef-98d6-47e6-ac32-751a39afb176 does not exist
Nov 25 12:21:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4a22fd6a-19d1-47de-a5ca-f68d6e0cc77e does not exist
Nov 25 12:21:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a65cfcfb-80e1-432c-8745-ff3e7b913c39 does not exist
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:21:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:51.001904157 +0000 UTC m=+0.084504091 container create d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:21:51 np0005535469 nova_compute[254092]: 2025-11-25 17:21:51.004 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:51 np0005535469 nova_compute[254092]: 2025-11-25 17:21:51.005 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:51 np0005535469 nova_compute[254092]: 2025-11-25 17:21:51.005 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:21:51 np0005535469 systemd[1]: Started libpod-conmon-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope.
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:50.966540435 +0000 UTC m=+0.049140459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:21:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:51.089750258 +0000 UTC m=+0.172350222 container init d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:51.097461229 +0000 UTC m=+0.180061153 container start d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:51.101367065 +0000 UTC m=+0.183966999 container attach d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:21:51 np0005535469 affectionate_solomon[417954]: 167 167
Nov 25 12:21:51 np0005535469 systemd[1]: libpod-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope: Deactivated successfully.
Nov 25 12:21:51 np0005535469 conmon[417954]: conmon d3645e96b950b4609db5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope/container/memory.events
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:51.105901778 +0000 UTC m=+0.188501712 container died d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:21:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-629f43ccc21f6f9b10d442d631691b756d692b9f22c80b602f0bfbd0537e1ba3-merged.mount: Deactivated successfully.
Nov 25 12:21:51 np0005535469 podman[417936]: 2025-11-25 17:21:51.151199931 +0000 UTC m=+0.233799875 container remove d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:21:51 np0005535469 systemd[1]: libpod-conmon-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope: Deactivated successfully.
Nov 25 12:21:51 np0005535469 podman[417977]: 2025-11-25 17:21:51.378439406 +0000 UTC m=+0.072330010 container create 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:21:51 np0005535469 podman[417977]: 2025-11-25 17:21:51.349255641 +0000 UTC m=+0.043146285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:21:51 np0005535469 systemd[1]: Started libpod-conmon-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope.
Nov 25 12:21:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:51 np0005535469 podman[417977]: 2025-11-25 17:21:51.503404597 +0000 UTC m=+0.197295281 container init 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:21:51 np0005535469 podman[417977]: 2025-11-25 17:21:51.51413996 +0000 UTC m=+0.208030544 container start 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 12:21:51 np0005535469 podman[417977]: 2025-11-25 17:21:51.517697727 +0000 UTC m=+0.211588411 container attach 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:21:51 np0005535469 nova_compute[254092]: 2025-11-25 17:21:51.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:21:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:21:52 np0005535469 nova_compute[254092]: 2025-11-25 17:21:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:52 np0005535469 unruffled_goldstine[417993]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:21:52 np0005535469 unruffled_goldstine[417993]: --> relative data size: 1.0
Nov 25 12:21:52 np0005535469 unruffled_goldstine[417993]: --> All data devices are unavailable
Nov 25 12:21:52 np0005535469 systemd[1]: libpod-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope: Deactivated successfully.
Nov 25 12:21:52 np0005535469 systemd[1]: libpod-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope: Consumed 1.136s CPU time.
Nov 25 12:21:52 np0005535469 podman[417977]: 2025-11-25 17:21:52.70852837 +0000 UTC m=+1.402418964 container died 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:21:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751-merged.mount: Deactivated successfully.
Nov 25 12:21:52 np0005535469 podman[417977]: 2025-11-25 17:21:52.780043507 +0000 UTC m=+1.473934111 container remove 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:21:52 np0005535469 systemd[1]: libpod-conmon-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope: Deactivated successfully.
Nov 25 12:21:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.579818726 +0000 UTC m=+0.062466382 container create 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:21:53 np0005535469 systemd[1]: Started libpod-conmon-84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b.scope.
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.5549604 +0000 UTC m=+0.037608076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:21:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.673701741 +0000 UTC m=+0.156349417 container init 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.681501764 +0000 UTC m=+0.164149410 container start 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.685154563 +0000 UTC m=+0.167802229 container attach 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:21:53 np0005535469 quizzical_yonath[418194]: 167 167
Nov 25 12:21:53 np0005535469 systemd[1]: libpod-84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b.scope: Deactivated successfully.
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.691279769 +0000 UTC m=+0.173927415 container died 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:21:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4069d982c1e13df212c966c5032461abc40ecbc7166043e6e4c8160348cdb910-merged.mount: Deactivated successfully.
Nov 25 12:21:53 np0005535469 podman[418177]: 2025-11-25 17:21:53.730566979 +0000 UTC m=+0.213214625 container remove 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:21:53 np0005535469 systemd[1]: libpod-conmon-84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b.scope: Deactivated successfully.
Nov 25 12:21:53 np0005535469 podman[418219]: 2025-11-25 17:21:53.967064807 +0000 UTC m=+0.069815962 container create cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:21:54 np0005535469 podman[418219]: 2025-11-25 17:21:53.933514993 +0000 UTC m=+0.036266198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:21:54 np0005535469 systemd[1]: Started libpod-conmon-cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1.scope.
Nov 25 12:21:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:54 np0005535469 podman[418219]: 2025-11-25 17:21:54.073764361 +0000 UTC m=+0.176515586 container init cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:21:54 np0005535469 podman[418219]: 2025-11-25 17:21:54.08513585 +0000 UTC m=+0.187887015 container start cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:21:54 np0005535469 podman[418219]: 2025-11-25 17:21:54.0902448 +0000 UTC m=+0.192996005 container attach cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:21:54 np0005535469 nova_compute[254092]: 2025-11-25 17:21:54.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:54Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:8a:a5 10.100.0.3
Nov 25 12:21:54 np0005535469 ovn_controller[153477]: 2025-11-25T17:21:54Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:8a:a5 10.100.0.3
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]: {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:    "0": [
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:        {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "devices": [
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "/dev/loop3"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            ],
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_name": "ceph_lv0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_size": "21470642176",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "name": "ceph_lv0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "tags": {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cluster_name": "ceph",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.crush_device_class": "",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.encrypted": "0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osd_id": "0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.type": "block",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.vdo": "0"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            },
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "type": "block",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "vg_name": "ceph_vg0"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:        }
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:    ],
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:    "1": [
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:        {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "devices": [
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "/dev/loop4"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            ],
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_name": "ceph_lv1",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_size": "21470642176",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "name": "ceph_lv1",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "tags": {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cluster_name": "ceph",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.crush_device_class": "",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.encrypted": "0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osd_id": "1",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.type": "block",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.vdo": "0"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            },
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "type": "block",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "vg_name": "ceph_vg1"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:        }
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:    ],
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:    "2": [
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:        {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "devices": [
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "/dev/loop5"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            ],
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_name": "ceph_lv2",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_size": "21470642176",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "name": "ceph_lv2",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "tags": {
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.cluster_name": "ceph",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.crush_device_class": "",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.encrypted": "0",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osd_id": "2",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.type": "block",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:                "ceph.vdo": "0"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            },
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "type": "block",
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:            "vg_name": "ceph_vg2"
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:        }
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]:    ]
Nov 25 12:21:54 np0005535469 wonderful_solomon[418236]: }
Nov 25 12:21:55 np0005535469 systemd[1]: libpod-cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1.scope: Deactivated successfully.
Nov 25 12:21:55 np0005535469 podman[418219]: 2025-11-25 17:21:55.012504733 +0000 UTC m=+1.115255888 container died cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:21:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 12:21:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054-merged.mount: Deactivated successfully.
Nov 25 12:21:55 np0005535469 podman[418219]: 2025-11-25 17:21:55.092432809 +0000 UTC m=+1.195183934 container remove cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:21:55 np0005535469 systemd[1]: libpod-conmon-cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1.scope: Deactivated successfully.
Nov 25 12:21:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:21:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2929909979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:21:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:21:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2929909979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:21:55 np0005535469 nova_compute[254092]: 2025-11-25 17:21:55.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:55 np0005535469 nova_compute[254092]: 2025-11-25 17:21:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:21:55 np0005535469 nova_compute[254092]: 2025-11-25 17:21:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:21:55 np0005535469 podman[418400]: 2025-11-25 17:21:55.86620635 +0000 UTC m=+0.057453085 container create bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:21:55 np0005535469 podman[418400]: 2025-11-25 17:21:55.841949179 +0000 UTC m=+0.033195914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:21:55 np0005535469 systemd[1]: Started libpod-conmon-bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a.scope.
Nov 25 12:21:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:56 np0005535469 podman[418400]: 2025-11-25 17:21:56.019865433 +0000 UTC m=+0.211112198 container init bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:21:56 np0005535469 podman[418400]: 2025-11-25 17:21:56.028181099 +0000 UTC m=+0.219427834 container start bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:21:56 np0005535469 podman[418400]: 2025-11-25 17:21:56.033552415 +0000 UTC m=+0.224799210 container attach bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:21:56 np0005535469 awesome_archimedes[418416]: 167 167
Nov 25 12:21:56 np0005535469 systemd[1]: libpod-bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a.scope: Deactivated successfully.
Nov 25 12:21:56 np0005535469 podman[418400]: 2025-11-25 17:21:56.035269841 +0000 UTC m=+0.226516576 container died bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:21:56 np0005535469 nova_compute[254092]: 2025-11-25 17:21:56.038 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:21:56 np0005535469 nova_compute[254092]: 2025-11-25 17:21:56.040 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:21:56 np0005535469 nova_compute[254092]: 2025-11-25 17:21:56.041 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:21:56 np0005535469 nova_compute[254092]: 2025-11-25 17:21:56.041 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:21:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-92dae0f95f62a01c91f3ac4362ae96f5e6b737730e5bdda49616dcc5476a8033-merged.mount: Deactivated successfully.
Nov 25 12:21:56 np0005535469 podman[418400]: 2025-11-25 17:21:56.083050812 +0000 UTC m=+0.274297507 container remove bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:21:56 np0005535469 systemd[1]: libpod-conmon-bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a.scope: Deactivated successfully.
Nov 25 12:21:56 np0005535469 podman[418440]: 2025-11-25 17:21:56.293421268 +0000 UTC m=+0.061620438 container create 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:21:56 np0005535469 systemd[1]: Started libpod-conmon-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope.
Nov 25 12:21:56 np0005535469 podman[418440]: 2025-11-25 17:21:56.265297972 +0000 UTC m=+0.033497142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:21:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:21:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:21:56 np0005535469 podman[418440]: 2025-11-25 17:21:56.389566766 +0000 UTC m=+0.157765926 container init 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:21:56 np0005535469 podman[418440]: 2025-11-25 17:21:56.398025386 +0000 UTC m=+0.166224546 container start 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:21:56 np0005535469 podman[418440]: 2025-11-25 17:21:56.402703362 +0000 UTC m=+0.170902552 container attach 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:21:56 np0005535469 nova_compute[254092]: 2025-11-25 17:21:56.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]: {
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "osd_id": 1,
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "type": "bluestore"
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:    },
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "osd_id": 2,
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "type": "bluestore"
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:    },
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "osd_id": 0,
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:        "type": "bluestore"
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]:    }
Nov 25 12:21:57 np0005535469 inspiring_knuth[418456]: }
Nov 25 12:21:57 np0005535469 systemd[1]: libpod-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope: Deactivated successfully.
Nov 25 12:21:57 np0005535469 systemd[1]: libpod-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope: Consumed 1.168s CPU time.
Nov 25 12:21:57 np0005535469 conmon[418456]: conmon 7c4d910f0adf015c8ec0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope/container/memory.events
Nov 25 12:21:57 np0005535469 podman[418440]: 2025-11-25 17:21:57.564867686 +0000 UTC m=+1.333066866 container died 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:21:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96-merged.mount: Deactivated successfully.
Nov 25 12:21:57 np0005535469 podman[418440]: 2025-11-25 17:21:57.637925144 +0000 UTC m=+1.406124284 container remove 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:21:57 np0005535469 systemd[1]: libpod-conmon-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope: Deactivated successfully.
Nov 25 12:21:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:21:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:21:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:21:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:21:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d6e90702-0f81-4bd7-8634-dd170be90ab5 does not exist
Nov 25 12:21:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 93360f85-3cd0-4faf-8092-c6dddb29d6b8 does not exist
Nov 25 12:21:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:21:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:21:58 np0005535469 nova_compute[254092]: 2025-11-25 17:21:58.458 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:21:58 np0005535469 nova_compute[254092]: 2025-11-25 17:21:58.474 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:21:58 np0005535469 nova_compute[254092]: 2025-11-25 17:21:58.474 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:21:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:21:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:21:59 np0005535469 nova_compute[254092]: 2025-11-25 17:21:59.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:21:59 np0005535469 nova_compute[254092]: 2025-11-25 17:21:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:21:59 np0005535469 nova_compute[254092]: 2025-11-25 17:21:59.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:01 np0005535469 nova_compute[254092]: 2025-11-25 17:22:01.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:04 np0005535469 nova_compute[254092]: 2025-11-25 17:22:04.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.694 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.694 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.709 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.795 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.796 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.807 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.807 254096 INFO nova.compute.claims [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:22:06 np0005535469 nova_compute[254092]: 2025-11-25 17:22:06.924 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 396 KiB/s wr, 32 op/s
Nov 25 12:22:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:22:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855715510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.490 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.497 254096 DEBUG nova.compute.provider_tree [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.513 254096 DEBUG nova.scheduler.client.report [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.531 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.532 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.594 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.594 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.614 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.626 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.738 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.739 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.740 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Creating image(s)#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.773 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.806 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.838 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.842 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.901 254096 DEBUG nova.policy [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.945 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.946 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.947 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.947 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.970 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:07 np0005535469 nova_compute[254092]: 2025-11-25 17:22:07.974 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.466 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.587 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.707 254096 DEBUG nova.objects.instance [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid fc92e0f7-adfa-4591-bb62-8e875c423b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.726 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.727 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Ensure instance console log exists: /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.728 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.728 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:08 np0005535469 nova_compute[254092]: 2025-11-25 17:22:08.728 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Nov 25 12:22:09 np0005535469 nova_compute[254092]: 2025-11-25 17:22:09.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:09.187 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:22:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:09.191 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:22:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:09.192 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:09 np0005535469 nova_compute[254092]: 2025-11-25 17:22:09.311 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Successfully created port: 6f0f388f-a1e7-4172-912a-ee02487d9833 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:09 np0005535469 nova_compute[254092]: 2025-11-25 17:22:09.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.480004) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329480072, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 534, "num_deletes": 257, "total_data_size": 496584, "memory_usage": 508232, "flush_reason": "Manual Compaction"}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329485806, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 492068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61231, "largest_seqno": 61764, "table_properties": {"data_size": 489104, "index_size": 935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6995, "raw_average_key_size": 18, "raw_value_size": 483074, "raw_average_value_size": 1288, "num_data_blocks": 41, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091299, "oldest_key_time": 1764091299, "file_creation_time": 1764091329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 5840 microseconds, and 3344 cpu microseconds.
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.485855) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 492068 bytes OK
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.485877) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487423) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487443) EVENT_LOG_v1 {"time_micros": 1764091329487436, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 493501, prev total WAL file size 493501, number of live WAL files 2.
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.488097) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353134' seq:72057594037927935, type:22 .. '6C6F676D0032373637' seq:0, type:0; will stop at (end)
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(480KB)], [140(9783KB)]
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329488141, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 10510187, "oldest_snapshot_seqno": -1}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8009 keys, 10392619 bytes, temperature: kUnknown
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329568921, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10392619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10340659, "index_size": 30855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 210773, "raw_average_key_size": 26, "raw_value_size": 10199108, "raw_average_value_size": 1273, "num_data_blocks": 1199, "num_entries": 8009, "num_filter_entries": 8009, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.569235) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10392619 bytes
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.570561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.9 rd, 128.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.5) write-amplify(21.1) OK, records in: 8535, records dropped: 526 output_compression: NoCompression
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.570589) EVENT_LOG_v1 {"time_micros": 1764091329570576, "job": 86, "event": "compaction_finished", "compaction_time_micros": 80893, "compaction_time_cpu_micros": 46704, "output_level": 6, "num_output_files": 1, "total_output_size": 10392619, "num_input_records": 8535, "num_output_records": 8009, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329571131, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329574230, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:22:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.039 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Successfully updated port: 6f0f388f-a1e7-4172-912a-ee02487d9833 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.054 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.055 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.055 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:22:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.136 254096 DEBUG nova.compute.manager [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.137 254096 DEBUG nova.compute.manager [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing instance network info cache due to event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.137 254096 DEBUG oslo_concurrency.lockutils [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:22:10 np0005535469 nova_compute[254092]: 2025-11-25 17:22:10.237 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:22:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:11 np0005535469 podman[418740]: 2025-11-25 17:22:11.668699079 +0000 UTC m=+0.074476448 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:22:11 np0005535469 podman[418741]: 2025-11-25 17:22:11.714809084 +0000 UTC m=+0.117564031 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:22:11 np0005535469 podman[418739]: 2025-11-25 17:22:11.715989817 +0000 UTC m=+0.122536387 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.810 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.825 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.826 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance network_info: |[{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.826 254096 DEBUG oslo_concurrency.lockutils [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.826 254096 DEBUG nova.network.neutron [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.829 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start _get_guest_xml network_info=[{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.834 254096 WARNING nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.843 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.844 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.847 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.848 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.849 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.849 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.850 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.850 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.850 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.852 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.852 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.852 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:22:11 np0005535469 nova_compute[254092]: 2025-11-25 17:22:11.856 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:22:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248647259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.296 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.331 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.337 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:22:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2238586271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.763 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.767 254096 DEBUG nova.virt.libvirt.vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1446051764',display_name='tempest-TestGettingAddress-server-1446051764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1446051764',id=146,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-m99ax5ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:22:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=fc92e0f7-adfa-4591-bb62-8e875c423b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.768 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.770 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.773 254096 DEBUG nova.objects.instance [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc92e0f7-adfa-4591-bb62-8e875c423b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.790 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <uuid>fc92e0f7-adfa-4591-bb62-8e875c423b6e</uuid>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <name>instance-00000092</name>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1446051764</nova:name>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:22:11</nova:creationTime>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <nova:port uuid="6f0f388f-a1e7-4172-912a-ee02487d9833">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe49:5345" ipVersion="6"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe49:5345" ipVersion="6"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <entry name="serial">fc92e0f7-adfa-4591-bb62-8e875c423b6e</entry>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <entry name="uuid">fc92e0f7-adfa-4591-bb62-8e875c423b6e</entry>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:49:53:45"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <target dev="tap6f0f388f-a1"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/console.log" append="off"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:22:12 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:22:12 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:22:12 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:22:12 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.793 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Preparing to wait for external event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.794 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.794 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.795 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.796 254096 DEBUG nova.virt.libvirt.vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1446051764',display_name='tempest-TestGettingAddress-server-1446051764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1446051764',id=146,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-m99ax5ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:22:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=fc92e0f7-adfa-4591-bb62-8e875c423b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.797 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.799 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.800 254096 DEBUG os_vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.802 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.803 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.810 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0f388f-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.811 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f0f388f-a1, col_values=(('external_ids', {'iface-id': '6f0f388f-a1e7-4172-912a-ee02487d9833', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:53:45', 'vm-uuid': 'fc92e0f7-adfa-4591-bb62-8e875c423b6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:12 np0005535469 NetworkManager[48891]: <info>  [1764091332.8154] manager: (tap6f0f388f-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.827 254096 INFO os_vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1')#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.876 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.877 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.877 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:49:53:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.878 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Using config drive#033[00m
Nov 25 12:22:12 np0005535469 nova_compute[254092]: 2025-11-25 17:22:12.911 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.366 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Creating config drive at /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.376 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjm6nzjm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.533 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjm6nzjm" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.570 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.575 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.642 254096 DEBUG nova.network.neutron [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updated VIF entry in instance network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.643 254096 DEBUG nova.network.neutron [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.657 254096 DEBUG oslo_concurrency.lockutils [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.660 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.662 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.754 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.755 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deleting local config drive /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config because it was imported into RBD.#033[00m
Nov 25 12:22:13 np0005535469 kernel: tap6f0f388f-a1: entered promiscuous mode
Nov 25 12:22:13 np0005535469 NetworkManager[48891]: <info>  [1764091333.8439] manager: (tap6f0f388f-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Nov 25 12:22:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:13Z|01573|binding|INFO|Claiming lport 6f0f388f-a1e7-4172-912a-ee02487d9833 for this chassis.
Nov 25 12:22:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:13Z|01574|binding|INFO|6f0f388f-a1e7-4172-912a-ee02487d9833: Claiming fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:13 np0005535469 systemd-udevd[418931]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:13Z|01575|binding|INFO|Setting lport 6f0f388f-a1e7-4172-912a-ee02487d9833 ovn-installed in OVS
Nov 25 12:22:13 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:13Z|01576|binding|INFO|Setting lport 6f0f388f-a1e7-4172-912a-ee02487d9833 up in Southbound
Nov 25 12:22:13 np0005535469 nova_compute[254092]: 2025-11-25 17:22:13.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:13 np0005535469 NetworkManager[48891]: <info>  [1764091333.9190] device (tap6f0f388f-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:22:13 np0005535469 NetworkManager[48891]: <info>  [1764091333.9204] device (tap6f0f388f-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.918 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], port_security=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe49:5345/64 2001:db8::f816:3eff:fe49:5345/64', 'neutron:device_id': 'fc92e0f7-adfa-4591-bb62-8e875c423b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=6f0f388f-a1e7-4172-912a-ee02487d9833) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.921 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 6f0f388f-a1e7-4172-912a-ee02487d9833 in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 bound to our chassis#033[00m
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.923 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252#033[00m
Nov 25 12:22:13 np0005535469 systemd-machined[216343]: New machine qemu-180-instance-00000092.
Nov 25 12:22:13 np0005535469 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Nov 25 12:22:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3814c4cb-c0d1-4d06-ad27-fed1092483e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.004 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e054af91-8d78-4981-9f88-02b0ef08d2eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.009 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6000075d-e0be-491c-ab0c-380f3f7dbb43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.059 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[799a39e3-645f-4dc6-8ac6-0be9e7027a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f96c3dff-b5d4-47d9-804a-e32a73c5b101]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418948, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.109 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38bef246-25d1-4502-9b8a-7c254d564490]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775839, 'tstamp': 775839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418949, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775844, 'tstamp': 775844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418949, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.112 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.119 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84058e12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.120 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84058e12-50, col_values=(('external_ids', {'iface-id': 'f8afc421-e45b-4911-af18-dd32853c6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.123 254096 DEBUG nova.compute.manager [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.124 254096 DEBUG oslo_concurrency.lockutils [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.125 254096 DEBUG oslo_concurrency.lockutils [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.125 254096 DEBUG oslo_concurrency.lockutils [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:14 np0005535469 nova_compute[254092]: 2025-11-25 17:22:14.126 254096 DEBUG nova.compute.manager [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Processing event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:22:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.055 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.056 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091335.0545356, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.057 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Started (Lifecycle Event)#033[00m
Nov 25 12:22:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.064 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.069 254096 INFO nova.virt.libvirt.driver [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance spawned successfully.#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.069 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.083 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.096 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.104 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.104 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.105 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.106 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.106 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.107 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.122 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.122 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091335.0604644, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.123 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.158 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.162 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091335.0635834, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.163 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.197 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.201 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.207 254096 INFO nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 7.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.207 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.218 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.276 254096 INFO nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 8.51 seconds to build instance.#033[00m
Nov 25 12:22:15 np0005535469 nova_compute[254092]: 2025-11-25 17:22:15.292 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.235 254096 DEBUG nova.compute.manager [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.235 254096 DEBUG oslo_concurrency.lockutils [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 DEBUG oslo_concurrency.lockutils [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 DEBUG oslo_concurrency.lockutils [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 DEBUG nova.compute.manager [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] No waiting events found dispatching network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 WARNING nova.compute.manager [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received unexpected event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:22:16 np0005535469 nova_compute[254092]: 2025-11-25 17:22:16.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 12:22:17 np0005535469 nova_compute[254092]: 2025-11-25 17:22:17.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 12:22:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:20 np0005535469 nova_compute[254092]: 2025-11-25 17:22:20.035 254096 DEBUG nova.compute.manager [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:20 np0005535469 nova_compute[254092]: 2025-11-25 17:22:20.036 254096 DEBUG nova.compute.manager [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing instance network info cache due to event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:22:20 np0005535469 nova_compute[254092]: 2025-11-25 17:22:20.036 254096 DEBUG oslo_concurrency.lockutils [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:22:20 np0005535469 nova_compute[254092]: 2025-11-25 17:22:20.037 254096 DEBUG oslo_concurrency.lockutils [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:22:20 np0005535469 nova_compute[254092]: 2025-11-25 17:22:20.038 254096 DEBUG nova.network.neutron [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:22:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:22:21 np0005535469 nova_compute[254092]: 2025-11-25 17:22:21.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:22 np0005535469 nova_compute[254092]: 2025-11-25 17:22:22.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:22:23 np0005535469 nova_compute[254092]: 2025-11-25 17:22:23.499 254096 DEBUG nova.network.neutron [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updated VIF entry in instance network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:22:23 np0005535469 nova_compute[254092]: 2025-11-25 17:22:23.500 254096 DEBUG nova.network.neutron [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:23 np0005535469 nova_compute[254092]: 2025-11-25 17:22:23.518 254096 DEBUG oslo_concurrency.lockutils [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:22:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:22:26 np0005535469 nova_compute[254092]: 2025-11-25 17:22:26.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 25 12:22:27 np0005535469 nova_compute[254092]: 2025-11-25 17:22:27.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:28 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 25 12:22:28 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:28Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:53:45 10.100.0.6
Nov 25 12:22:28 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:28Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:53:45 10.100.0.6
Nov 25 12:22:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 56 op/s
Nov 25 12:22:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 25 12:22:31 np0005535469 nova_compute[254092]: 2025-11-25 17:22:31.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:32 np0005535469 nova_compute[254092]: 2025-11-25 17:22:32.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:36 np0005535469 nova_compute[254092]: 2025-11-25 17:22:36.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:37 np0005535469 nova_compute[254092]: 2025-11-25 17:22:37.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.254 254096 DEBUG nova.compute.manager [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.255 254096 DEBUG nova.compute.manager [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing instance network info cache due to event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.255 254096 DEBUG oslo_concurrency.lockutils [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.256 254096 DEBUG oslo_concurrency.lockutils [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.256 254096 DEBUG nova.network.neutron [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.338 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.338 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.338 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.339 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.339 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.340 254096 INFO nova.compute.manager [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Terminating instance#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.341 254096 DEBUG nova.compute.manager [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:22:38 np0005535469 kernel: tap6f0f388f-a1 (unregistering): left promiscuous mode
Nov 25 12:22:38 np0005535469 NetworkManager[48891]: <info>  [1764091358.4065] device (tap6f0f388f-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:38Z|01577|binding|INFO|Releasing lport 6f0f388f-a1e7-4172-912a-ee02487d9833 from this chassis (sb_readonly=0)
Nov 25 12:22:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:38Z|01578|binding|INFO|Setting lport 6f0f388f-a1e7-4172-912a-ee02487d9833 down in Southbound
Nov 25 12:22:38 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:38Z|01579|binding|INFO|Removing iface tap6f0f388f-a1 ovn-installed in OVS
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.444 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], port_security=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe49:5345/64 2001:db8::f816:3eff:fe49:5345/64', 'neutron:device_id': 'fc92e0f7-adfa-4591-bb62-8e875c423b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=6f0f388f-a1e7-4172-912a-ee02487d9833) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.447 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 6f0f388f-a1e7-4172-912a-ee02487d9833 in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 unbound from our chassis#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.449 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac17a4a3-e66b-4393-bb20-b17293ef4fe1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:38 np0005535469 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 25 12:22:38 np0005535469 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 15.139s CPU time.
Nov 25 12:22:38 np0005535469 systemd-machined[216343]: Machine qemu-180-instance-00000092 terminated.
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[18dd4376-4e16-4844-a217-0a75315416cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.540 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2033b0-712d-4532-9e8f-f4bf03aa19b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.612 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[10577990-a28f-4495-b370-867f55e9eb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.619 254096 INFO nova.virt.libvirt.driver [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance destroyed successfully.#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.620 254096 DEBUG nova.objects.instance [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid fc92e0f7-adfa-4591-bb62-8e875c423b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.629 254096 DEBUG nova.virt.libvirt.vif [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1446051764',display_name='tempest-TestGettingAddress-server-1446051764',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1446051764',id=146,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:22:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-m99ax5ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:22:15Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=fc92e0f7-adfa-4591-bb62-8e875c423b6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.629 254096 DEBUG nova.network.os_vif_util [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.630 254096 DEBUG nova.network.os_vif_util [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.631 254096 DEBUG os_vif [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.633 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0f388f-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.639 254096 INFO os_vif [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1')#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.640 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cf405a-d221-4ea9-ae10-314ab19af596]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419014, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.669 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b081e38d-65cc-4957-a056-d0f9bfadf150]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775839, 'tstamp': 775839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419024, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775844, 'tstamp': 775844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419024, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.675 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 nova_compute[254092]: 2025-11-25 17:22:38.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.682 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84058e12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84058e12-50, col_values=(('external_ids', {'iface-id': 'f8afc421-e45b-4911-af18-dd32853c6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.684 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.010 254096 INFO nova.virt.libvirt.driver [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deleting instance files /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e_del#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.011 254096 INFO nova.virt.libvirt.driver [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deletion of /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e_del complete#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.086 254096 INFO nova.compute.manager [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.086 254096 DEBUG oslo.service.loopingcall [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.086 254096 DEBUG nova.compute.manager [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.087 254096 DEBUG nova.network.neutron [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:22:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.249 254096 DEBUG nova.compute.manager [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-unplugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.249 254096 DEBUG oslo_concurrency.lockutils [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG oslo_concurrency.lockutils [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG oslo_concurrency.lockutils [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG nova.compute.manager [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] No waiting events found dispatching network-vif-unplugged-6f0f388f-a1e7-4172-912a-ee02487d9833 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:22:39 np0005535469 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG nova.compute.manager [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-unplugged-6f0f388f-a1e7-4172-912a-ee02487d9833 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:22:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:22:40
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr']
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.261 254096 DEBUG nova.network.neutron [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.283 254096 INFO nova.compute.manager [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 1.20 seconds to deallocate network for instance.#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.319 254096 DEBUG nova.compute.manager [req-5f08b3f8-2f17-4ea1-9b97-bb25aa34c6a7 req-cef8b54e-00a4-40c5-b5ce-78ad0d3fcf36 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-deleted-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.347 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.348 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.417 254096 DEBUG oslo_concurrency.processutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:22:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:22:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:22:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454553442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.950 254096 DEBUG oslo_concurrency.processutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.958 254096 DEBUG nova.compute.provider_tree [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:22:40 np0005535469 nova_compute[254092]: 2025-11-25 17:22:40.976 254096 DEBUG nova.scheduler.client.report [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.005 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.043 254096 INFO nova.scheduler.client.report [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance fc92e0f7-adfa-4591-bb62-8e875c423b6e#033[00m
Nov 25 12:22:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.131 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.454 254096 DEBUG nova.compute.manager [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.455 254096 DEBUG oslo_concurrency.lockutils [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.456 254096 DEBUG oslo_concurrency.lockutils [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.457 254096 DEBUG oslo_concurrency.lockutils [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.457 254096 DEBUG nova.compute.manager [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] No waiting events found dispatching network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.457 254096 WARNING nova.compute.manager [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received unexpected event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.531 254096 DEBUG nova.network.neutron [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updated VIF entry in instance network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.532 254096 DEBUG nova.network.neutron [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.549 254096 DEBUG oslo_concurrency.lockutils [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:22:41 np0005535469 nova_compute[254092]: 2025-11-25 17:22:41.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:42 np0005535469 nova_compute[254092]: 2025-11-25 17:22:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:42 np0005535469 podman[419060]: 2025-11-25 17:22:42.662272448 +0000 UTC m=+0.075079723 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:22:42 np0005535469 podman[419061]: 2025-11-25 17:22:42.667802379 +0000 UTC m=+0.073303126 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:22:42 np0005535469 podman[419062]: 2025-11-25 17:22:42.729008885 +0000 UTC m=+0.129670960 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:22:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.401 254096 DEBUG nova.compute.manager [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.401 254096 DEBUG nova.compute.manager [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing instance network info cache due to event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.401 254096 DEBUG oslo_concurrency.lockutils [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.402 254096 DEBUG oslo_concurrency.lockutils [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.402 254096 DEBUG nova.network.neutron [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.513 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.513 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.514 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.514 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.514 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.516 254096 INFO nova.compute.manager [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Terminating instance#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.517 254096 DEBUG nova.compute.manager [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:22:43 np0005535469 kernel: tapc1ba1b56-3c (unregistering): left promiscuous mode
Nov 25 12:22:43 np0005535469 NetworkManager[48891]: <info>  [1764091363.5639] device (tapc1ba1b56-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:22:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:43Z|01580|binding|INFO|Releasing lport c1ba1b56-3c61-42fa-b23d-44349357a11a from this chassis (sb_readonly=0)
Nov 25 12:22:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:43Z|01581|binding|INFO|Setting lport c1ba1b56-3c61-42fa-b23d-44349357a11a down in Southbound
Nov 25 12:22:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:22:43Z|01582|binding|INFO|Removing iface tapc1ba1b56-3c ovn-installed in OVS
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.582 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], port_security=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe77:8aa5/64 2001:db8::f816:3eff:fe77:8aa5/64', 'neutron:device_id': '36b839e5-d6db-406a-ab95-bbdcd48c531d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c1ba1b56-3c61-42fa-b23d-44349357a11a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.583 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c1ba1b56-3c61-42fa-b23d-44349357a11a in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 unbound from our chassis#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.584 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fbefbc2d-83b6-4d3a-8833-5dda31a98c55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.586 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 namespace which is not needed anymore#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 25 12:22:43 np0005535469 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 16.485s CPU time.
Nov 25 12:22:43 np0005535469 systemd-machined[216343]: Machine qemu-179-instance-00000091 terminated.
Nov 25 12:22:43 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : haproxy version is 2.8.14-c23fe91
Nov 25 12:22:43 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : path to executable is /usr/sbin/haproxy
Nov 25 12:22:43 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [WARNING]  (417594) : Exiting Master process...
Nov 25 12:22:43 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [WARNING]  (417594) : Exiting Master process...
Nov 25 12:22:43 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [ALERT]    (417594) : Current worker (417604) exited with code 143 (Terminated)
Nov 25 12:22:43 np0005535469 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [WARNING]  (417594) : All workers exited. Exiting... (0)
Nov 25 12:22:43 np0005535469 systemd[1]: libpod-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55.scope: Deactivated successfully.
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.762 254096 INFO nova.virt.libvirt.driver [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance destroyed successfully.#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.763 254096 DEBUG nova.objects.instance [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:22:43 np0005535469 podman[419143]: 2025-11-25 17:22:43.763517614 +0000 UTC m=+0.064066585 container died 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.774 254096 DEBUG nova.virt.libvirt.vif [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:21:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1075663045',display_name='tempest-TestGettingAddress-server-1075663045',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1075663045',id=145,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:21:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-0xnzfd4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:21:41Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=36b839e5-d6db-406a-ab95-bbdcd48c531d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.774 254096 DEBUG nova.network.os_vif_util [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.775 254096 DEBUG nova.network.os_vif_util [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.775 254096 DEBUG os_vif [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.779 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ba1b56-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.784 254096 INFO os_vif [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c')#033[00m
Nov 25 12:22:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55-userdata-shm.mount: Deactivated successfully.
Nov 25 12:22:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b8a1850d0a22111fadf1ed03693f6973e2c59b4c9475f9c71c2ca4d4e5534c52-merged.mount: Deactivated successfully.
Nov 25 12:22:43 np0005535469 podman[419143]: 2025-11-25 17:22:43.824163445 +0000 UTC m=+0.124712426 container cleanup 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:22:43 np0005535469 systemd[1]: libpod-conmon-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55.scope: Deactivated successfully.
Nov 25 12:22:43 np0005535469 podman[419200]: 2025-11-25 17:22:43.897941413 +0000 UTC m=+0.049183210 container remove 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.908 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[19e39e3b-f5c4-4c54-996d-58e581c43f86]: (4, ('Tue Nov 25 05:22:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 (7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55)\n7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55\nTue Nov 25 05:22:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 (7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55)\n7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.910 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[293b18f3-3e8f-49f6-8dd3-3a351f00c6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.911 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 kernel: tap84058e12-50: left promiscuous mode
Nov 25 12:22:43 np0005535469 nova_compute[254092]: 2025-11-25 17:22:43.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.938 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d1dd40-6b18-4d2d-a412-343fc2239fb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.954 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34d57346-52a3-4f5c-ac0c-5d2def994ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e14acd-f62d-4717-b9bf-aeebad79fcd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10a8e00b-b8f9-4575-96bb-06f12e0116ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775817, 'reachable_time': 36428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419216, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:43 np0005535469 systemd[1]: run-netns-ovnmeta\x2d84058e12\x2d5c2c\x2d4ee6\x2da8bb\x2d052eff4cc252.mount: Deactivated successfully.
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.983 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:22:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.983 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[26edf965-37b5-441c-abf0-4d4c27abe2a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.160 254096 INFO nova.virt.libvirt.driver [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deleting instance files /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d_del#033[00m
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.161 254096 INFO nova.virt.libvirt.driver [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deletion of /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d_del complete#033[00m
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.217 254096 INFO nova.compute.manager [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.218 254096 DEBUG oslo.service.loopingcall [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.218 254096 DEBUG nova.compute.manager [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.218 254096 DEBUG nova.network.neutron [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:22:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:44 np0005535469 nova_compute[254092]: 2025-11-25 17:22:44.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.073 254096 DEBUG nova.network.neutron [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.087 254096 INFO nova.compute.manager [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 0.87 seconds to deallocate network for instance.#033[00m
Nov 25 12:22:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 58 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 42 op/s
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.151 254096 DEBUG nova.network.neutron [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated VIF entry in instance network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.151 254096 DEBUG nova.network.neutron [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.168 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.169 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.184 254096 DEBUG oslo_concurrency.lockutils [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.224 254096 DEBUG oslo_concurrency.processutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.478 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-unplugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] No waiting events found dispatching network-vif-unplugged-c1ba1b56-3c61-42fa-b23d-44349357a11a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 WARNING nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received unexpected event network-vif-unplugged-c1ba1b56-3c61-42fa-b23d-44349357a11a for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] No waiting events found dispatching network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 WARNING nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received unexpected event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.481 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-deleted-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.481 254096 INFO nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Neutron deleted interface c1ba1b56-3c61-42fa-b23d-44349357a11a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.481 254096 DEBUG nova.network.neutron [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.504 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Detach interface failed, port_id=c1ba1b56-3c61-42fa-b23d-44349357a11a, reason: Instance 36b839e5-d6db-406a-ab95-bbdcd48c531d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 12:22:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:22:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033778206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.691 254096 DEBUG oslo_concurrency.processutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.697 254096 DEBUG nova.compute.provider_tree [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.710 254096 DEBUG nova.scheduler.client.report [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.725 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.749 254096 INFO nova.scheduler.client.report [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 36b839e5-d6db-406a-ab95-bbdcd48c531d#033[00m
Nov 25 12:22:45 np0005535469 nova_compute[254092]: 2025-11-25 17:22:45.794 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:22:46 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942597773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:22:46 np0005535469 nova_compute[254092]: 2025-11-25 17:22:46.972 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.5 KiB/s wr, 56 op/s
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.158 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.159 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3632MB free_disk=59.9774169921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.159 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.159 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.219 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.220 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.253 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:22:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:22:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453444774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.710 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.727 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.746 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:22:47 np0005535469 nova_compute[254092]: 2025-11-25 17:22:47.747 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:22:48 np0005535469 nova_compute[254092]: 2025-11-25 17:22:48.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Nov 25 12:22:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:50 np0005535469 nova_compute[254092]: 2025-11-25 17:22:50.742 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:50 np0005535469 nova_compute[254092]: 2025-11-25 17:22:50.743 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:50 np0005535469 nova_compute[254092]: 2025-11-25 17:22:50.744 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Nov 25 12:22:51 np0005535469 nova_compute[254092]: 2025-11-25 17:22:51.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:22:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:22:52 np0005535469 nova_compute[254092]: 2025-11-25 17:22:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:52 np0005535469 nova_compute[254092]: 2025-11-25 17:22:52.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:52 np0005535469 nova_compute[254092]: 2025-11-25 17:22:52.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 25 12:22:53 np0005535469 nova_compute[254092]: 2025-11-25 17:22:53.619 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091358.6175413, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:22:53 np0005535469 nova_compute[254092]: 2025-11-25 17:22:53.619 254096 INFO nova.compute.manager [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:22:53 np0005535469 nova_compute[254092]: 2025-11-25 17:22:53.642 254096 DEBUG nova.compute.manager [None req-456ad93e-5af9-4513-982b-76fb0faafd8c - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:22:53 np0005535469 nova_compute[254092]: 2025-11-25 17:22:53.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 25 12:22:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:22:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/138076452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:22:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:22:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/138076452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:22:55 np0005535469 nova_compute[254092]: 2025-11-25 17:22:55.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:55 np0005535469 nova_compute[254092]: 2025-11-25 17:22:55.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:22:55 np0005535469 nova_compute[254092]: 2025-11-25 17:22:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:22:55 np0005535469 nova_compute[254092]: 2025-11-25 17:22:55.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:22:56 np0005535469 nova_compute[254092]: 2025-11-25 17:22:56.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Nov 25 12:22:58 np0005535469 nova_compute[254092]: 2025-11-25 17:22:58.761 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091363.759886, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:22:58 np0005535469 nova_compute[254092]: 2025-11-25 17:22:58.762 254096 INFO nova.compute.manager [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:22:58 np0005535469 nova_compute[254092]: 2025-11-25 17:22:58.804 254096 DEBUG nova.compute.manager [None req-60e15fcd-4cfd-43f5-9e91-37c8dc169ac4 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:22:58 np0005535469 nova_compute[254092]: 2025-11-25 17:22:58.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:22:58 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d468b05a-32c1-4df1-b31b-7a4b431c2714 does not exist
Nov 25 12:22:58 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2ffc52ef-405e-4642-88b9-80b53e2f2c48 does not exist
Nov 25 12:22:58 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a0fce6ca-9192-4f27-ad55-4ab4928ad93e does not exist
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:22:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:22:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:22:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:22:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:22:59 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:22:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:22:59 np0005535469 nova_compute[254092]: 2025-11-25 17:22:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:22:59 np0005535469 podman[419558]: 2025-11-25 17:22:59.869837246 +0000 UTC m=+0.058718479 container create c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:22:59 np0005535469 podman[419558]: 2025-11-25 17:22:59.840034914 +0000 UTC m=+0.028916187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:22:59 np0005535469 systemd[1]: Started libpod-conmon-c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167.scope.
Nov 25 12:22:59 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:00 np0005535469 podman[419558]: 2025-11-25 17:23:00.013114986 +0000 UTC m=+0.201996209 container init c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:23:00 np0005535469 podman[419558]: 2025-11-25 17:23:00.025368469 +0000 UTC m=+0.214249702 container start c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:23:00 np0005535469 podman[419558]: 2025-11-25 17:23:00.030128459 +0000 UTC m=+0.219009692 container attach c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:23:00 np0005535469 competent_stonebraker[419574]: 167 167
Nov 25 12:23:00 np0005535469 systemd[1]: libpod-c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167.scope: Deactivated successfully.
Nov 25 12:23:00 np0005535469 podman[419558]: 2025-11-25 17:23:00.039069502 +0000 UTC m=+0.227950735 container died c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:23:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9c2759a5c1cfc5e91f6e0185399c9381ef5dd0155efc1116c327aa7e42452283-merged.mount: Deactivated successfully.
Nov 25 12:23:00 np0005535469 podman[419558]: 2025-11-25 17:23:00.107404163 +0000 UTC m=+0.296285396 container remove c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:23:00 np0005535469 systemd[1]: libpod-conmon-c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167.scope: Deactivated successfully.
Nov 25 12:23:00 np0005535469 podman[419597]: 2025-11-25 17:23:00.298569056 +0000 UTC m=+0.056208381 container create 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:23:00 np0005535469 systemd[1]: Started libpod-conmon-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope.
Nov 25 12:23:00 np0005535469 podman[419597]: 2025-11-25 17:23:00.271897959 +0000 UTC m=+0.029537354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:23:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:00 np0005535469 podman[419597]: 2025-11-25 17:23:00.434850226 +0000 UTC m=+0.192489641 container init 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:23:00 np0005535469 podman[419597]: 2025-11-25 17:23:00.448776795 +0000 UTC m=+0.206416150 container start 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:23:00 np0005535469 podman[419597]: 2025-11-25 17:23:00.453108482 +0000 UTC m=+0.210747837 container attach 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:23:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:01 np0005535469 sad_tu[419614]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:23:01 np0005535469 sad_tu[419614]: --> relative data size: 1.0
Nov 25 12:23:01 np0005535469 sad_tu[419614]: --> All data devices are unavailable
Nov 25 12:23:01 np0005535469 nova_compute[254092]: 2025-11-25 17:23:01.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:01 np0005535469 systemd[1]: libpod-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope: Deactivated successfully.
Nov 25 12:23:01 np0005535469 systemd[1]: libpod-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope: Consumed 1.147s CPU time.
Nov 25 12:23:01 np0005535469 podman[419597]: 2025-11-25 17:23:01.639747362 +0000 UTC m=+1.397386707 container died 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:23:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d-merged.mount: Deactivated successfully.
Nov 25 12:23:01 np0005535469 podman[419597]: 2025-11-25 17:23:01.70947077 +0000 UTC m=+1.467110075 container remove 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:23:01 np0005535469 systemd[1]: libpod-conmon-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope: Deactivated successfully.
Nov 25 12:23:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.273 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2 2001:db8::f816:3eff:fec7:79ce'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec7:79ce/64', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=63e3c701-69aa-46d9-a2c2-91ded4242c02) old=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:23:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.275 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 63e3c701-69aa-46d9-a2c2-91ded4242c02 in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 updated#033[00m
Nov 25 12:23:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.277 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:23:02 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.279 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8d7384-6e7a-481a-aea6-f9a0c5f82e7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.597389838 +0000 UTC m=+0.037985755 container create 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:23:02 np0005535469 systemd[1]: Started libpod-conmon-76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2.scope.
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.582942404 +0000 UTC m=+0.023538351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:23:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.696609949 +0000 UTC m=+0.137205916 container init 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.708339738 +0000 UTC m=+0.148935695 container start 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.712240434 +0000 UTC m=+0.152836441 container attach 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:23:02 np0005535469 agitated_heisenberg[419812]: 167 167
Nov 25 12:23:02 np0005535469 systemd[1]: libpod-76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2.scope: Deactivated successfully.
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.715349909 +0000 UTC m=+0.155945876 container died 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:23:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-69e273e3d311f4a64250df1c40ee2ee2730fa19de282e83393c1a039b412c0ee-merged.mount: Deactivated successfully.
Nov 25 12:23:02 np0005535469 podman[419796]: 2025-11-25 17:23:02.76976886 +0000 UTC m=+0.210364827 container remove 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 12:23:02 np0005535469 systemd[1]: libpod-conmon-76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2.scope: Deactivated successfully.
Nov 25 12:23:02 np0005535469 podman[419838]: 2025-11-25 17:23:02.984285739 +0000 UTC m=+0.058012760 container create 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:23:03 np0005535469 systemd[1]: Started libpod-conmon-8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7.scope.
Nov 25 12:23:03 np0005535469 podman[419838]: 2025-11-25 17:23:02.955190977 +0000 UTC m=+0.028918058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:23:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:03 np0005535469 podman[419838]: 2025-11-25 17:23:03.094209751 +0000 UTC m=+0.167936832 container init 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:23:03 np0005535469 podman[419838]: 2025-11-25 17:23:03.106257709 +0000 UTC m=+0.179984740 container start 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:23:03 np0005535469 podman[419838]: 2025-11-25 17:23:03.110523035 +0000 UTC m=+0.184250106 container attach 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:23:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:03 np0005535469 nova_compute[254092]: 2025-11-25 17:23:03.833 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]: {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:    "0": [
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:        {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "devices": [
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "/dev/loop3"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            ],
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_name": "ceph_lv0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_size": "21470642176",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "name": "ceph_lv0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "tags": {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cluster_name": "ceph",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.crush_device_class": "",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.encrypted": "0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osd_id": "0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.type": "block",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.vdo": "0"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            },
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "type": "block",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "vg_name": "ceph_vg0"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:        }
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:    ],
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:    "1": [
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:        {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "devices": [
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "/dev/loop4"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            ],
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_name": "ceph_lv1",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_size": "21470642176",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "name": "ceph_lv1",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "tags": {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cluster_name": "ceph",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.crush_device_class": "",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.encrypted": "0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osd_id": "1",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.type": "block",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.vdo": "0"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            },
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "type": "block",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "vg_name": "ceph_vg1"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:        }
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:    ],
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:    "2": [
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:        {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "devices": [
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "/dev/loop5"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            ],
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_name": "ceph_lv2",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_size": "21470642176",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "name": "ceph_lv2",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "tags": {
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.cluster_name": "ceph",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.crush_device_class": "",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.encrypted": "0",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osd_id": "2",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.type": "block",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:                "ceph.vdo": "0"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            },
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "type": "block",
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:            "vg_name": "ceph_vg2"
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:        }
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]:    ]
Nov 25 12:23:03 np0005535469 quizzical_kepler[419854]: }
Nov 25 12:23:03 np0005535469 systemd[1]: libpod-8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7.scope: Deactivated successfully.
Nov 25 12:23:03 np0005535469 podman[419838]: 2025-11-25 17:23:03.989602163 +0000 UTC m=+1.063329194 container died 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:23:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9-merged.mount: Deactivated successfully.
Nov 25 12:23:04 np0005535469 podman[419838]: 2025-11-25 17:23:04.087411426 +0000 UTC m=+1.161138457 container remove 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:23:04 np0005535469 systemd[1]: libpod-conmon-8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7.scope: Deactivated successfully.
Nov 25 12:23:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:04 np0005535469 podman[420013]: 2025-11-25 17:23:04.886154477 +0000 UTC m=+0.054302080 container create 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 12:23:04 np0005535469 systemd[1]: Started libpod-conmon-86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55.scope.
Nov 25 12:23:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:04 np0005535469 podman[420013]: 2025-11-25 17:23:04.859459939 +0000 UTC m=+0.027607602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:23:04 np0005535469 podman[420013]: 2025-11-25 17:23:04.971188371 +0000 UTC m=+0.139336024 container init 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 25 12:23:04 np0005535469 podman[420013]: 2025-11-25 17:23:04.978944522 +0000 UTC m=+0.147092135 container start 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 12:23:04 np0005535469 podman[420013]: 2025-11-25 17:23:04.98329081 +0000 UTC m=+0.151438573 container attach 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:23:04 np0005535469 adoring_ishizaka[420030]: 167 167
Nov 25 12:23:04 np0005535469 systemd[1]: libpod-86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55.scope: Deactivated successfully.
Nov 25 12:23:04 np0005535469 podman[420013]: 2025-11-25 17:23:04.987892846 +0000 UTC m=+0.156040429 container died 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:23:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-94780c40ff144a02e697b24b3db591ab700c51ee95c3029f069bb61886eb768e-merged.mount: Deactivated successfully.
Nov 25 12:23:05 np0005535469 podman[420013]: 2025-11-25 17:23:05.036693113 +0000 UTC m=+0.204840686 container remove 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:23:05 np0005535469 systemd[1]: libpod-conmon-86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55.scope: Deactivated successfully.
Nov 25 12:23:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:05 np0005535469 podman[420054]: 2025-11-25 17:23:05.242336411 +0000 UTC m=+0.071655711 container create 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:23:05 np0005535469 systemd[1]: Started libpod-conmon-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope.
Nov 25 12:23:05 np0005535469 podman[420054]: 2025-11-25 17:23:05.21437232 +0000 UTC m=+0.043691670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:23:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:05 np0005535469 podman[420054]: 2025-11-25 17:23:05.351321058 +0000 UTC m=+0.180640408 container init 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:23:05 np0005535469 podman[420054]: 2025-11-25 17:23:05.373932543 +0000 UTC m=+0.203251853 container start 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:23:05 np0005535469 podman[420054]: 2025-11-25 17:23:05.378314512 +0000 UTC m=+0.207633822 container attach 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]: {
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "osd_id": 1,
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "type": "bluestore"
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:    },
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "osd_id": 2,
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "type": "bluestore"
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:    },
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "osd_id": 0,
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:        "type": "bluestore"
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]:    }
Nov 25 12:23:06 np0005535469 vigilant_nightingale[420071]: }
Nov 25 12:23:06 np0005535469 systemd[1]: libpod-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope: Deactivated successfully.
Nov 25 12:23:06 np0005535469 podman[420054]: 2025-11-25 17:23:06.494367271 +0000 UTC m=+1.323686581 container died 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:23:06 np0005535469 systemd[1]: libpod-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope: Consumed 1.129s CPU time.
Nov 25 12:23:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084-merged.mount: Deactivated successfully.
Nov 25 12:23:06 np0005535469 podman[420054]: 2025-11-25 17:23:06.563605015 +0000 UTC m=+1.392924285 container remove 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:23:06 np0005535469 systemd[1]: libpod-conmon-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope: Deactivated successfully.
Nov 25 12:23:06 np0005535469 nova_compute[254092]: 2025-11-25 17:23:06.589 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:23:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:23:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:23:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:23:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7780156a-c6c8-4537-a679-7f9422bc6bde does not exist
Nov 25 12:23:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b6721c70-3c10-44e5-bee8-64e9c4023f91 does not exist
Nov 25 12:23:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:23:07 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:23:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.804 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2 2001:db8:0:1:f816:3eff:fec7:79ce 2001:db8::f816:3eff:fec7:79ce'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fec7:79ce/64 2001:db8::f816:3eff:fec7:79ce/64', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=63e3c701-69aa-46d9-a2c2-91ded4242c02) old=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2 2001:db8::f816:3eff:fec7:79ce'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec7:79ce/64', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:23:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.806 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 63e3c701-69aa-46d9-a2c2-91ded4242c02 in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 updated#033[00m
Nov 25 12:23:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.807 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:23:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d303963-aaa6-4156-a409-c1f731d42c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:08 np0005535469 nova_compute[254092]: 2025-11-25 17:23:08.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:23:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:23:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:11 np0005535469 nova_compute[254092]: 2025-11-25 17:23:11.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:13.660 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:13 np0005535469 podman[420165]: 2025-11-25 17:23:13.665180603 +0000 UTC m=+0.075612619 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:23:13 np0005535469 podman[420166]: 2025-11-25 17:23:13.69774959 +0000 UTC m=+0.097728121 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:23:13 np0005535469 podman[420167]: 2025-11-25 17:23:13.766855081 +0000 UTC m=+0.162067082 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:23:13 np0005535469 nova_compute[254092]: 2025-11-25 17:23:13.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.661 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.661 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.684 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.790 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.791 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.804 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.805 254096 INFO nova.compute.claims [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:23:14 np0005535469 nova_compute[254092]: 2025-11-25 17:23:14.923 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:23:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:23:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113816730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.429 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.436 254096 DEBUG nova.compute.provider_tree [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.451 254096 DEBUG nova.scheduler.client.report [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.483 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.484 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.535 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.535 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.555 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.577 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.670 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.673 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.673 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Creating image(s)#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.707 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.733 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.761 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.766 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.848 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.850 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.851 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.851 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.876 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:15 np0005535469 nova_compute[254092]: 2025-11-25 17:23:15.881 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 287c21fa-3b34-448c-ba84-c777124fbb3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.195 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 287c21fa-3b34-448c-ba84-c777124fbb3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.246 254096 DEBUG nova.policy [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.295 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.414 254096 DEBUG nova.objects.instance [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.544 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.545 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Ensure instance console log exists: /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.546 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.547 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.548 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:16 np0005535469 nova_compute[254092]: 2025-11-25 17:23:16.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 12:23:18 np0005535469 nova_compute[254092]: 2025-11-25 17:23:18.364 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Successfully created port: e7f2fe0c-53c0-4be7-8362-47c92514001c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:23:18 np0005535469 nova_compute[254092]: 2025-11-25 17:23:18.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 12:23:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:19 np0005535469 nova_compute[254092]: 2025-11-25 17:23:19.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:19.738 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:23:19 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:19.739 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:23:19 np0005535469 nova_compute[254092]: 2025-11-25 17:23:19.895 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Successfully updated port: e7f2fe0c-53c0-4be7-8362-47c92514001c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:23:20 np0005535469 nova_compute[254092]: 2025-11-25 17:23:20.005 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:23:20 np0005535469 nova_compute[254092]: 2025-11-25 17:23:20.005 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:23:20 np0005535469 nova_compute[254092]: 2025-11-25 17:23:20.005 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:23:21 np0005535469 nova_compute[254092]: 2025-11-25 17:23:21.070 254096 DEBUG nova.compute.manager [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:23:21 np0005535469 nova_compute[254092]: 2025-11-25 17:23:21.070 254096 DEBUG nova.compute.manager [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing instance network info cache due to event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:23:21 np0005535469 nova_compute[254092]: 2025-11-25 17:23:21.070 254096 DEBUG oslo_concurrency.lockutils [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:23:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:23:21 np0005535469 nova_compute[254092]: 2025-11-25 17:23:21.247 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:23:21 np0005535469 nova_compute[254092]: 2025-11-25 17:23:21.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:21 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:21.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:23:23 np0005535469 nova_compute[254092]: 2025-11-25 17:23:23.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.252 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.280 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.281 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance network_info: |[{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.283 254096 DEBUG oslo_concurrency.lockutils [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.283 254096 DEBUG nova.network.neutron [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.293 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start _get_guest_xml network_info=[{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.304 254096 WARNING nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.312 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.313 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.325 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.326 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.327 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.328 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.329 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.330 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.331 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.331 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.332 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.332 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.333 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.334 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.334 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.335 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.343 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:23:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1492400474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.813 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.845 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:24 np0005535469 nova_compute[254092]: 2025-11-25 17:23:24.851 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:23:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:23:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306123827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.350 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.354 254096 DEBUG nova.virt.libvirt.vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628099618',display_name='tempest-TestGettingAddress-server-1628099618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628099618',id=147,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-7s8020gn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:15Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=287c21fa-3b34-448c-ba84-c777124fbb3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.354 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.356 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.358 254096 DEBUG nova.objects.instance [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.374 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <uuid>287c21fa-3b34-448c-ba84-c777124fbb3d</uuid>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <name>instance-00000093</name>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1628099618</nova:name>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:23:24</nova:creationTime>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <nova:port uuid="e7f2fe0c-53c0-4be7-8362-47c92514001c">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe41:f7b1" ipVersion="6"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe41:f7b1" ipVersion="6"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <entry name="serial">287c21fa-3b34-448c-ba84-c777124fbb3d</entry>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <entry name="uuid">287c21fa-3b34-448c-ba84-c777124fbb3d</entry>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/287c21fa-3b34-448c-ba84-c777124fbb3d_disk">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:41:f7:b1"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <target dev="tape7f2fe0c-53"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/console.log" append="off"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:23:25 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:23:25 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:23:25 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:23:25 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.375 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Preparing to wait for external event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.377 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.377 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.377 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.378 254096 DEBUG nova.virt.libvirt.vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628099618',display_name='tempest-TestGettingAddress-server-1628099618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628099618',id=147,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-7s8020gn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:15Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=287c21fa-3b34-448c-ba84-c777124fbb3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.378 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.379 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.379 254096 DEBUG os_vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.380 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.381 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.388 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7f2fe0c-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.389 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7f2fe0c-53, col_values=(('external_ids', {'iface-id': 'e7f2fe0c-53c0-4be7-8362-47c92514001c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:f7:b1', 'vm-uuid': '287c21fa-3b34-448c-ba84-c777124fbb3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:23:25 np0005535469 NetworkManager[48891]: <info>  [1764091405.3923] manager: (tape7f2fe0c-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.405 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.406 254096 INFO os_vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53')#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.468 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.468 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.469 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:41:f7:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.470 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Using config drive#033[00m
Nov 25 12:23:25 np0005535469 nova_compute[254092]: 2025-11-25 17:23:25.509 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.293 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Creating config drive at /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.297 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvles7kle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.454 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvles7kle" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.490 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.495 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.633 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.689 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.690 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deleting local config drive /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config because it was imported into RBD.#033[00m
Nov 25 12:23:26 np0005535469 kernel: tape7f2fe0c-53: entered promiscuous mode
Nov 25 12:23:26 np0005535469 NetworkManager[48891]: <info>  [1764091406.7787] manager: (tape7f2fe0c-53): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:26Z|01583|binding|INFO|Claiming lport e7f2fe0c-53c0-4be7-8362-47c92514001c for this chassis.
Nov 25 12:23:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:26Z|01584|binding|INFO|e7f2fe0c-53c0-4be7-8362-47c92514001c: Claiming fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.798 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], port_security=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe41:f7b1/64 2001:db8::f816:3eff:fe41:f7b1/64', 'neutron:device_id': '287c21fa-3b34-448c-ba84-c777124fbb3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7f2fe0c-53c0-4be7-8362-47c92514001c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.799 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7f2fe0c-53c0-4be7-8362-47c92514001c in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 bound to our chassis#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.800 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.815 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9646b86-2a64-4342-8aa3-28897948075b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.816 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap285f996f-d1 in ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.819 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap285f996f-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.819 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ce39c3-c30e-4436-8f2e-1266f7dedb2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.820 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d39c576-9644-4d1c-81b8-1d8bdb617dd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 systemd-machined[216343]: New machine qemu-181-instance-00000093.
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.837 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ebe701-4c11-4b90-903a-35ee46409343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Nov 25 12:23:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:26Z|01585|binding|INFO|Setting lport e7f2fe0c-53c0-4be7-8362-47c92514001c ovn-installed in OVS
Nov 25 12:23:26 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:26Z|01586|binding|INFO|Setting lport e7f2fe0c-53c0-4be7-8362-47c92514001c up in Southbound
Nov 25 12:23:26 np0005535469 nova_compute[254092]: 2025-11-25 17:23:26.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:26 np0005535469 systemd-udevd[420555]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.872 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a755382-3028-4857-af96-00552e08cbf6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 NetworkManager[48891]: <info>  [1764091406.8844] device (tape7f2fe0c-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:23:26 np0005535469 NetworkManager[48891]: <info>  [1764091406.8870] device (tape7f2fe0c-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.915 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0a28a28d-fca0-4e19-b110-fa0439fab627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc771a7-9d12-49ec-a817-f4323ed8a1be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 NetworkManager[48891]: <info>  [1764091406.9249] manager: (tap285f996f-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.966 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e043c45-bb00-4f05-910b-b6252301d40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:26 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.974 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[548b555e-6202-41a0-82db-7759ee1eb24d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 NetworkManager[48891]: <info>  [1764091407.0039] device (tap285f996f-d0): carrier: link connected
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.012 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0e291a0e-e7c3-493a-b05f-40d49b2b3f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.038 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a25e878-2e18-435a-94e9-d263e1b3c30b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420587, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.059 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26879668-2d60-43e6-9c3c-7b8b1afa095e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:79ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786459, 'tstamp': 786459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420588, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.087 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b76e746-2a56-42da-8802-db0668fbe11f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420589, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf823cf3-22a9-4613-a778-2686e5cab2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b797d6a-63c4-43d0-ba7f-6db003a09f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.246 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.246 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.247 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap285f996f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:27 np0005535469 kernel: tap285f996f-d0: entered promiscuous mode
Nov 25 12:23:27 np0005535469 NetworkManager[48891]: <info>  [1764091407.2516] manager: (tap285f996f-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.251 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.255 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap285f996f-d0, col_values=(('external_ids', {'iface-id': '63e3c701-69aa-46d9-a2c2-91ded4242c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:27 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:27Z|01587|binding|INFO|Releasing lport 63e3c701-69aa-46d9-a2c2-91ded4242c02 from this chassis (sb_readonly=0)
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.260 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/285f996f-d0be-4c9e-9d0c-b8d730990ab6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/285f996f-d0be-4c9e-9d0c-b8d730990ab6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.262 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f98a2763-20f6-4f32-9fa8-680439e02085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.263 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/285f996f-d0be-4c9e-9d0c-b8d730990ab6.pid.haproxy
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:23:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.264 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'env', 'PROCESS_TAG=haproxy-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/285f996f-d0be-4c9e-9d0c-b8d730990ab6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.413 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091407.4130409, 287c21fa-3b34-448c-ba84-c777124fbb3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.414 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Started (Lifecycle Event)#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.442 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.448 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091407.4162164, 287c21fa-3b34-448c-ba84-c777124fbb3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.448 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.462 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.467 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.495 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.590 254096 DEBUG nova.compute.manager [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG oslo_concurrency.lockutils [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG oslo_concurrency.lockutils [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG oslo_concurrency.lockutils [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG nova.compute.manager [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Processing event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.592 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.596 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091407.5960097, 287c21fa-3b34-448c-ba84-c777124fbb3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.597 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.601 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.604 254096 DEBUG nova.network.neutron [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated VIF entry in instance network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.605 254096 DEBUG nova.network.neutron [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.611 254096 INFO nova.virt.libvirt.driver [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance spawned successfully.#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.612 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.634 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.640 254096 DEBUG oslo_concurrency.lockutils [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.642 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.646 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.646 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.647 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.647 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.647 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.648 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.681 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.707 254096 INFO nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 12.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.708 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.772 254096 INFO nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 13.03 seconds to build instance.#033[00m
Nov 25 12:23:27 np0005535469 podman[420663]: 2025-11-25 17:23:27.790081916 +0000 UTC m=+0.078124268 container create 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:23:27 np0005535469 nova_compute[254092]: 2025-11-25 17:23:27.788 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:27 np0005535469 podman[420663]: 2025-11-25 17:23:27.744467373 +0000 UTC m=+0.032509745 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:23:27 np0005535469 systemd[1]: Started libpod-conmon-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1.scope.
Nov 25 12:23:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:23:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97e2b4204e8786f8aea331eed1d7b42c5b5b099c4f387b397a3ecb178dc7bf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:23:27 np0005535469 podman[420663]: 2025-11-25 17:23:27.910423251 +0000 UTC m=+0.198465623 container init 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 12:23:27 np0005535469 podman[420663]: 2025-11-25 17:23:27.925050069 +0000 UTC m=+0.213092411 container start 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 12:23:27 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : New worker (420684) forked
Nov 25 12:23:27 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : Loading success.
Nov 25 12:23:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 12:23:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:29 np0005535469 nova_compute[254092]: 2025-11-25 17:23:29.666 254096 DEBUG nova.compute.manager [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:23:29 np0005535469 nova_compute[254092]: 2025-11-25 17:23:29.667 254096 DEBUG oslo_concurrency.lockutils [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:29 np0005535469 nova_compute[254092]: 2025-11-25 17:23:29.667 254096 DEBUG oslo_concurrency.lockutils [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:29 np0005535469 nova_compute[254092]: 2025-11-25 17:23:29.668 254096 DEBUG oslo_concurrency.lockutils [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:29 np0005535469 nova_compute[254092]: 2025-11-25 17:23:29.668 254096 DEBUG nova.compute.manager [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] No waiting events found dispatching network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:23:29 np0005535469 nova_compute[254092]: 2025-11-25 17:23:29.668 254096 WARNING nova.compute.manager [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received unexpected event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c for instance with vm_state active and task_state None.#033[00m
Nov 25 12:23:30 np0005535469 nova_compute[254092]: 2025-11-25 17:23:30.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:23:31 np0005535469 nova_compute[254092]: 2025-11-25 17:23:31.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:32 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:32Z|01588|binding|INFO|Releasing lport 63e3c701-69aa-46d9-a2c2-91ded4242c02 from this chassis (sb_readonly=0)
Nov 25 12:23:32 np0005535469 NetworkManager[48891]: <info>  [1764091412.4913] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Nov 25 12:23:32 np0005535469 nova_compute[254092]: 2025-11-25 17:23:32.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:32 np0005535469 NetworkManager[48891]: <info>  [1764091412.4933] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Nov 25 12:23:32 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:32Z|01589|binding|INFO|Releasing lport 63e3c701-69aa-46d9-a2c2-91ded4242c02 from this chassis (sb_readonly=0)
Nov 25 12:23:32 np0005535469 nova_compute[254092]: 2025-11-25 17:23:32.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:32 np0005535469 nova_compute[254092]: 2025-11-25 17:23:32.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:33 np0005535469 nova_compute[254092]: 2025-11-25 17:23:33.097 254096 DEBUG nova.compute.manager [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:23:33 np0005535469 nova_compute[254092]: 2025-11-25 17:23:33.098 254096 DEBUG nova.compute.manager [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing instance network info cache due to event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:23:33 np0005535469 nova_compute[254092]: 2025-11-25 17:23:33.099 254096 DEBUG oslo_concurrency.lockutils [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:23:33 np0005535469 nova_compute[254092]: 2025-11-25 17:23:33.099 254096 DEBUG oslo_concurrency.lockutils [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:23:33 np0005535469 nova_compute[254092]: 2025-11-25 17:23:33.099 254096 DEBUG nova.network.neutron [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:23:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:23:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:23:35 np0005535469 nova_compute[254092]: 2025-11-25 17:23:35.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:35 np0005535469 nova_compute[254092]: 2025-11-25 17:23:35.718 254096 DEBUG nova.network.neutron [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated VIF entry in instance network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:23:35 np0005535469 nova_compute[254092]: 2025-11-25 17:23:35.719 254096 DEBUG nova.network.neutron [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:23:35 np0005535469 nova_compute[254092]: 2025-11-25 17:23:35.739 254096 DEBUG oslo_concurrency.lockutils [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:23:36 np0005535469 nova_compute[254092]: 2025-11-25 17:23:36.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:23:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:23:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:23:40
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:23:40 np0005535469 nova_compute[254092]: 2025-11-25 17:23:40.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:40Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:f7:b1 10.100.0.4
Nov 25 12:23:40 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:40Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:f7:b1 10.100.0.4
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:23:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:23:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 42K writes, 173K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.82 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2774 writes, 10K keys, 2774 commit groups, 1.0 writes per commit group, ingest: 12.22 MB, 0.02 MB/s#012Interval WAL: 2774 writes, 1128 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:23:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:23:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 110 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 25 12:23:41 np0005535469 nova_compute[254092]: 2025-11-25 17:23:41.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 110 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Nov 25 12:23:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:44 np0005535469 nova_compute[254092]: 2025-11-25 17:23:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:44 np0005535469 podman[420694]: 2025-11-25 17:23:44.679181082 +0000 UTC m=+0.088463988 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 12:23:44 np0005535469 podman[420695]: 2025-11-25 17:23:44.686721888 +0000 UTC m=+0.097343671 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:23:44 np0005535469 podman[420696]: 2025-11-25 17:23:44.714419872 +0000 UTC m=+0.118988161 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:23:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:23:45 np0005535469 nova_compute[254092]: 2025-11-25 17:23:45.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:45 np0005535469 nova_compute[254092]: 2025-11-25 17:23:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:46 np0005535469 nova_compute[254092]: 2025-11-25 17:23:46.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:23:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073357996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.047 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.138 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.138 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:23:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.409 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.501 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 287c21fa-3b34-448c-ba84-c777124fbb3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.501 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:23:47 np0005535469 nova_compute[254092]: 2025-11-25 17:23:47.561 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:23:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375878292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:23:48 np0005535469 nova_compute[254092]: 2025-11-25 17:23:48.056 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:48 np0005535469 nova_compute[254092]: 2025-11-25 17:23:48.063 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:23:48 np0005535469 nova_compute[254092]: 2025-11-25 17:23:48.080 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:23:48 np0005535469 nova_compute[254092]: 2025-11-25 17:23:48.105 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:23:48 np0005535469 nova_compute[254092]: 2025-11-25 17:23:48.106 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:23:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:23:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5401.2 total, 600.0 interval#012Cumulative writes: 44K writes, 176K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2510 writes, 10K keys, 2510 commit groups, 1.0 writes per commit group, ingest: 12.82 MB, 0.02 MB/s#012Interval WAL: 2510 writes, 964 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:23:50 np0005535469 nova_compute[254092]: 2025-11-25 17:23:50.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:23:51 np0005535469 nova_compute[254092]: 2025-11-25 17:23:51.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:23:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:23:51 np0005535469 nova_compute[254092]: 2025-11-25 17:23:51.987 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:51 np0005535469 nova_compute[254092]: 2025-11-25 17:23:51.988 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.005 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.073 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.074 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.083 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.084 254096 INFO nova.compute.claims [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.224 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:23:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121381838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.688 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.698 254096 DEBUG nova.compute.provider_tree [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.718 254096 DEBUG nova.scheduler.client.report [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.738 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.739 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.785 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.786 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.802 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.819 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.896 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.897 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.897 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Creating image(s)#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.924 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.951 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.976 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:52 np0005535469 nova_compute[254092]: 2025-11-25 17:23:52.980 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.033 254096 DEBUG nova.policy [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.086 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.088 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.089 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.090 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.121 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.125 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a54a9759-b1e7-4cbe-87b4-97878794f76e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 95 KiB/s wr, 17 op/s
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.166 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.168 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.169 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.169 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.425 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a54a9759-b1e7-4cbe-87b4-97878794f76e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.502 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.627 254096 DEBUG nova.objects.instance [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid a54a9759-b1e7-4cbe-87b4-97878794f76e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.640 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.641 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Ensure instance console log exists: /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.641 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.642 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:53 np0005535469 nova_compute[254092]: 2025-11-25 17:23:53.643 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:54 np0005535469 nova_compute[254092]: 2025-11-25 17:23:54.235 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Successfully created port: e68f7346-b984-4fd0-9e4a-3d8c74e511fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:23:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 152 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.316 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Successfully updated port: e68f7346-b984-4fd0-9e4a-3d8c74e511fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.329 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.329 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.330 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:23:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:23:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088015571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:23:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:23:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088015571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.426 254096 DEBUG nova.compute.manager [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.427 254096 DEBUG nova.compute.manager [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing instance network info cache due to event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.427 254096 DEBUG oslo_concurrency.lockutils [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.493 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.500 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.675 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.675 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.676 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:23:55 np0005535469 nova_compute[254092]: 2025-11-25 17:23:55.676 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:23:56 np0005535469 nova_compute[254092]: 2025-11-25 17:23:56.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.271 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.291 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.292 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance network_info: |[{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.292 254096 DEBUG oslo_concurrency.lockutils [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.292 254096 DEBUG nova.network.neutron [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.295 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start _get_guest_xml network_info=[{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.301 254096 WARNING nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.306 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.306 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.315 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.316 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.317 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.317 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.318 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.318 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.318 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.320 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.320 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.324 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:23:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441355535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.801 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.836 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:57 np0005535469 nova_compute[254092]: 2025-11-25 17:23:57.841 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.259 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.285 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.286 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:23:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:23:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925252480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.314 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.317 254096 DEBUG nova.virt.libvirt.vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1586289454',display_name='tempest-TestGettingAddress-server-1586289454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1586289454',id=148,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-vp2qu6ef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:52Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a54a9759-b1e7-4cbe-87b4-97878794f76e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.318 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.320 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.322 254096 DEBUG nova.objects.instance [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid a54a9759-b1e7-4cbe-87b4-97878794f76e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.343 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <uuid>a54a9759-b1e7-4cbe-87b4-97878794f76e</uuid>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <name>instance-00000094</name>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1586289454</nova:name>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:23:57</nova:creationTime>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <nova:port uuid="e68f7346-b984-4fd0-9e4a-3d8c74e511fd">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb9:c187" ipVersion="6"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb9:c187" ipVersion="6"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <entry name="serial">a54a9759-b1e7-4cbe-87b4-97878794f76e</entry>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <entry name="uuid">a54a9759-b1e7-4cbe-87b4-97878794f76e</entry>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a54a9759-b1e7-4cbe-87b4-97878794f76e_disk">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:b9:c1:87"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <target dev="tape68f7346-b9"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/console.log" append="off"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:23:58 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:23:58 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:23:58 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:23:58 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.344 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Preparing to wait for external event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.345 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.345 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.346 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.347 254096 DEBUG nova.virt.libvirt.vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1586289454',display_name='tempest-TestGettingAddress-server-1586289454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1586289454',id=148,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-vp2qu6ef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:52Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a54a9759-b1e7-4cbe-87b4-97878794f76e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.347 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.349 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.350 254096 DEBUG os_vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.352 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.353 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f7346-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape68f7346-b9, col_values=(('external_ids', {'iface-id': 'e68f7346-b984-4fd0-9e4a-3d8c74e511fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:c1:87', 'vm-uuid': 'a54a9759-b1e7-4cbe-87b4-97878794f76e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:58 np0005535469 NetworkManager[48891]: <info>  [1764091438.3650] manager: (tape68f7346-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.376 254096 INFO os_vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9')#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.429 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.429 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.429 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:b9:c1:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.430 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Using config drive#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.457 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.718 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Creating config drive at /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.728 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3r6fnge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.881 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3r6fnge" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.914 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:23:58 np0005535469 nova_compute[254092]: 2025-11-25 17:23:58.919 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.112 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.113 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deleting local config drive /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config because it was imported into RBD.#033[00m
Nov 25 12:23:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:23:59 np0005535469 kernel: tape68f7346-b9: entered promiscuous mode
Nov 25 12:23:59 np0005535469 NetworkManager[48891]: <info>  [1764091439.1746] manager: (tape68f7346-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Nov 25 12:23:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:59Z|01590|binding|INFO|Claiming lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd for this chassis.
Nov 25 12:23:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:59Z|01591|binding|INFO|e68f7346-b984-4fd0-9e4a-3d8c74e511fd: Claiming fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:59Z|01592|binding|INFO|Setting lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd ovn-installed in OVS
Nov 25 12:23:59 np0005535469 ovn_controller[153477]: 2025-11-25T17:23:59Z|01593|binding|INFO|Setting lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd up in Southbound
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.189 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], port_security=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feb9:c187/64 2001:db8::f816:3eff:feb9:c187/64', 'neutron:device_id': 'a54a9759-b1e7-4cbe-87b4-97878794f76e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e68f7346-b984-4fd0-9e4a-3d8c74e511fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.190 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e68f7346-b984-4fd0-9e4a-3d8c74e511fd in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 bound to our chassis#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.191 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:59 np0005535469 systemd-udevd[421126]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.211 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba52d0f2-9ee7-4bbd-94ec-508cd0eca59c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:59 np0005535469 systemd-machined[216343]: New machine qemu-182-instance-00000094.
Nov 25 12:23:59 np0005535469 NetworkManager[48891]: <info>  [1764091439.2285] device (tape68f7346-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:23:59 np0005535469 NetworkManager[48891]: <info>  [1764091439.2294] device (tape68f7346-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:23:59 np0005535469 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.246 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f2956179-876e-4e5c-8a29-d179d5ffbbfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.249 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[64796761-b524-44e0-93c4-c3b3b330dd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.269 254096 DEBUG nova.network.neutron [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updated VIF entry in instance network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.269 254096 DEBUG nova.network.neutron [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.283 254096 DEBUG oslo_concurrency.lockutils [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.283 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[223604d4-f32f-4a51-9d6b-76fb1e4727a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.309 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3624e202-ed3f-4ade-b9d2-9783837d590f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421138, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7744b8da-e396-46cc-92d2-683cb6119ddc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786477, 'tstamp': 786477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421141, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786482, 'tstamp': 786482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421141, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.339 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap285f996f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap285f996f-d0, col_values=(('external_ids', {'iface-id': '63e3c701-69aa-46d9-a2c2-91ded4242c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:23:59 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.345 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.411 254096 DEBUG nova.compute.manager [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.412 254096 DEBUG oslo_concurrency.lockutils [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.412 254096 DEBUG oslo_concurrency.lockutils [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.413 254096 DEBUG oslo_concurrency.lockutils [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.413 254096 DEBUG nova.compute.manager [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Processing event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:23:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.603 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.604 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091439.6029212, a54a9759-b1e7-4cbe-87b4-97878794f76e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.605 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Started (Lifecycle Event)#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.609 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.612 254096 INFO nova.virt.libvirt.driver [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance spawned successfully.#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.612 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.633 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.643 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.650 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.651 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.652 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.653 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.654 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.655 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.663 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.664 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091439.6032248, a54a9759-b1e7-4cbe-87b4-97878794f76e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.664 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.701 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.706 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091439.6092396, a54a9759-b1e7-4cbe-87b4-97878794f76e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.706 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.731 254096 INFO nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 6.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.731 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.733 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.740 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.777 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.799 254096 INFO nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 7.74 seconds to build instance.#033[00m
Nov 25 12:23:59 np0005535469 nova_compute[254092]: 2025-11-25 17:23:59.812 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:00 np0005535469 nova_compute[254092]: 2025-11-25 17:24:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:00 np0005535469 nova_compute[254092]: 2025-11-25 17:24:00.531 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.480 254096 DEBUG nova.compute.manager [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.480 254096 DEBUG oslo_concurrency.lockutils [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 DEBUG oslo_concurrency.lockutils [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 DEBUG oslo_concurrency.lockutils [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 DEBUG nova.compute.manager [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] No waiting events found dispatching network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 WARNING nova.compute.manager [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received unexpected event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd for instance with vm_state active and task_state None.#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:24:01 np0005535469 nova_compute[254092]: 2025-11-25 17:24:01.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 12:24:03 np0005535469 nova_compute[254092]: 2025-11-25 17:24:03.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:24:06 np0005535469 nova_compute[254092]: 2025-11-25 17:24:06.355 254096 DEBUG nova.compute.manager [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:06 np0005535469 nova_compute[254092]: 2025-11-25 17:24:06.355 254096 DEBUG nova.compute.manager [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing instance network info cache due to event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:24:06 np0005535469 nova_compute[254092]: 2025-11-25 17:24:06.356 254096 DEBUG oslo_concurrency.lockutils [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:24:06 np0005535469 nova_compute[254092]: 2025-11-25 17:24:06.356 254096 DEBUG oslo_concurrency.lockutils [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:24:06 np0005535469 nova_compute[254092]: 2025-11-25 17:24:06.357 254096 DEBUG nova.network.neutron [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:24:06 np0005535469 nova_compute[254092]: 2025-11-25 17:24:06.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 86 op/s
Nov 25 12:24:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:24:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5401.5 total, 600.0 interval#012Cumulative writes: 37K writes, 149K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2136 writes, 8742 keys, 2136 commit groups, 1.0 writes per commit group, ingest: 10.69 MB, 0.02 MB/s#012Interval WAL: 2136 writes, 831 syncs, 2.57 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:24:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d5524e19-2d43-409a-9094-82f4123f7ca9 does not exist
Nov 25 12:24:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f8fec3ff-4c02-4f7b-901c-fc5a8ae6487b does not exist
Nov 25 12:24:07 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 11a9a55c-5b04-494d-9dc3-b494320b8eb4 does not exist
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:24:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:24:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:24:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:24:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:24:08 np0005535469 nova_compute[254092]: 2025-11-25 17:24:08.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:24:08 np0005535469 nova_compute[254092]: 2025-11-25 17:24:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:24:08 np0005535469 nova_compute[254092]: 2025-11-25 17:24:08.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.583314053 +0000 UTC m=+0.054448883 container create 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:24:08 np0005535469 systemd[1]: Started libpod-conmon-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope.
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.559866675 +0000 UTC m=+0.031001515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:24:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.692784043 +0000 UTC m=+0.163918913 container init 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.703107674 +0000 UTC m=+0.174242504 container start 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.707624017 +0000 UTC m=+0.178758867 container attach 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 12:24:08 np0005535469 festive_proskuriakova[421467]: 167 167
Nov 25 12:24:08 np0005535469 systemd[1]: libpod-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope: Deactivated successfully.
Nov 25 12:24:08 np0005535469 conmon[421467]: conmon 536528a6a72e508137b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope/container/memory.events
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.713762594 +0000 UTC m=+0.184897454 container died 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:24:08 np0005535469 nova_compute[254092]: 2025-11-25 17:24:08.739 254096 DEBUG nova.network.neutron [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updated VIF entry in instance network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:24:08 np0005535469 nova_compute[254092]: 2025-11-25 17:24:08.741 254096 DEBUG nova.network.neutron [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:24:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1e223e0d48bbb5959f97c5f005fe09c02503d2d1afa71671b6ae6a57b62bbf34-merged.mount: Deactivated successfully.
Nov 25 12:24:08 np0005535469 podman[421451]: 2025-11-25 17:24:08.768495273 +0000 UTC m=+0.239630133 container remove 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:24:08 np0005535469 nova_compute[254092]: 2025-11-25 17:24:08.776 254096 DEBUG oslo_concurrency.lockutils [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:24:08 np0005535469 systemd[1]: libpod-conmon-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope: Deactivated successfully.
Nov 25 12:24:08 np0005535469 podman[421490]: 2025-11-25 17:24:08.9976185 +0000 UTC m=+0.051104092 container create cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:24:09 np0005535469 systemd[1]: Started libpod-conmon-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope.
Nov 25 12:24:09 np0005535469 podman[421490]: 2025-11-25 17:24:08.97998395 +0000 UTC m=+0.033469562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:24:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:24:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:09 np0005535469 podman[421490]: 2025-11-25 17:24:09.127243079 +0000 UTC m=+0.180728771 container init cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:24:09 np0005535469 podman[421490]: 2025-11-25 17:24:09.137931969 +0000 UTC m=+0.191417561 container start cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:24:09 np0005535469 podman[421490]: 2025-11-25 17:24:09.143567303 +0000 UTC m=+0.197052905 container attach cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:24:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:24:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:24:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:24:10 np0005535469 musing_hofstadter[421506]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:24:10 np0005535469 musing_hofstadter[421506]: --> relative data size: 1.0
Nov 25 12:24:10 np0005535469 musing_hofstadter[421506]: --> All data devices are unavailable
Nov 25 12:24:10 np0005535469 systemd[1]: libpod-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope: Deactivated successfully.
Nov 25 12:24:10 np0005535469 podman[421490]: 2025-11-25 17:24:10.419517163 +0000 UTC m=+1.473002785 container died cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:24:10 np0005535469 systemd[1]: libpod-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope: Consumed 1.209s CPU time.
Nov 25 12:24:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367-merged.mount: Deactivated successfully.
Nov 25 12:24:10 np0005535469 podman[421490]: 2025-11-25 17:24:10.492364696 +0000 UTC m=+1.545850288 container remove cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:24:10 np0005535469 nova_compute[254092]: 2025-11-25 17:24:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:24:10 np0005535469 systemd[1]: libpod-conmon-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope: Deactivated successfully.
Nov 25 12:24:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 168 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 335 KiB/s wr, 79 op/s
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.233946042 +0000 UTC m=+0.058005260 container create cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:24:11 np0005535469 systemd[1]: Started libpod-conmon-cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0.scope.
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.207949043 +0000 UTC m=+0.032008281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:24:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.324119496 +0000 UTC m=+0.148178714 container init cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.333410959 +0000 UTC m=+0.157470177 container start cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.337082988 +0000 UTC m=+0.161142206 container attach cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:24:11 np0005535469 charming_jackson[421701]: 167 167
Nov 25 12:24:11 np0005535469 systemd[1]: libpod-cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0.scope: Deactivated successfully.
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.34042781 +0000 UTC m=+0.164487058 container died cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:24:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ecca809090520be2f9f4f49afabd370a85ca8080d88bd7a6c4284ea4c8757319-merged.mount: Deactivated successfully.
Nov 25 12:24:11 np0005535469 podman[421685]: 2025-11-25 17:24:11.383309517 +0000 UTC m=+0.207368755 container remove cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:24:11 np0005535469 systemd[1]: libpod-conmon-cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0.scope: Deactivated successfully.
Nov 25 12:24:11 np0005535469 podman[421725]: 2025-11-25 17:24:11.60311269 +0000 UTC m=+0.067020306 container create d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:24:11 np0005535469 nova_compute[254092]: 2025-11-25 17:24:11.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:24:11 np0005535469 systemd[1]: Started libpod-conmon-d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4.scope.
Nov 25 12:24:11 np0005535469 podman[421725]: 2025-11-25 17:24:11.575556289 +0000 UTC m=+0.039463895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:24:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:24:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:11 np0005535469 podman[421725]: 2025-11-25 17:24:11.729017677 +0000 UTC m=+0.192925293 container init d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:24:11 np0005535469 podman[421725]: 2025-11-25 17:24:11.74124957 +0000 UTC m=+0.205157196 container start d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:24:11 np0005535469 podman[421725]: 2025-11-25 17:24:11.745791103 +0000 UTC m=+0.209698719 container attach d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:24:11 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:11Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:c1:87 10.100.0.6
Nov 25 12:24:11 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:11Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:c1:87 10.100.0.6
Nov 25 12:24:12 np0005535469 boring_swanson[421741]: {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:    "0": [
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:        {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "devices": [
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "/dev/loop3"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            ],
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_name": "ceph_lv0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_size": "21470642176",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "name": "ceph_lv0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "tags": {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cluster_name": "ceph",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.crush_device_class": "",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.encrypted": "0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osd_id": "0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.type": "block",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.vdo": "0"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            },
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "type": "block",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "vg_name": "ceph_vg0"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:        }
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:    ],
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:    "1": [
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:        {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "devices": [
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "/dev/loop4"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            ],
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_name": "ceph_lv1",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_size": "21470642176",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "name": "ceph_lv1",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "tags": {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cluster_name": "ceph",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.crush_device_class": "",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.encrypted": "0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osd_id": "1",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.type": "block",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.vdo": "0"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            },
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "type": "block",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "vg_name": "ceph_vg1"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:        }
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:    ],
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:    "2": [
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:        {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "devices": [
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "/dev/loop5"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            ],
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_name": "ceph_lv2",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_size": "21470642176",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "name": "ceph_lv2",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "tags": {
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.cluster_name": "ceph",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.crush_device_class": "",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.encrypted": "0",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osd_id": "2",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.type": "block",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:                "ceph.vdo": "0"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            },
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "type": "block",
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:            "vg_name": "ceph_vg2"
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:        }
Nov 25 12:24:12 np0005535469 boring_swanson[421741]:    ]
Nov 25 12:24:12 np0005535469 boring_swanson[421741]: }
Nov 25 12:24:12 np0005535469 systemd[1]: libpod-d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4.scope: Deactivated successfully.
Nov 25 12:24:12 np0005535469 podman[421725]: 2025-11-25 17:24:12.596572481 +0000 UTC m=+1.060480077 container died d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:24:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3-merged.mount: Deactivated successfully.
Nov 25 12:24:12 np0005535469 podman[421725]: 2025-11-25 17:24:12.663065241 +0000 UTC m=+1.126972827 container remove d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:24:12 np0005535469 systemd[1]: libpod-conmon-d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4.scope: Deactivated successfully.
Nov 25 12:24:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 168 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 323 KiB/s wr, 50 op/s
Nov 25 12:24:13 np0005535469 nova_compute[254092]: 2025-11-25 17:24:13.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.521589449 +0000 UTC m=+0.065611586 container create d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:24:13 np0005535469 systemd[1]: Started libpod-conmon-d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d.scope.
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.49039544 +0000 UTC m=+0.034417627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:24:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.638591924 +0000 UTC m=+0.182614111 container init d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.649196823 +0000 UTC m=+0.193218930 container start d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.653436038 +0000 UTC m=+0.197458175 container attach d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:24:13 np0005535469 hardcore_almeida[421919]: 167 167
Nov 25 12:24:13 np0005535469 systemd[1]: libpod-d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d.scope: Deactivated successfully.
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.658862256 +0000 UTC m=+0.202884403 container died d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:24:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:24:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:13.664 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:24:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:13.666 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:24:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e35157267a79c246699a0d7e67b39f02fbee09a0c444ab656233678a34cebf9a-merged.mount: Deactivated successfully.
Nov 25 12:24:13 np0005535469 podman[421903]: 2025-11-25 17:24:13.703906332 +0000 UTC m=+0.247928449 container remove d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 12:24:13 np0005535469 systemd[1]: libpod-conmon-d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d.scope: Deactivated successfully.
Nov 25 12:24:13 np0005535469 podman[421942]: 2025-11-25 17:24:13.973285104 +0000 UTC m=+0.078218770 container create 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:24:14 np0005535469 systemd[1]: Started libpod-conmon-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope.
Nov 25 12:24:14 np0005535469 podman[421942]: 2025-11-25 17:24:13.940177233 +0000 UTC m=+0.045110939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:24:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:24:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:24:14 np0005535469 podman[421942]: 2025-11-25 17:24:14.07930374 +0000 UTC m=+0.184237456 container init 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:24:14 np0005535469 podman[421942]: 2025-11-25 17:24:14.087729569 +0000 UTC m=+0.192663275 container start 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:24:14 np0005535469 podman[421942]: 2025-11-25 17:24:14.092369606 +0000 UTC m=+0.197303312 container attach 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:24:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:15 np0005535469 objective_carson[421959]: {
Nov 25 12:24:15 np0005535469 objective_carson[421959]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "osd_id": 1,
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "type": "bluestore"
Nov 25 12:24:15 np0005535469 objective_carson[421959]:    },
Nov 25 12:24:15 np0005535469 objective_carson[421959]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "osd_id": 2,
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "type": "bluestore"
Nov 25 12:24:15 np0005535469 objective_carson[421959]:    },
Nov 25 12:24:15 np0005535469 objective_carson[421959]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "osd_id": 0,
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:24:15 np0005535469 objective_carson[421959]:        "type": "bluestore"
Nov 25 12:24:15 np0005535469 objective_carson[421959]:    }
Nov 25 12:24:15 np0005535469 objective_carson[421959]: }
Nov 25 12:24:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 87 op/s
Nov 25 12:24:15 np0005535469 systemd[1]: libpod-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope: Deactivated successfully.
Nov 25 12:24:15 np0005535469 systemd[1]: libpod-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope: Consumed 1.132s CPU time.
Nov 25 12:24:15 np0005535469 podman[421993]: 2025-11-25 17:24:15.306206155 +0000 UTC m=+0.068797883 container died 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:24:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2-merged.mount: Deactivated successfully.
Nov 25 12:24:15 np0005535469 podman[421995]: 2025-11-25 17:24:15.406383142 +0000 UTC m=+0.159339968 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 12:24:15 np0005535469 podman[421993]: 2025-11-25 17:24:15.425561044 +0000 UTC m=+0.188152792 container remove 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:24:15 np0005535469 podman[421992]: 2025-11-25 17:24:15.433722126 +0000 UTC m=+0.188472681 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:24:15 np0005535469 systemd[1]: libpod-conmon-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope: Deactivated successfully.
Nov 25 12:24:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:24:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:24:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:24:15 np0005535469 podman[421996]: 2025-11-25 17:24:15.521180717 +0000 UTC m=+0.270296259 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:24:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:24:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3b9e0868-960b-43c7-9b13-656be2fa11cc does not exist
Nov 25 12:24:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b6afd4b7-c854-4cca-b42b-3ccceb2f423d does not exist
Nov 25 12:24:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:24:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:24:16 np0005535469 nova_compute[254092]: 2025-11-25 17:24:16.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:24:18 np0005535469 nova_compute[254092]: 2025-11-25 17:24:18.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:24:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 12:24:21 np0005535469 nova_compute[254092]: 2025-11-25 17:24:21.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 12:24:23 np0005535469 nova_compute[254092]: 2025-11-25 17:24:23.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:24.562 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:24:24 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:24.563 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.840 254096 DEBUG nova.compute.manager [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.840 254096 DEBUG nova.compute.manager [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing instance network info cache due to event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.841 254096 DEBUG oslo_concurrency.lockutils [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.842 254096 DEBUG oslo_concurrency.lockutils [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.842 254096 DEBUG nova.network.neutron [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.892 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.893 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.894 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.894 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.895 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.897 254096 INFO nova.compute.manager [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Terminating instance#033[00m
Nov 25 12:24:24 np0005535469 nova_compute[254092]: 2025-11-25 17:24:24.899 254096 DEBUG nova.compute.manager [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:24:24 np0005535469 kernel: tape68f7346-b9 (unregistering): left promiscuous mode
Nov 25 12:24:24 np0005535469 NetworkManager[48891]: <info>  [1764091464.9639] device (tape68f7346-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:24:25 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:25Z|01594|binding|INFO|Releasing lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd from this chassis (sb_readonly=0)
Nov 25 12:24:25 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:25Z|01595|binding|INFO|Setting lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd down in Southbound
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:25Z|01596|binding|INFO|Removing iface tape68f7346-b9 ovn-installed in OVS
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.035 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], port_security=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feb9:c187/64 2001:db8::f816:3eff:feb9:c187/64', 'neutron:device_id': 'a54a9759-b1e7-4cbe-87b4-97878794f76e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e68f7346-b984-4fd0-9e4a-3d8c74e511fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.036 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e68f7346-b984-4fd0-9e4a-3d8c74e511fd in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 unbound from our chassis#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.037 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.043 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7652e7c5-fc5e-49e3-b6e7-5f0219b8cb46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.106 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5e792bb2-6d05-42ee-ad8a-3ac9b1ce10b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:25 np0005535469 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 25 12:24:25 np0005535469 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 14.037s CPU time.
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.113 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b899dfa9-0e2b-438d-b305-8ff7b77d7f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:25 np0005535469 systemd-machined[216343]: Machine qemu-182-instance-00000094 terminated.
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.161 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30582e10-4181-4d81-a89a-05c34a7a3ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c0f47-e321-4648-b871-a248d014b97d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422131, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.210 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe3fbfb-f219-41da-ba4e-8999e85fee58]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786477, 'tstamp': 786477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422132, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786482, 'tstamp': 786482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422132, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap285f996f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap285f996f-d0, col_values=(('external_ids', {'iface-id': '63e3c701-69aa-46d9-a2c2-91ded4242c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.221 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.346 254096 INFO nova.virt.libvirt.driver [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance destroyed successfully.#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.347 254096 DEBUG nova.objects.instance [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid a54a9759-b1e7-4cbe-87b4-97878794f76e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.358 254096 DEBUG nova.virt.libvirt.vif [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:23:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1586289454',display_name='tempest-TestGettingAddress-server-1586289454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1586289454',id=148,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:23:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-vp2qu6ef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:23:59Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a54a9759-b1e7-4cbe-87b4-97878794f76e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.359 254096 DEBUG nova.network.os_vif_util [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.360 254096 DEBUG nova.network.os_vif_util [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.361 254096 DEBUG os_vif [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.365 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f7346-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.388 254096 INFO os_vif [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9')#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.859 254096 INFO nova.virt.libvirt.driver [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deleting instance files /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e_del#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.860 254096 INFO nova.virt.libvirt.driver [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deletion of /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e_del complete#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.912 254096 INFO nova.compute.manager [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.913 254096 DEBUG oslo.service.loopingcall [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.913 254096 DEBUG nova.compute.manager [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:24:25 np0005535469 nova_compute[254092]: 2025-11-25 17:24:25.913 254096 DEBUG nova.network.neutron [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.964 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-unplugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.964 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.965 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.965 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.965 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] No waiting events found dispatching network-vif-unplugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.966 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-unplugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.966 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.966 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.967 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.967 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.967 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] No waiting events found dispatching network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:24:26 np0005535469 nova_compute[254092]: 2025-11-25 17:24:26.968 254096 WARNING nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received unexpected event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:24:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 182 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 165 KiB/s wr, 35 op/s
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.732 254096 DEBUG nova.network.neutron [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.734 254096 DEBUG nova.network.neutron [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updated VIF entry in instance network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.735 254096 DEBUG nova.network.neutron [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.752 254096 DEBUG oslo_concurrency.lockutils [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.754 254096 INFO nova.compute.manager [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 1.84 seconds to deallocate network for instance.#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.791 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.792 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:27 np0005535469 nova_compute[254092]: 2025-11-25 17:24:27.879 254096 DEBUG oslo_concurrency.processutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:24:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:24:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497703530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:24:28 np0005535469 nova_compute[254092]: 2025-11-25 17:24:28.334 254096 DEBUG oslo_concurrency.processutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:24:28 np0005535469 nova_compute[254092]: 2025-11-25 17:24:28.346 254096 DEBUG nova.compute.provider_tree [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:24:28 np0005535469 nova_compute[254092]: 2025-11-25 17:24:28.372 254096 DEBUG nova.scheduler.client.report [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:24:28 np0005535469 nova_compute[254092]: 2025-11-25 17:24:28.422 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:28 np0005535469 nova_compute[254092]: 2025-11-25 17:24:28.451 254096 INFO nova.scheduler.client.report [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance a54a9759-b1e7-4cbe-87b4-97878794f76e#033[00m
Nov 25 12:24:28 np0005535469 nova_compute[254092]: 2025-11-25 17:24:28.524 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.084 254096 DEBUG nova.compute.manager [req-4f85f826-dae4-4501-b239-f5a6f4829437 req-365d071b-a1cc-4152-beb2-9a6def57582a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-deleted-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 182 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 13 op/s
Nov 25 12:24:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.627 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.628 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.629 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.629 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.630 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.632 254096 INFO nova.compute.manager [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Terminating instance#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.634 254096 DEBUG nova.compute.manager [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:24:29 np0005535469 kernel: tape7f2fe0c-53 (unregistering): left promiscuous mode
Nov 25 12:24:29 np0005535469 NetworkManager[48891]: <info>  [1764091469.7343] device (tape7f2fe0c-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:24:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:29Z|01597|binding|INFO|Releasing lport e7f2fe0c-53c0-4be7-8362-47c92514001c from this chassis (sb_readonly=0)
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:29Z|01598|binding|INFO|Setting lport e7f2fe0c-53c0-4be7-8362-47c92514001c down in Southbound
Nov 25 12:24:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:24:29Z|01599|binding|INFO|Removing iface tape7f2fe0c-53 ovn-installed in OVS
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.754 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], port_security=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe41:f7b1/64 2001:db8::f816:3eff:fe41:f7b1/64', 'neutron:device_id': '287c21fa-3b34-448c-ba84-c777124fbb3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7f2fe0c-53c0-4be7-8362-47c92514001c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:24:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.755 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7f2fe0c-53c0-4be7-8362-47c92514001c in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 unbound from our chassis#033[00m
Nov 25 12:24:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.756 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:24:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.757 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[157b6e13-dece-46df-bc3e-f25f25bb94f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.758 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 namespace which is not needed anymore#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 25 12:24:29 np0005535469 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 16.490s CPU time.
Nov 25 12:24:29 np0005535469 systemd-machined[216343]: Machine qemu-181-instance-00000093 terminated.
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.867 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.883 254096 INFO nova.virt.libvirt.driver [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance destroyed successfully.#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.884 254096 DEBUG nova.objects.instance [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.895 254096 DEBUG nova.virt.libvirt.vif [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628099618',display_name='tempest-TestGettingAddress-server-1628099618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628099618',id=147,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:23:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-7s8020gn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:23:27Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=287c21fa-3b34-448c-ba84-c777124fbb3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.896 254096 DEBUG nova.network.os_vif_util [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.898 254096 DEBUG nova.network.os_vif_util [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.899 254096 DEBUG os_vif [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7f2fe0c-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:29 np0005535469 nova_compute[254092]: 2025-11-25 17:24:29.910 254096 INFO os_vif [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53')#033[00m
Nov 25 12:24:29 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : haproxy version is 2.8.14-c23fe91
Nov 25 12:24:29 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : path to executable is /usr/sbin/haproxy
Nov 25 12:24:29 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [WARNING]  (420682) : Exiting Master process...
Nov 25 12:24:29 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [WARNING]  (420682) : Exiting Master process...
Nov 25 12:24:29 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [ALERT]    (420682) : Current worker (420684) exited with code 143 (Terminated)
Nov 25 12:24:29 np0005535469 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [WARNING]  (420682) : All workers exited. Exiting... (0)
Nov 25 12:24:29 np0005535469 systemd[1]: libpod-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1.scope: Deactivated successfully.
Nov 25 12:24:29 np0005535469 podman[422219]: 2025-11-25 17:24:29.981856428 +0000 UTC m=+0.061333710 container died 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 12:24:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1-userdata-shm.mount: Deactivated successfully.
Nov 25 12:24:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b97e2b4204e8786f8aea331eed1d7b42c5b5b099c4f387b397a3ecb178dc7bf5-merged.mount: Deactivated successfully.
Nov 25 12:24:30 np0005535469 podman[422219]: 2025-11-25 17:24:30.038231123 +0000 UTC m=+0.117708415 container cleanup 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:24:30 np0005535469 systemd[1]: libpod-conmon-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1.scope: Deactivated successfully.
Nov 25 12:24:30 np0005535469 podman[422264]: 2025-11-25 17:24:30.141985277 +0000 UTC m=+0.071292971 container remove 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.160 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aadff6e1-ee84-482c-bdfa-98a0fad06887]: (4, ('Tue Nov 25 05:24:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 (73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1)\n73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1\nTue Nov 25 05:24:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 (73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1)\n73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.163 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fc6848-f389-4392-8dc9-7ed37bdc96c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.164 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:30 np0005535469 kernel: tap285f996f-d0: left promiscuous mode
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.200 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc09232-f7e9-4279-aad7-8c9eee235b66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.216 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a794f0-b764-4ac3-b931-c54ad0b1fe5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.217 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[765042e7-5e9f-425d-b756-1bd77a1cf8b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.246 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[33c48aeb-6a31-4e92-8190-98756e34c16e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786449, 'reachable_time': 40400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422283, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 systemd[1]: run-netns-ovnmeta\x2d285f996f\x2dd0be\x2d4c9e\x2d9d0c\x2db8d730990ab6.mount: Deactivated successfully.
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.252 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:24:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.253 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[05469f18-e78a-4d66-bff7-80f2637aa70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.324 254096 INFO nova.virt.libvirt.driver [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deleting instance files /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d_del#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.324 254096 INFO nova.virt.libvirt.driver [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deletion of /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d_del complete#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.401 254096 INFO nova.compute.manager [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.402 254096 DEBUG oslo.service.loopingcall [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.403 254096 DEBUG nova.compute.manager [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:24:30 np0005535469 nova_compute[254092]: 2025-11-25 17:24:30.403 254096 DEBUG nova.network.neutron [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:24:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 33 op/s
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.230 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing instance network info cache due to event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG nova.network.neutron [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.613 254096 DEBUG nova.network.neutron [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.639 254096 INFO nova.compute.manager [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 1.24 seconds to deallocate network for instance.#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.695 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.696 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:31 np0005535469 nova_compute[254092]: 2025-11-25 17:24:31.772 254096 DEBUG oslo_concurrency.processutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.052 254096 DEBUG nova.compute.manager [req-a0e1b1f8-59d9-4ad8-be0a-079c0ff0b47e req-a1b95170-e697-4276-be51-35b53894123d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-deleted-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:24:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/775001318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.301 254096 DEBUG oslo_concurrency.processutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.311 254096 DEBUG nova.compute.provider_tree [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.338 254096 DEBUG nova.scheduler.client.report [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.363 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.407 254096 INFO nova.scheduler.client.report [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 287c21fa-3b34-448c-ba84-c777124fbb3d#033[00m
Nov 25 12:24:32 np0005535469 nova_compute[254092]: 2025-11-25 17:24:32.465 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:32 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:32.565 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:24:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 32 op/s
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.915 254096 DEBUG nova.network.neutron [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated VIF entry in instance network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.917 254096 DEBUG nova.network.neutron [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.940 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.941 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-unplugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.942 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.942 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.943 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.943 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] No waiting events found dispatching network-vif-unplugged-e7f2fe0c-53c0-4be7-8362-47c92514001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.944 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-unplugged-e7f2fe0c-53c0-4be7-8362-47c92514001c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.944 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.945 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.945 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.946 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.946 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] No waiting events found dispatching network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:24:33 np0005535469 nova_compute[254092]: 2025-11-25 17:24:33.947 254096 WARNING nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received unexpected event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:24:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:34 np0005535469 nova_compute[254092]: 2025-11-25 17:24:34.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 41 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 12:24:36 np0005535469 nova_compute[254092]: 2025-11-25 17:24:36.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 12:24:38 np0005535469 nova_compute[254092]: 2025-11-25 17:24:38.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:38 np0005535469 nova_compute[254092]: 2025-11-25 17:24:38.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Nov 25 12:24:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:39 np0005535469 nova_compute[254092]: 2025-11-25 17:24:39.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:24:40
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:24:40 np0005535469 nova_compute[254092]: 2025-11-25 17:24:40.344 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091465.342227, a54a9759-b1e7-4cbe-87b4-97878794f76e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:24:40 np0005535469 nova_compute[254092]: 2025-11-25 17:24:40.344 254096 INFO nova.compute.manager [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:24:40 np0005535469 nova_compute[254092]: 2025-11-25 17:24:40.368 254096 DEBUG nova.compute.manager [None req-9276b327-c376-43c0-a400-015fba9dd344 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:24:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:24:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Nov 25 12:24:41 np0005535469 nova_compute[254092]: 2025-11-25 17:24:41.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 938 B/s wr, 25 op/s
Nov 25 12:24:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:44 np0005535469 nova_compute[254092]: 2025-11-25 17:24:44.880 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091469.879128, 287c21fa-3b34-448c-ba84-c777124fbb3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:24:44 np0005535469 nova_compute[254092]: 2025-11-25 17:24:44.881 254096 INFO nova.compute.manager [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:24:44 np0005535469 nova_compute[254092]: 2025-11-25 17:24:44.897 254096 DEBUG nova.compute.manager [None req-8e9de573-e3b3-401b-9535-28eb81215ca6 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:24:44 np0005535469 nova_compute[254092]: 2025-11-25 17:24:44.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 938 B/s wr, 25 op/s
Nov 25 12:24:45 np0005535469 podman[422307]: 2025-11-25 17:24:45.662935247 +0000 UTC m=+0.071051996 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 12:24:45 np0005535469 podman[422308]: 2025-11-25 17:24:45.694618069 +0000 UTC m=+0.096472268 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:24:45 np0005535469 podman[422309]: 2025-11-25 17:24:45.700194251 +0000 UTC m=+0.108418893 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 25 12:24:46 np0005535469 nova_compute[254092]: 2025-11-25 17:24:46.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:46 np0005535469 nova_compute[254092]: 2025-11-25 17:24:46.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:46 np0005535469 nova_compute[254092]: 2025-11-25 17:24:46.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:24:48 np0005535469 nova_compute[254092]: 2025-11-25 17:24:48.524 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:24:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:24:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3393384912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.030 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.181 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.182 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3665MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:24:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.237 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:24:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:24:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865904016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.743 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.750 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.764 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.782 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:24:49 np0005535469 nova_compute[254092]: 2025-11-25 17:24:49.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:51 np0005535469 nova_compute[254092]: 2025-11-25 17:24:51.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:24:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:24:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.624 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e9:6e 10.100.0.2 2001:db8::f816:3eff:fed7:e96e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed7:e96e/64', 'neutron:device_id': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=10a05078-1fb1-4ddb-9ad4-b52f3a95063f) old=Port_Binding(mac=['fa:16:3e:d7:e9:6e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:24:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.626 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 10a05078-1fb1-4ddb-9ad4-b52f3a95063f in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d updated#033[00m
Nov 25 12:24:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.628 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88c42593-de32-4e23-b7f7-5cac507fe68d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:24:53 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d49b7e5a-8f8e-4113-a037-76d32b0e6008]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:24:53 np0005535469 nova_compute[254092]: 2025-11-25 17:24:53.782 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:53 np0005535469 nova_compute[254092]: 2025-11-25 17:24:53.782 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:53 np0005535469 nova_compute[254092]: 2025-11-25 17:24:53.783 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:53 np0005535469 nova_compute[254092]: 2025-11-25 17:24:53.783 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:24:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:54 np0005535469 nova_compute[254092]: 2025-11-25 17:24:54.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:24:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521902727' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:24:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:24:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521902727' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:24:56 np0005535469 nova_compute[254092]: 2025-11-25 17:24:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:24:56 np0005535469 nova_compute[254092]: 2025-11-25 17:24:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:24:56 np0005535469 nova_compute[254092]: 2025-11-25 17:24:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:24:56 np0005535469 nova_compute[254092]: 2025-11-25 17:24:56.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:24:56 np0005535469 nova_compute[254092]: 2025-11-25 17:24:56.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:24:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:24:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:24:59 np0005535469 nova_compute[254092]: 2025-11-25 17:24:59.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.152 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.153 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.170 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.244 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.245 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.255 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.256 254096 INFO nova.compute.claims [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.378 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:25:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398225616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.871 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.879 254096 DEBUG nova.compute.provider_tree [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.893 254096 DEBUG nova.scheduler.client.report [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.908 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.909 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.948 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.948 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.971 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:25:00 np0005535469 nova_compute[254092]: 2025-11-25 17:25:00.985 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.071 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.073 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.073 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Creating image(s)#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.117 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.150 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.179 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.184 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.298 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.299 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.300 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.301 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.323 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.327 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.385 254096 DEBUG nova.policy [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.631 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.688 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.817 254096 DEBUG nova.objects.instance [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.847 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.848 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Ensure instance console log exists: /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.849 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.849 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:01 np0005535469 nova_compute[254092]: 2025-11-25 17:25:01.849 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:02 np0005535469 nova_compute[254092]: 2025-11-25 17:25:02.710 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Successfully created port: 329205f7-aac9-4c77-b1eb-b7af04c34038 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:25:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.548 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Successfully updated port: 329205f7-aac9-4c77-b1eb-b7af04c34038 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.566 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.567 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.567 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.667 254096 DEBUG nova.compute.manager [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.668 254096 DEBUG nova.compute.manager [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing instance network info cache due to event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.668 254096 DEBUG oslo_concurrency.lockutils [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:25:03 np0005535469 nova_compute[254092]: 2025-11-25 17:25:03.716 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:25:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.862 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.883 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.884 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance network_info: |[{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.885 254096 DEBUG oslo_concurrency.lockutils [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.885 254096 DEBUG nova.network.neutron [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.888 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start _get_guest_xml network_info=[{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.894 254096 WARNING nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.900 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.901 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.910 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.911 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.911 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.914 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.914 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.914 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.917 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:04 np0005535469 nova_compute[254092]: 2025-11-25 17:25:04.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 74 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Nov 25 12:25:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:25:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325347158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.435 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.475 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.481 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:25:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931644791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.983 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.985 254096 DEBUG nova.virt.libvirt.vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1562740359',display_name='tempest-TestGettingAddress-server-1562740359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1562740359',id=149,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-01dpdy1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:01Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=03d40f4f-1bf3-4e1d-8844-bae4b32443cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.986 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.987 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:25:05 np0005535469 nova_compute[254092]: 2025-11-25 17:25:05.989 254096 DEBUG nova.objects.instance [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.002 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <uuid>03d40f4f-1bf3-4e1d-8844-bae4b32443cc</uuid>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <name>instance-00000095</name>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1562740359</nova:name>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:25:04</nova:creationTime>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <nova:port uuid="329205f7-aac9-4c77-b1eb-b7af04c34038">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8d:692e" ipVersion="6"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <entry name="serial">03d40f4f-1bf3-4e1d-8844-bae4b32443cc</entry>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <entry name="uuid">03d40f4f-1bf3-4e1d-8844-bae4b32443cc</entry>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:8d:69:2e"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <target dev="tap329205f7-aa"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/console.log" append="off"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:25:06 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:25:06 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:25:06 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:25:06 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.004 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Preparing to wait for external event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.005 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.006 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.006 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.007 254096 DEBUG nova.virt.libvirt.vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1562740359',display_name='tempest-TestGettingAddress-server-1562740359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1562740359',id=149,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-01dpdy1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:01Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=03d40f4f-1bf3-4e1d-8844-bae4b32443cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.008 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.009 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.009 254096 DEBUG os_vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.010 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.011 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.011 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.016 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap329205f7-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.017 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap329205f7-aa, col_values=(('external_ids', {'iface-id': '329205f7-aac9-4c77-b1eb-b7af04c34038', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:69:2e', 'vm-uuid': '03d40f4f-1bf3-4e1d-8844-bae4b32443cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:06 np0005535469 NetworkManager[48891]: <info>  [1764091506.0223] manager: (tap329205f7-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.023 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.029 254096 INFO os_vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa')#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.069 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.070 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.070 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:8d:69:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.071 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Using config drive#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.097 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:06 np0005535469 nova_compute[254092]: 2025-11-25 17:25:06.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.421 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Creating config drive at /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.427 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98_qw4mz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.599 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98_qw4mz" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.644 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.650 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.816 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.818 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deleting local config drive /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config because it was imported into RBD.#033[00m
Nov 25 12:25:07 np0005535469 kernel: tap329205f7-aa: entered promiscuous mode
Nov 25 12:25:07 np0005535469 NetworkManager[48891]: <info>  [1764091507.8767] manager: (tap329205f7-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Nov 25 12:25:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:07Z|01600|binding|INFO|Claiming lport 329205f7-aac9-4c77-b1eb-b7af04c34038 for this chassis.
Nov 25 12:25:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:07Z|01601|binding|INFO|329205f7-aac9-4c77-b1eb-b7af04c34038: Claiming fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.912 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], port_security=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe8d:692e/64', 'neutron:device_id': '03d40f4f-1bf3-4e1d-8844-bae4b32443cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=329205f7-aac9-4c77-b1eb-b7af04c34038) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.914 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 329205f7-aac9-4c77-b1eb-b7af04c34038 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d bound to our chassis#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.915 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88c42593-de32-4e23-b7f7-5cac507fe68d#033[00m
Nov 25 12:25:07 np0005535469 systemd-machined[216343]: New machine qemu-183-instance-00000095.
Nov 25 12:25:07 np0005535469 systemd-udevd[422738]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.928 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53afe16e-3504-4a80-b5e0-6b6349a9c54c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.929 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88c42593-d1 in ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.931 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88c42593-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.931 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbb61cc-04fd-4cbd-afb9-63cc2a24b759]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6327ce-2f34-48d9-8718-be67d4c8a93f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:07 np0005535469 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.942 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe575e9-4cec-475b-9837-f7e6bf30231b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:07 np0005535469 NetworkManager[48891]: <info>  [1764091507.9523] device (tap329205f7-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:25:07 np0005535469 NetworkManager[48891]: <info>  [1764091507.9538] device (tap329205f7-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:25:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:07Z|01602|binding|INFO|Setting lport 329205f7-aac9-4c77-b1eb-b7af04c34038 ovn-installed in OVS
Nov 25 12:25:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:07Z|01603|binding|INFO|Setting lport 329205f7-aac9-4c77-b1eb-b7af04c34038 up in Southbound
Nov 25 12:25:07 np0005535469 nova_compute[254092]: 2025-11-25 17:25:07.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:07 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[577eaad2-b5b9-4e0f-a832-6211993ece3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.005 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e082f5dd-b413-465e-9940-64b1a09d506f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a45030d-9cdb-4eec-8c26-4699b31dcac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 NetworkManager[48891]: <info>  [1764091508.0151] manager: (tap88c42593-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.054 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[99ad78ab-682a-4cbe-981e-b84e1f6edbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.058 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[909e30b8-7d9f-474d-b653-23a5dd24a5eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 NetworkManager[48891]: <info>  [1764091508.0905] device (tap88c42593-d0): carrier: link connected
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.098 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb8b48a-72ba-42a5-bb8a-8f65a43bb7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18f7ed6b-1588-4837-b48a-9addb1b07f07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422770, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[63198f45-81a4-4433-9a9e-5016074b811a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:e96e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796568, 'tstamp': 796568}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422771, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.162 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6accc09-8b11-4442-a91b-076e1cfbd1b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422772, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20474430-9f3f-45b2-91ca-ece60439f454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.214 254096 DEBUG nova.network.neutron [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated VIF entry in instance network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.215 254096 DEBUG nova.network.neutron [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.226 254096 DEBUG nova.compute.manager [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.227 254096 DEBUG oslo_concurrency.lockutils [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.227 254096 DEBUG oslo_concurrency.lockutils [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.229 254096 DEBUG oslo_concurrency.lockutils [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.230 254096 DEBUG nova.compute.manager [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Processing event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.233 254096 DEBUG oslo_concurrency.lockutils [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.288 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7299f0-e640-4a35-8347-90ed4bc932ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88c42593-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.293 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:08 np0005535469 NetworkManager[48891]: <info>  [1764091508.2944] manager: (tap88c42593-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Nov 25 12:25:08 np0005535469 kernel: tap88c42593-d0: entered promiscuous mode
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.295 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.296 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88c42593-d0, col_values=(('external_ids', {'iface-id': '10a05078-1fb1-4ddb-9ad4-b52f3a95063f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:08 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:08Z|01604|binding|INFO|Releasing lport 10a05078-1fb1-4ddb-9ad4-b52f3a95063f from this chassis (sb_readonly=0)
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.300 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.300 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88c42593-de32-4e23-b7f7-5cac507fe68d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88c42593-de32-4e23-b7f7-5cac507fe68d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.301 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a802b99d-2778-4008-bb50-c0380a2c2ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.302 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/88c42593-de32-4e23-b7f7-5cac507fe68d.pid.haproxy
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:25:08 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.303 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'env', 'PROCESS_TAG=haproxy-88c42593-de32-4e23-b7f7-5cac507fe68d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88c42593-de32-4e23-b7f7-5cac507fe68d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.731 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.732 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091508.7321813, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.732 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Started (Lifecycle Event)#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.736 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.740 254096 INFO nova.virt.libvirt.driver [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance spawned successfully.#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.740 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.757 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:08 np0005535469 podman[422846]: 2025-11-25 17:25:08.758275035 +0000 UTC m=+0.061049373 container create 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.771 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.777 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.778 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.779 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.779 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.779 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.780 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.810 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.811 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091508.7328572, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.811 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:25:08 np0005535469 systemd[1]: Started libpod-conmon-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb.scope.
Nov 25 12:25:08 np0005535469 podman[422846]: 2025-11-25 17:25:08.72427627 +0000 UTC m=+0.027050648 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.846 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.853 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091508.736073, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.853 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:25:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2aaa9402eb79ede98028373e6a92c83002a08f5e7749bcbd352c3d9be74b73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.859 254096 INFO nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 7.79 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.860 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:08 np0005535469 podman[422846]: 2025-11-25 17:25:08.870424688 +0000 UTC m=+0.173199046 container init 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.874 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:08 np0005535469 podman[422846]: 2025-11-25 17:25:08.878531989 +0000 UTC m=+0.181306347 container start 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.879 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:25:08 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : New worker (422867) forked
Nov 25 12:25:08 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : Loading success.
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.905 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.945 254096 INFO nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 8.74 seconds to build instance.#033[00m
Nov 25 12:25:08 np0005535469 nova_compute[254092]: 2025-11-25 17:25:08.964 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:25:09 np0005535469 nova_compute[254092]: 2025-11-25 17:25:09.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:09 np0005535469 nova_compute[254092]: 2025-11-25 17:25:09.500 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 25 12:25:09 np0005535469 nova_compute[254092]: 2025-11-25 17:25:09.501 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:09 np0005535469 nova_compute[254092]: 2025-11-25 17:25:09.502 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:09 np0005535469 nova_compute[254092]: 2025-11-25 17:25:09.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:25:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:25:10 np0005535469 nova_compute[254092]: 2025-11-25 17:25:10.318 254096 DEBUG nova.compute.manager [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:10 np0005535469 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG oslo_concurrency.lockutils [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:10 np0005535469 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG oslo_concurrency.lockutils [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:10 np0005535469 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG oslo_concurrency.lockutils [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:10 np0005535469 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG nova.compute.manager [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] No waiting events found dispatching network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:25:10 np0005535469 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 WARNING nova.compute.manager [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received unexpected event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:25:11 np0005535469 nova_compute[254092]: 2025-11-25 17:25:11.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 12:25:11 np0005535469 nova_compute[254092]: 2025-11-25 17:25:11.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 12:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:13.663 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:13.664 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:13.665 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:15 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:15Z|01605|binding|INFO|Releasing lport 10a05078-1fb1-4ddb-9ad4-b52f3a95063f from this chassis (sb_readonly=0)
Nov 25 12:25:15 np0005535469 NetworkManager[48891]: <info>  [1764091515.4802] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Nov 25 12:25:15 np0005535469 NetworkManager[48891]: <info>  [1764091515.4811] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Nov 25 12:25:15 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:15Z|01606|binding|INFO|Releasing lport 10a05078-1fb1-4ddb-9ad4-b52f3a95063f from this chassis (sb_readonly=0)
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.731 254096 DEBUG nova.compute.manager [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.732 254096 DEBUG nova.compute.manager [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing instance network info cache due to event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.733 254096 DEBUG oslo_concurrency.lockutils [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.734 254096 DEBUG oslo_concurrency.lockutils [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:25:15 np0005535469 nova_compute[254092]: 2025-11-25 17:25:15.734 254096 DEBUG nova.network.neutron [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:25:15 np0005535469 podman[422902]: 2025-11-25 17:25:15.869459036 +0000 UTC m=+0.060951651 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:25:15 np0005535469 podman[422901]: 2025-11-25 17:25:15.879216032 +0000 UTC m=+0.071363384 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:25:15 np0005535469 podman[422903]: 2025-11-25 17:25:15.911630023 +0000 UTC m=+0.095236233 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 12:25:16 np0005535469 nova_compute[254092]: 2025-11-25 17:25:16.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:16 np0005535469 nova_compute[254092]: 2025-11-25 17:25:16.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 394 KiB/s wr, 75 op/s
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 62726802-90c4-4322-b3b1-ac4710ef7ed2 does not exist
Nov 25 12:25:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev da264df7-a29b-4ba8-b19e-42288d873774 does not exist
Nov 25 12:25:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 76787b66-8543-4100-a347-f8b8099f673a does not exist
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:25:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:25:17 np0005535469 podman[423326]: 2025-11-25 17:25:17.984600588 +0000 UTC m=+0.049200720 container create 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:25:18 np0005535469 systemd[1]: Started libpod-conmon-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope.
Nov 25 12:25:18 np0005535469 podman[423326]: 2025-11-25 17:25:17.958729614 +0000 UTC m=+0.023329766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:25:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:18 np0005535469 podman[423326]: 2025-11-25 17:25:18.089758111 +0000 UTC m=+0.154358263 container init 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:25:18 np0005535469 podman[423326]: 2025-11-25 17:25:18.099178987 +0000 UTC m=+0.163779119 container start 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:25:18 np0005535469 podman[423326]: 2025-11-25 17:25:18.102650011 +0000 UTC m=+0.167250173 container attach 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:25:18 np0005535469 adoring_villani[423342]: 167 167
Nov 25 12:25:18 np0005535469 systemd[1]: libpod-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope: Deactivated successfully.
Nov 25 12:25:18 np0005535469 conmon[423342]: conmon 2b93f973026daab90b32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope/container/memory.events
Nov 25 12:25:18 np0005535469 podman[423326]: 2025-11-25 17:25:18.111188403 +0000 UTC m=+0.175788545 container died 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:25:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dfff5c4a80ff1f2e265c2c5a9df8674c06ce95e21780ea9b64e5b1c3c9a46dc4-merged.mount: Deactivated successfully.
Nov 25 12:25:18 np0005535469 podman[423326]: 2025-11-25 17:25:18.15439613 +0000 UTC m=+0.218996272 container remove 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:25:18 np0005535469 systemd[1]: libpod-conmon-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope: Deactivated successfully.
Nov 25 12:25:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:25:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:18 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:25:18 np0005535469 nova_compute[254092]: 2025-11-25 17:25:18.303 254096 DEBUG nova.network.neutron [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated VIF entry in instance network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 12:25:18 np0005535469 nova_compute[254092]: 2025-11-25 17:25:18.303 254096 DEBUG nova.network.neutron [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:25:18 np0005535469 nova_compute[254092]: 2025-11-25 17:25:18.319 254096 DEBUG oslo_concurrency.lockutils [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:25:18 np0005535469 podman[423364]: 2025-11-25 17:25:18.338316157 +0000 UTC m=+0.043787374 container create d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:25:18 np0005535469 systemd[1]: Started libpod-conmon-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope.
Nov 25 12:25:18 np0005535469 podman[423364]: 2025-11-25 17:25:18.321381935 +0000 UTC m=+0.026853172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:25:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:18 np0005535469 podman[423364]: 2025-11-25 17:25:18.454330454 +0000 UTC m=+0.159801701 container init d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:25:18 np0005535469 podman[423364]: 2025-11-25 17:25:18.463568375 +0000 UTC m=+0.169039612 container start d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:25:18 np0005535469 podman[423364]: 2025-11-25 17:25:18.467496203 +0000 UTC m=+0.172967440 container attach d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:25:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:25:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:19 np0005535469 charming_jang[423380]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:25:19 np0005535469 charming_jang[423380]: --> relative data size: 1.0
Nov 25 12:25:19 np0005535469 charming_jang[423380]: --> All data devices are unavailable
Nov 25 12:25:19 np0005535469 systemd[1]: libpod-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope: Deactivated successfully.
Nov 25 12:25:19 np0005535469 podman[423364]: 2025-11-25 17:25:19.586751137 +0000 UTC m=+1.292222424 container died d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:25:19 np0005535469 systemd[1]: libpod-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope: Consumed 1.041s CPU time.
Nov 25 12:25:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa-merged.mount: Deactivated successfully.
Nov 25 12:25:19 np0005535469 podman[423364]: 2025-11-25 17:25:19.665163552 +0000 UTC m=+1.370634799 container remove d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:25:19 np0005535469 systemd[1]: libpod-conmon-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope: Deactivated successfully.
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.247925) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520248038, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1784, "num_deletes": 251, "total_data_size": 2885057, "memory_usage": 2930352, "flush_reason": "Manual Compaction"}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520268332, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 2834984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61765, "largest_seqno": 63548, "table_properties": {"data_size": 2826763, "index_size": 5034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16813, "raw_average_key_size": 20, "raw_value_size": 2810406, "raw_average_value_size": 3357, "num_data_blocks": 224, "num_entries": 837, "num_filter_entries": 837, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091329, "oldest_key_time": 1764091329, "file_creation_time": 1764091520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 20464 microseconds, and 12047 cpu microseconds.
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.268402) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 2834984 bytes OK
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.268435) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.270240) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.270267) EVENT_LOG_v1 {"time_micros": 1764091520270259, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.270297) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2877430, prev total WAL file size 2877430, number of live WAL files 2.
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.272165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(2768KB)], [143(10149KB)]
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520272224, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13227603, "oldest_snapshot_seqno": -1}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8332 keys, 11488264 bytes, temperature: kUnknown
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520345142, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 11488264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11433003, "index_size": 33324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 218159, "raw_average_key_size": 26, "raw_value_size": 11284780, "raw_average_value_size": 1354, "num_data_blocks": 1299, "num_entries": 8332, "num_filter_entries": 8332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.345578) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 11488264 bytes
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.346910) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.1 rd, 157.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.7) write-amplify(4.1) OK, records in: 8846, records dropped: 514 output_compression: NoCompression
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.346970) EVENT_LOG_v1 {"time_micros": 1764091520346951, "job": 88, "event": "compaction_finished", "compaction_time_micros": 73049, "compaction_time_cpu_micros": 45288, "output_level": 6, "num_output_files": 1, "total_output_size": 11488264, "num_input_records": 8846, "num_output_records": 8332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520347721, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520349265, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.272016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:20 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.454921388 +0000 UTC m=+0.054966737 container create 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:25:20 np0005535469 systemd[1]: Started libpod-conmon-1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b.scope.
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.429399174 +0000 UTC m=+0.029444553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:25:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.547767585 +0000 UTC m=+0.147812944 container init 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.556971987 +0000 UTC m=+0.157017336 container start 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.560214895 +0000 UTC m=+0.160260244 container attach 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:25:20 np0005535469 quirky_mahavira[423578]: 167 167
Nov 25 12:25:20 np0005535469 systemd[1]: libpod-1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b.scope: Deactivated successfully.
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.562671142 +0000 UTC m=+0.162716481 container died 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:25:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-df3e55b07c47c0aa3a58accee35009433053e48301956abd7e5eee6c6e446f91-merged.mount: Deactivated successfully.
Nov 25 12:25:20 np0005535469 podman[423561]: 2025-11-25 17:25:20.603232835 +0000 UTC m=+0.203278194 container remove 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:25:20 np0005535469 systemd[1]: libpod-conmon-1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b.scope: Deactivated successfully.
Nov 25 12:25:20 np0005535469 podman[423603]: 2025-11-25 17:25:20.772778211 +0000 UTC m=+0.044259426 container create 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:25:20 np0005535469 systemd[1]: Started libpod-conmon-24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee.scope.
Nov 25 12:25:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:20 np0005535469 podman[423603]: 2025-11-25 17:25:20.754801381 +0000 UTC m=+0.026282626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:25:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:20 np0005535469 podman[423603]: 2025-11-25 17:25:20.86387645 +0000 UTC m=+0.135357695 container init 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:25:20 np0005535469 podman[423603]: 2025-11-25 17:25:20.872130034 +0000 UTC m=+0.143611249 container start 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:25:20 np0005535469 podman[423603]: 2025-11-25 17:25:20.876593606 +0000 UTC m=+0.148074821 container attach 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:25:21 np0005535469 nova_compute[254092]: 2025-11-25 17:25:21.025 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 422 KiB/s wr, 81 op/s
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]: {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:    "0": [
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:        {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "devices": [
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "/dev/loop3"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            ],
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_name": "ceph_lv0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_size": "21470642176",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "name": "ceph_lv0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "tags": {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cluster_name": "ceph",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.crush_device_class": "",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.encrypted": "0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osd_id": "0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.type": "block",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.vdo": "0"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            },
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "type": "block",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "vg_name": "ceph_vg0"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:        }
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:    ],
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:    "1": [
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:        {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "devices": [
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "/dev/loop4"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            ],
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_name": "ceph_lv1",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_size": "21470642176",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "name": "ceph_lv1",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "tags": {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cluster_name": "ceph",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.crush_device_class": "",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.encrypted": "0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osd_id": "1",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.type": "block",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.vdo": "0"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            },
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "type": "block",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "vg_name": "ceph_vg1"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:        }
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:    ],
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:    "2": [
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:        {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "devices": [
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "/dev/loop5"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            ],
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_name": "ceph_lv2",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_size": "21470642176",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "name": "ceph_lv2",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "tags": {
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.cluster_name": "ceph",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.crush_device_class": "",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.encrypted": "0",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osd_id": "2",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.type": "block",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:                "ceph.vdo": "0"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            },
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "type": "block",
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:            "vg_name": "ceph_vg2"
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:        }
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]:    ]
Nov 25 12:25:21 np0005535469 lucid_blackburn[423620]: }
Nov 25 12:25:21 np0005535469 systemd[1]: libpod-24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee.scope: Deactivated successfully.
Nov 25 12:25:21 np0005535469 podman[423603]: 2025-11-25 17:25:21.712322034 +0000 UTC m=+0.983803289 container died 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:25:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678-merged.mount: Deactivated successfully.
Nov 25 12:25:21 np0005535469 nova_compute[254092]: 2025-11-25 17:25:21.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:21 np0005535469 podman[423603]: 2025-11-25 17:25:21.793078372 +0000 UTC m=+1.064559597 container remove 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:25:21 np0005535469 systemd[1]: libpod-conmon-24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee.scope: Deactivated successfully.
Nov 25 12:25:21 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:21Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:69:2e 10.100.0.13
Nov 25 12:25:21 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:21Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:69:2e 10.100.0.13
Nov 25 12:25:22 np0005535469 podman[423784]: 2025-11-25 17:25:22.643487009 +0000 UTC m=+0.064298231 container create 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:25:22 np0005535469 systemd[1]: Started libpod-conmon-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope.
Nov 25 12:25:22 np0005535469 podman[423784]: 2025-11-25 17:25:22.612742422 +0000 UTC m=+0.033553704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:25:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:22 np0005535469 podman[423784]: 2025-11-25 17:25:22.742042763 +0000 UTC m=+0.162853985 container init 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:25:22 np0005535469 podman[423784]: 2025-11-25 17:25:22.755139028 +0000 UTC m=+0.175950220 container start 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:25:22 np0005535469 podman[423784]: 2025-11-25 17:25:22.759173119 +0000 UTC m=+0.179984351 container attach 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:25:22 np0005535469 nostalgic_chatterjee[423800]: 167 167
Nov 25 12:25:22 np0005535469 systemd[1]: libpod-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope: Deactivated successfully.
Nov 25 12:25:22 np0005535469 conmon[423800]: conmon 6d6436ede7eac6b4c310 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope/container/memory.events
Nov 25 12:25:22 np0005535469 podman[423805]: 2025-11-25 17:25:22.828631948 +0000 UTC m=+0.046094465 container died 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:25:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-93d50950902a9e0a3d5de0d32e5128817906869367e20d3cab4a4345aa52a8ea-merged.mount: Deactivated successfully.
Nov 25 12:25:22 np0005535469 podman[423805]: 2025-11-25 17:25:22.884370716 +0000 UTC m=+0.101833173 container remove 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:25:22 np0005535469 systemd[1]: libpod-conmon-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope: Deactivated successfully.
Nov 25 12:25:23 np0005535469 podman[423827]: 2025-11-25 17:25:23.139883081 +0000 UTC m=+0.058635637 container create 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:25:23 np0005535469 systemd[1]: Started libpod-conmon-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope.
Nov 25 12:25:23 np0005535469 podman[423827]: 2025-11-25 17:25:23.111052916 +0000 UTC m=+0.029805532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:25:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 410 KiB/s wr, 26 op/s
Nov 25 12:25:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:25:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:25:23 np0005535469 podman[423827]: 2025-11-25 17:25:23.241878207 +0000 UTC m=+0.160630763 container init 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:25:23 np0005535469 podman[423827]: 2025-11-25 17:25:23.252120616 +0000 UTC m=+0.170873152 container start 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:25:23 np0005535469 podman[423827]: 2025-11-25 17:25:23.255511488 +0000 UTC m=+0.174264004 container attach 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]: {
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "osd_id": 1,
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "type": "bluestore"
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:    },
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "osd_id": 2,
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "type": "bluestore"
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:    },
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "osd_id": 0,
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:        "type": "bluestore"
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]:    }
Nov 25 12:25:24 np0005535469 ecstatic_neumann[423843]: }
Nov 25 12:25:24 np0005535469 systemd[1]: libpod-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope: Deactivated successfully.
Nov 25 12:25:24 np0005535469 systemd[1]: libpod-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope: Consumed 1.160s CPU time.
Nov 25 12:25:24 np0005535469 podman[423827]: 2025-11-25 17:25:24.405372906 +0000 UTC m=+1.324125442 container died 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:25:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863-merged.mount: Deactivated successfully.
Nov 25 12:25:24 np0005535469 podman[423827]: 2025-11-25 17:25:24.46943963 +0000 UTC m=+1.388192156 container remove 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:25:24 np0005535469 systemd[1]: libpod-conmon-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope: Deactivated successfully.
Nov 25 12:25:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:25:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:25:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fc391700-7c3c-4f5f-85be-7edc32d6d5dd does not exist
Nov 25 12:25:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7276d3de-8df8-4ecf-9911-be8efac50051 does not exist
Nov 25 12:25:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 929 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 25 12:25:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:25:26 np0005535469 nova_compute[254092]: 2025-11-25 17:25:26.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:26 np0005535469 nova_compute[254092]: 2025-11-25 17:25:26.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 12:25:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.525188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529525259, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 333, "num_deletes": 250, "total_data_size": 173340, "memory_usage": 180392, "flush_reason": "Manual Compaction"}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529528459, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 171718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63549, "largest_seqno": 63881, "table_properties": {"data_size": 169548, "index_size": 333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5897, "raw_average_key_size": 20, "raw_value_size": 165273, "raw_average_value_size": 571, "num_data_blocks": 14, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091521, "oldest_key_time": 1764091521, "file_creation_time": 1764091529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 3302 microseconds, and 1213 cpu microseconds.
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.528501) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 171718 bytes OK
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.528520) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530007) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530030) EVENT_LOG_v1 {"time_micros": 1764091529530022, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530054) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 171047, prev total WAL file size 171047, number of live WAL files 2.
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373631' seq:0, type:0; will stop at (end)
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(167KB)], [146(10MB)]
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529530587, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11659982, "oldest_snapshot_seqno": -1}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8111 keys, 8360841 bytes, temperature: kUnknown
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529569122, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 8360841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8311883, "index_size": 27597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 213738, "raw_average_key_size": 26, "raw_value_size": 8172268, "raw_average_value_size": 1007, "num_data_blocks": 1060, "num_entries": 8111, "num_filter_entries": 8111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.569465) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 8360841 bytes
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.570948) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 301.5 rd, 216.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(116.6) write-amplify(48.7) OK, records in: 8621, records dropped: 510 output_compression: NoCompression
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.570982) EVENT_LOG_v1 {"time_micros": 1764091529570968, "job": 90, "event": "compaction_finished", "compaction_time_micros": 38678, "compaction_time_cpu_micros": 24565, "output_level": 6, "num_output_files": 1, "total_output_size": 8360841, "num_input_records": 8621, "num_output_records": 8111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529571467, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529575794, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:25:31 np0005535469 nova_compute[254092]: 2025-11-25 17:25:31.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 12:25:31 np0005535469 nova_compute[254092]: 2025-11-25 17:25:31.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.7 MiB/s wr, 58 op/s
Nov 25 12:25:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.7 MiB/s wr, 58 op/s
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.551 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.551 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.564 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.627 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.628 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.636 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.637 254096 INFO nova.compute.claims [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.762 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:36 np0005535469 nova_compute[254092]: 2025-11-25 17:25:36.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:25:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706073263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:25:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 59 KiB/s wr, 6 op/s
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.222 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.230 254096 DEBUG nova.compute.provider_tree [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.245 254096 DEBUG nova.scheduler.client.report [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.267 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.268 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.316 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.317 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.338 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.368 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.478 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.480 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.481 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Creating image(s)
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.516 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.561 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.620 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.629 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.752 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.754 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.755 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.756 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.793 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.798 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 88446199-25eb-4303-8df1-334acb721afc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:25:37 np0005535469 nova_compute[254092]: 2025-11-25 17:25:37.851 254096 DEBUG nova.policy [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.156 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 88446199-25eb-4303-8df1-334acb721afc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.240 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 88446199-25eb-4303-8df1-334acb721afc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.376 254096 DEBUG nova.objects.instance [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 88446199-25eb-4303-8df1-334acb721afc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.391 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.391 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Ensure instance console log exists: /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.392 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.392 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.392 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:25:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:38.549 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:25:38 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:38.551 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 12:25:38 np0005535469 nova_compute[254092]: 2025-11-25 17:25:38.644 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Successfully created port: e42bce82-3ee1-4272-b032-6bf70412fe94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 12:25:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.409 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Successfully updated port: e42bce82-3ee1-4272-b032-6bf70412fe94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.426 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.427 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.427 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.503 254096 DEBUG nova.compute.manager [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.504 254096 DEBUG nova.compute.manager [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing instance network info cache due to event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.505 254096 DEBUG oslo_concurrency.lockutils [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:25:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:39 np0005535469 nova_compute[254092]: 2025-11-25 17:25:39.834 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:25:40
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'backups']
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:25:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:25:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.550 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.569 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.569 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance network_info: |[{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.570 254096 DEBUG oslo_concurrency.lockutils [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.570 254096 DEBUG nova.network.neutron [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.574 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start _get_guest_xml network_info=[{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.579 254096 WARNING nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.585 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.586 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.593 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.594 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.595 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.595 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.595 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.597 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.597 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.597 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.598 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.598 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.601 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:25:41 np0005535469 nova_compute[254092]: 2025-11-25 17:25:41.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:25:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:25:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/780226783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.048 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.088 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.095 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:25:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304960193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.660 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.663 254096 DEBUG nova.virt.libvirt.vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1061700885',display_name='tempest-TestGettingAddress-server-1061700885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1061700885',id=150,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-d90lz9sr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:37Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=88446199-25eb-4303-8df1-334acb721afc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.664 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.666 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.668 254096 DEBUG nova.objects.instance [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88446199-25eb-4303-8df1-334acb721afc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.695 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <uuid>88446199-25eb-4303-8df1-334acb721afc</uuid>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <name>instance-00000096</name>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestGettingAddress-server-1061700885</nova:name>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:25:41</nova:creationTime>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <nova:port uuid="e42bce82-3ee1-4272-b032-6bf70412fe94">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2d:491a" ipVersion="6"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <entry name="serial">88446199-25eb-4303-8df1-334acb721afc</entry>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <entry name="uuid">88446199-25eb-4303-8df1-334acb721afc</entry>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/88446199-25eb-4303-8df1-334acb721afc_disk">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/88446199-25eb-4303-8df1-334acb721afc_disk.config">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:2d:49:1a"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <target dev="tape42bce82-3e"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/console.log" append="off"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:25:42 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:25:42 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:25:42 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:25:42 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.696 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Preparing to wait for external event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.697 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.698 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.698 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.699 254096 DEBUG nova.virt.libvirt.vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1061700885',display_name='tempest-TestGettingAddress-server-1061700885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1061700885',id=150,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-d90lz9sr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:37Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=88446199-25eb-4303-8df1-334acb721afc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.700 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.701 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.702 254096 DEBUG os_vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.704 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.704 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape42bce82-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.710 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape42bce82-3e, col_values=(('external_ids', {'iface-id': 'e42bce82-3ee1-4272-b032-6bf70412fe94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:49:1a', 'vm-uuid': '88446199-25eb-4303-8df1-334acb721afc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:42 np0005535469 NetworkManager[48891]: <info>  [1764091542.7651] manager: (tape42bce82-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.768 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.774 254096 INFO os_vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e')#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.848 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.849 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.851 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:2d:49:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.852 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Using config drive#033[00m
Nov 25 12:25:42 np0005535469 nova_compute[254092]: 2025-11-25 17:25:42.890 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.343 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Creating config drive at /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.354 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsra3kles execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.510 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsra3kles" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.555 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.558 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config 88446199-25eb-4303-8df1-334acb721afc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.729 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config 88446199-25eb-4303-8df1-334acb721afc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.730 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deleting local config drive /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config because it was imported into RBD.#033[00m
Nov 25 12:25:43 np0005535469 kernel: tape42bce82-3e: entered promiscuous mode
Nov 25 12:25:43 np0005535469 NetworkManager[48891]: <info>  [1764091543.8044] manager: (tape42bce82-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/671)
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.823 254096 DEBUG nova.network.neutron [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updated VIF entry in instance network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.825 254096 DEBUG nova.network.neutron [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:43Z|01607|binding|INFO|Claiming lport e42bce82-3ee1-4272-b032-6bf70412fe94 for this chassis.
Nov 25 12:25:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:43Z|01608|binding|INFO|e42bce82-3ee1-4272-b032-6bf70412fe94: Claiming fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a
Nov 25 12:25:43 np0005535469 systemd-udevd[424263]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.847 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], port_security=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe2d:491a/64', 'neutron:device_id': '88446199-25eb-4303-8df1-334acb721afc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e42bce82-3ee1-4272-b032-6bf70412fe94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.849 254096 DEBUG oslo_concurrency.lockutils [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.848 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e42bce82-3ee1-4272-b032-6bf70412fe94 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d bound to our chassis#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.850 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88c42593-de32-4e23-b7f7-5cac507fe68d#033[00m
Nov 25 12:25:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:43Z|01609|binding|INFO|Setting lport e42bce82-3ee1-4272-b032-6bf70412fe94 ovn-installed in OVS
Nov 25 12:25:43 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:43Z|01610|binding|INFO|Setting lport e42bce82-3ee1-4272-b032-6bf70412fe94 up in Southbound
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:43 np0005535469 NetworkManager[48891]: <info>  [1764091543.8645] device (tape42bce82-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:25:43 np0005535469 NetworkManager[48891]: <info>  [1764091543.8656] device (tape42bce82-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:25:43 np0005535469 systemd-machined[216343]: New machine qemu-184-instance-00000096.
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.870 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39a33e3a-cd1d-4e7d-a641-b6a4fe82d4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:43 np0005535469 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.906 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccc9397-fda4-4478-a489-5f58371571e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.910 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d03f937b-fb4c-45a8-9797-c94490975a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.935 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7200018d-8ac2-457f-ad38-e15d07a59c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[83d57320-9405-4643-b327-d90f3c2ea229]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424279, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.974 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddab87c1-2fa9-45e7-83e5-37756076fcbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796583, 'tstamp': 796583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424281, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796586, 'tstamp': 796586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424281, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88c42593-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88c42593-d0, col_values=(('external_ids', {'iface-id': '10a05078-1fb1-4ddb-9ad4-b52f3a95063f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:43 np0005535469 nova_compute[254092]: 2025-11-25 17:25:43.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:43 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:25:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.572 254096 DEBUG nova.compute.manager [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.572 254096 DEBUG oslo_concurrency.lockutils [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.573 254096 DEBUG oslo_concurrency.lockutils [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.573 254096 DEBUG oslo_concurrency.lockutils [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.574 254096 DEBUG nova.compute.manager [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Processing event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.649 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091544.648669, 88446199-25eb-4303-8df1-334acb721afc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.649 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Started (Lifecycle Event)#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.651 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.655 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.658 254096 INFO nova.virt.libvirt.driver [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance spawned successfully.#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.658 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.680 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.684 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.685 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.685 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.686 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.686 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.686 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.691 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.713 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.713 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091544.6495414, 88446199-25eb-4303-8df1-334acb721afc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.714 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.737 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.741 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091544.654381, 88446199-25eb-4303-8df1-334acb721afc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.741 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.746 254096 INFO nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 7.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.747 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.756 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.759 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.779 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.805 254096 INFO nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 8.20 seconds to build instance.#033[00m
Nov 25 12:25:44 np0005535469 nova_compute[254092]: 2025-11-25 17:25:44.821 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:46 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:25:46.554 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.635 254096 DEBUG nova.compute.manager [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.635 254096 DEBUG oslo_concurrency.lockutils [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 DEBUG oslo_concurrency.lockutils [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 DEBUG oslo_concurrency.lockutils [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 DEBUG nova.compute.manager [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] No waiting events found dispatching network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 WARNING nova.compute.manager [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received unexpected event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 for instance with vm_state active and task_state None.#033[00m
Nov 25 12:25:46 np0005535469 podman[424325]: 2025-11-25 17:25:46.659531205 +0000 UTC m=+0.063168490 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 12:25:46 np0005535469 podman[424324]: 2025-11-25 17:25:46.665745834 +0000 UTC m=+0.071841636 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 12:25:46 np0005535469 podman[424326]: 2025-11-25 17:25:46.694582369 +0000 UTC m=+0.096683173 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:25:46 np0005535469 nova_compute[254092]: 2025-11-25 17:25:46.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 12:25:47 np0005535469 nova_compute[254092]: 2025-11-25 17:25:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:47 np0005535469 nova_compute[254092]: 2025-11-25 17:25:47.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:48 np0005535469 nova_compute[254092]: 2025-11-25 17:25:48.748 254096 DEBUG nova.compute.manager [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:25:48 np0005535469 nova_compute[254092]: 2025-11-25 17:25:48.748 254096 DEBUG nova.compute.manager [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing instance network info cache due to event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:25:48 np0005535469 nova_compute[254092]: 2025-11-25 17:25:48.749 254096 DEBUG oslo_concurrency.lockutils [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:25:48 np0005535469 nova_compute[254092]: 2025-11-25 17:25:48.749 254096 DEBUG oslo_concurrency.lockutils [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:25:48 np0005535469 nova_compute[254092]: 2025-11-25 17:25:48.749 254096 DEBUG nova.network.neutron [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:25:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 12:25:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.560 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.561 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.561 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.562 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.562 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.620 254096 DEBUG nova.network.neutron [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updated VIF entry in instance network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.621 254096 DEBUG nova.network.neutron [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:25:50 np0005535469 nova_compute[254092]: 2025-11-25 17:25:50.650 254096 DEBUG oslo_concurrency.lockutils [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:25:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:25:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809576172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.061 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.147 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.148 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.154 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.154 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.328 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.329 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3247MB free_disk=59.921836853027344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.329 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.330 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 03d40f4f-1bf3-4e1d-8844-bae4b32443cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 88446199-25eb-4303-8df1-334acb721afc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.621 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.691 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.692 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.709 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.733 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:51 np0005535469 nova_compute[254092]: 2025-11-25 17:25:51.810 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011076229488181286 of space, bias 1.0, pg target 0.3322868846454386 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:25:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:25:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:25:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520429479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:25:52 np0005535469 nova_compute[254092]: 2025-11-25 17:25:52.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:25:52 np0005535469 nova_compute[254092]: 2025-11-25 17:25:52.260 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:25:52 np0005535469 nova_compute[254092]: 2025-11-25 17:25:52.280 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:25:52 np0005535469 nova_compute[254092]: 2025-11-25 17:25:52.300 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:25:52 np0005535469 nova_compute[254092]: 2025-11-25 17:25:52.301 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:25:52 np0005535469 nova_compute[254092]: 2025-11-25 17:25:52.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:25:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:25:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Nov 25 12:25:55 np0005535469 nova_compute[254092]: 2025-11-25 17:25:55.300 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:55 np0005535469 nova_compute[254092]: 2025-11-25 17:25:55.302 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:55 np0005535469 nova_compute[254092]: 2025-11-25 17:25:55.302 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:25:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:25:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266833940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:25:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:25:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266833940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:25:55 np0005535469 nova_compute[254092]: 2025-11-25 17:25:55.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:56 np0005535469 nova_compute[254092]: 2025-11-25 17:25:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:25:56 np0005535469 nova_compute[254092]: 2025-11-25 17:25:56.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:25:56 np0005535469 nova_compute[254092]: 2025-11-25 17:25:56.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:25:56 np0005535469 nova_compute[254092]: 2025-11-25 17:25:56.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 66 op/s
Nov 25 12:25:57 np0005535469 nova_compute[254092]: 2025-11-25 17:25:57.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:25:57 np0005535469 nova_compute[254092]: 2025-11-25 17:25:57.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:25:57 np0005535469 nova_compute[254092]: 2025-11-25 17:25:57.281 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:25:57 np0005535469 nova_compute[254092]: 2025-11-25 17:25:57.281 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:25:57 np0005535469 nova_compute[254092]: 2025-11-25 17:25:57.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:25:58 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:58Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:49:1a 10.100.0.3
Nov 25 12:25:58 np0005535469 ovn_controller[153477]: 2025-11-25T17:25:58Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:49:1a 10.100.0.3
Nov 25 12:25:58 np0005535469 nova_compute[254092]: 2025-11-25 17:25:58.706 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:25:58 np0005535469 nova_compute[254092]: 2025-11-25 17:25:58.725 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:25:58 np0005535469 nova_compute[254092]: 2025-11-25 17:25:58.725 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:25:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 60 op/s
Nov 25 12:25:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 12:26:01 np0005535469 nova_compute[254092]: 2025-11-25 17:26:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:01 np0005535469 nova_compute[254092]: 2025-11-25 17:26:01.523 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:01 np0005535469 nova_compute[254092]: 2025-11-25 17:26:01.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:02 np0005535469 nova_compute[254092]: 2025-11-25 17:26:02.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:26:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG nova.compute.manager [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG nova.compute.manager [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing instance network info cache due to event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG oslo_concurrency.lockutils [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG oslo_concurrency.lockutils [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.051 254096 DEBUG nova.network.neutron [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.122 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.123 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.124 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.124 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.125 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.127 254096 INFO nova.compute.manager [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Terminating instance#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.130 254096 DEBUG nova.compute.manager [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:26:06 np0005535469 kernel: tape42bce82-3e (unregistering): left promiscuous mode
Nov 25 12:26:06 np0005535469 NetworkManager[48891]: <info>  [1764091566.1908] device (tape42bce82-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:26:06 np0005535469 ovn_controller[153477]: 2025-11-25T17:26:06Z|01611|binding|INFO|Releasing lport e42bce82-3ee1-4272-b032-6bf70412fe94 from this chassis (sb_readonly=0)
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.198 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 ovn_controller[153477]: 2025-11-25T17:26:06Z|01612|binding|INFO|Setting lport e42bce82-3ee1-4272-b032-6bf70412fe94 down in Southbound
Nov 25 12:26:06 np0005535469 ovn_controller[153477]: 2025-11-25T17:26:06Z|01613|binding|INFO|Removing iface tape42bce82-3e ovn-installed in OVS
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.208 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], port_security=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe2d:491a/64', 'neutron:device_id': '88446199-25eb-4303-8df1-334acb721afc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e42bce82-3ee1-4272-b032-6bf70412fe94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.210 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e42bce82-3ee1-4272-b032-6bf70412fe94 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d unbound from our chassis#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.210 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88c42593-de32-4e23-b7f7-5cac507fe68d#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70e6b545-42dd-4380-918c-693cfe1c4368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:06 np0005535469 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 25 12:26:06 np0005535469 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 14.231s CPU time.
Nov 25 12:26:06 np0005535469 systemd-machined[216343]: Machine qemu-184-instance-00000096 terminated.
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.266 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abda95db-9aaf-468a-83e8-e14a3e7f1254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.272 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c293111-6757-4b9f-996e-73090b60586f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.309 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7e55a6c5-2915-4216-b2b2-b449ecc18bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.329 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f39e5b-d3c2-4e50-a86d-cc1572b8d885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424444, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.350 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b514f167-5b3b-4a2a-84a3-9e0b4db54e1d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796583, 'tstamp': 796583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424445, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796586, 'tstamp': 796586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424445, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.354 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.379 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88c42593-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88c42593-d0, col_values=(('external_ids', {'iface-id': '10a05078-1fb1-4ddb-9ad4-b52f3a95063f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:06 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.393 254096 INFO nova.virt.libvirt.driver [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance destroyed successfully.#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.394 254096 DEBUG nova.objects.instance [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 88446199-25eb-4303-8df1-334acb721afc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.404 254096 DEBUG nova.virt.libvirt.vif [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1061700885',display_name='tempest-TestGettingAddress-server-1061700885',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1061700885',id=150,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:25:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-d90lz9sr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:25:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=88446199-25eb-4303-8df1-334acb721afc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.404 254096 DEBUG nova.network.os_vif_util [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.406 254096 DEBUG nova.network.os_vif_util [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.407 254096 DEBUG os_vif [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.409 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape42bce82-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.415 254096 INFO os_vif [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e')#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.804 254096 INFO nova.virt.libvirt.driver [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deleting instance files /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc_del#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.806 254096 INFO nova.virt.libvirt.driver [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deletion of /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc_del complete#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.882 254096 INFO nova.compute.manager [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.884 254096 DEBUG oslo.service.loopingcall [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.884 254096 DEBUG nova.compute.manager [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:26:06 np0005535469 nova_compute[254092]: 2025-11-25 17:26:06.885 254096 DEBUG nova.network.neutron [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:26:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.519 254096 DEBUG nova.network.neutron [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updated VIF entry in instance network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.520 254096 DEBUG nova.network.neutron [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.539 254096 DEBUG nova.network.neutron [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.542 254096 DEBUG oslo_concurrency.lockutils [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.563 254096 INFO nova.compute.manager [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.596 254096 DEBUG nova.compute.manager [req-85d1dc0b-ffa7-480a-9988-72eb835818fc req-4339aea6-0e7d-4416-99a4-6f67e7e79be1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-deleted-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.638 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.639 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:07 np0005535469 nova_compute[254092]: 2025-11-25 17:26:07.729 254096 DEBUG oslo_concurrency.processutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.124 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-unplugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.125 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.126 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.127 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.127 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] No waiting events found dispatching network-vif-unplugged-e42bce82-3ee1-4272-b032-6bf70412fe94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.127 254096 WARNING nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received unexpected event network-vif-unplugged-e42bce82-3ee1-4272-b032-6bf70412fe94 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.128 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.128 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.129 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.129 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.129 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] No waiting events found dispatching network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.130 254096 WARNING nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received unexpected event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:26:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:26:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900996606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.204 254096 DEBUG oslo_concurrency.processutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.214 254096 DEBUG nova.compute.provider_tree [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.239 254096 DEBUG nova.scheduler.client.report [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.266 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.296 254096 INFO nova.scheduler.client.report [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 88446199-25eb-4303-8df1-334acb721afc#033[00m
Nov 25 12:26:08 np0005535469 nova_compute[254092]: 2025-11-25 17:26:08.365 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 12:26:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.857 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.857 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.858 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.858 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.858 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.860 254096 INFO nova.compute.manager [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Terminating instance#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.861 254096 DEBUG nova.compute.manager [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:26:09 np0005535469 kernel: tap329205f7-aa (unregistering): left promiscuous mode
Nov 25 12:26:09 np0005535469 NetworkManager[48891]: <info>  [1764091569.9232] device (tap329205f7-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:26:09Z|01614|binding|INFO|Releasing lport 329205f7-aac9-4c77-b1eb-b7af04c34038 from this chassis (sb_readonly=0)
Nov 25 12:26:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:26:09Z|01615|binding|INFO|Setting lport 329205f7-aac9-4c77-b1eb-b7af04c34038 down in Southbound
Nov 25 12:26:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:26:09Z|01616|binding|INFO|Removing iface tap329205f7-aa ovn-installed in OVS
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.973 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], port_security=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe8d:692e/64', 'neutron:device_id': '03d40f4f-1bf3-4e1d-8844-bae4b32443cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=329205f7-aac9-4c77-b1eb-b7af04c34038) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:26:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.975 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 329205f7-aac9-4c77-b1eb-b7af04c34038 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d unbound from our chassis#033[00m
Nov 25 12:26:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.977 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88c42593-de32-4e23-b7f7-5cac507fe68d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:26:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f78e335b-7213-4047-9bd0-72e3007359a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.979 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d namespace which is not needed anymore#033[00m
Nov 25 12:26:09 np0005535469 nova_compute[254092]: 2025-11-25 17:26:09.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 25 12:26:10 np0005535469 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 16.582s CPU time.
Nov 25 12:26:10 np0005535469 systemd-machined[216343]: Machine qemu-183-instance-00000095 terminated.
Nov 25 12:26:10 np0005535469 kernel: tap329205f7-aa: entered promiscuous mode
Nov 25 12:26:10 np0005535469 kernel: tap329205f7-aa (unregistering): left promiscuous mode
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.099 254096 INFO nova.virt.libvirt.driver [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance destroyed successfully.#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.100 254096 DEBUG nova.objects.instance [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:26:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.116 254096 DEBUG nova.virt.libvirt.vif [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1562740359',display_name='tempest-TestGettingAddress-server-1562740359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1562740359',id=149,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:25:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-01dpdy1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:25:08Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=03d40f4f-1bf3-4e1d-8844-bae4b32443cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.116 254096 DEBUG nova.network.os_vif_util [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.118 254096 DEBUG nova.network.os_vif_util [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.118 254096 DEBUG os_vif [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.122 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap329205f7-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.126 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.129 254096 INFO os_vif [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa')#033[00m
Nov 25 12:26:10 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : haproxy version is 2.8.14-c23fe91
Nov 25 12:26:10 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : path to executable is /usr/sbin/haproxy
Nov 25 12:26:10 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [WARNING]  (422865) : Exiting Master process...
Nov 25 12:26:10 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [WARNING]  (422865) : Exiting Master process...
Nov 25 12:26:10 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [ALERT]    (422865) : Current worker (422867) exited with code 143 (Terminated)
Nov 25 12:26:10 np0005535469 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [WARNING]  (422865) : All workers exited. Exiting... (0)
Nov 25 12:26:10 np0005535469 systemd[1]: libpod-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb.scope: Deactivated successfully.
Nov 25 12:26:10 np0005535469 podman[424533]: 2025-11-25 17:26:10.170522557 +0000 UTC m=+0.056071697 container died 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:26:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb-userdata-shm.mount: Deactivated successfully.
Nov 25 12:26:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6a2aaa9402eb79ede98028373e6a92c83002a08f5e7749bcbd352c3d9be74b73-merged.mount: Deactivated successfully.
Nov 25 12:26:10 np0005535469 podman[424533]: 2025-11-25 17:26:10.214811373 +0000 UTC m=+0.100360513 container cleanup 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.230 254096 DEBUG nova.compute.manager [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-unplugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:10 np0005535469 systemd[1]: libpod-conmon-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb.scope: Deactivated successfully.
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.230 254096 DEBUG oslo_concurrency.lockutils [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.230 254096 DEBUG oslo_concurrency.lockutils [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.231 254096 DEBUG oslo_concurrency.lockutils [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.231 254096 DEBUG nova.compute.manager [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] No waiting events found dispatching network-vif-unplugged-329205f7-aac9-4c77-b1eb-b7af04c34038 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.231 254096 DEBUG nova.compute.manager [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-unplugged-329205f7-aac9-4c77-b1eb-b7af04c34038 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:26:10 np0005535469 podman[424579]: 2025-11-25 17:26:10.291592533 +0000 UTC m=+0.048378588 container remove 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.303 254096 DEBUG nova.compute.manager [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG nova.compute.manager [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing instance network info cache due to event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG oslo_concurrency.lockutils [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG oslo_concurrency.lockutils [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG nova.network.neutron [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.304 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c565c72-93b6-4fe4-91bf-01b249e4847b]: (4, ('Tue Nov 25 05:26:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d (5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb)\n5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb\nTue Nov 25 05:26:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d (5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb)\n5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[146d4061-c60b-4579-8e57-ed3377b49ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.308 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:10 np0005535469 kernel: tap88c42593-d0: left promiscuous mode
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.312 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.316 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26baf4db-aaec-4d32-9939-a76c60718166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de68d10d-694e-4525-98c6-25c7d9242eb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.334 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2443b04-e27f-42b3-8435-6cb4755ab0d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7440897-987f-4d20-864c-5d1a98941836]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796558, 'reachable_time': 20320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424594, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 systemd[1]: run-netns-ovnmeta\x2d88c42593\x2dde32\x2d4e23\x2db7f7\x2d5cac507fe68d.mount: Deactivated successfully.
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.361 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:26:10 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.362 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5abdcd-92d5-4fce-abdf-ffc234a1ca74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.558 254096 INFO nova.virt.libvirt.driver [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deleting instance files /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_del#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.559 254096 INFO nova.virt.libvirt.driver [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deletion of /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_del complete#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.650 254096 INFO nova.compute.manager [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.651 254096 DEBUG oslo.service.loopingcall [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.652 254096 DEBUG nova.compute.manager [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:26:10 np0005535469 nova_compute[254092]: 2025-11-25 17:26:10.652 254096 DEBUG nova.network.neutron [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.192 254096 DEBUG nova.network.neutron [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.208 254096 INFO nova.compute.manager [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 0.56 seconds to deallocate network for instance.#033[00m
Nov 25 12:26:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 99 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.2 MiB/s wr, 154 op/s
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.256 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.257 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.309 254096 DEBUG oslo_concurrency.processutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:26:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:26:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583867834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.801 254096 DEBUG oslo_concurrency.processutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.810 254096 DEBUG nova.compute.provider_tree [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.834 254096 DEBUG nova.scheduler.client.report [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.868 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.905 254096 INFO nova.scheduler.client.report [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 03d40f4f-1bf3-4e1d-8844-bae4b32443cc#033[00m
Nov 25 12:26:11 np0005535469 nova_compute[254092]: 2025-11-25 17:26:11.973 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.203 254096 DEBUG nova.network.neutron [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated VIF entry in instance network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.205 254096 DEBUG nova.network.neutron [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.237 254096 DEBUG oslo_concurrency.lockutils [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.327 254096 DEBUG nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.328 254096 DEBUG oslo_concurrency.lockutils [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.329 254096 DEBUG oslo_concurrency.lockutils [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.330 254096 DEBUG oslo_concurrency.lockutils [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.330 254096 DEBUG nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] No waiting events found dispatching network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.331 254096 WARNING nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received unexpected event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:26:12 np0005535469 nova_compute[254092]: 2025-11-25 17:26:12.331 254096 DEBUG nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-deleted-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:26:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 99 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 24 KiB/s wr, 91 op/s
Nov 25 12:26:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:13.665 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:13.665 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:13.666 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:15 np0005535469 nova_compute[254092]: 2025-11-25 17:26:15.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 41 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 25 KiB/s wr, 118 op/s
Nov 25 12:26:16 np0005535469 nova_compute[254092]: 2025-11-25 17:26:16.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:16 np0005535469 nova_compute[254092]: 2025-11-25 17:26:16.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:16 np0005535469 nova_compute[254092]: 2025-11-25 17:26:16.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 23 KiB/s wr, 88 op/s
Nov 25 12:26:17 np0005535469 podman[424619]: 2025-11-25 17:26:17.70132793 +0000 UTC m=+0.104299780 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:26:17 np0005535469 podman[424620]: 2025-11-25 17:26:17.717106349 +0000 UTC m=+0.115923747 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:26:17 np0005535469 podman[424621]: 2025-11-25 17:26:17.737866244 +0000 UTC m=+0.135392196 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:26:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 69 op/s
Nov 25 12:26:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:20 np0005535469 nova_compute[254092]: 2025-11-25 17:26:20.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 69 op/s
Nov 25 12:26:21 np0005535469 nova_compute[254092]: 2025-11-25 17:26:21.390 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091566.388775, 88446199-25eb-4303-8df1-334acb721afc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:26:21 np0005535469 nova_compute[254092]: 2025-11-25 17:26:21.390 254096 INFO nova.compute.manager [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:26:21 np0005535469 nova_compute[254092]: 2025-11-25 17:26:21.415 254096 DEBUG nova.compute.manager [None req-e867a435-bd72-4c7a-9452-8adfc35fcb1b - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:26:21 np0005535469 nova_compute[254092]: 2025-11-25 17:26:21.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 12:26:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:25 np0005535469 nova_compute[254092]: 2025-11-25 17:26:25.097 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091570.096119, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 12:26:25 np0005535469 nova_compute[254092]: 2025-11-25 17:26:25.098 254096 INFO nova.compute.manager [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Stopped (Lifecycle Event)
Nov 25 12:26:25 np0005535469 nova_compute[254092]: 2025-11-25 17:26:25.118 254096 DEBUG nova.compute.manager [None req-b9f7e2fe-6e4a-4058-8c02-634a77cf18fd - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:26:25 np0005535469 nova_compute[254092]: 2025-11-25 17:26:25.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:26:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 12:26:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:26:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:26:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:26 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:26 np0005535469 nova_compute[254092]: 2025-11-25 17:26:26.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.039674354 +0000 UTC m=+0.064315552 container create 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:26:27 np0005535469 systemd[1]: Started libpod-conmon-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope.
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.015471075 +0000 UTC m=+0.040112273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.152207437 +0000 UTC m=+0.176848635 container init 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.162439845 +0000 UTC m=+0.187081003 container start 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.166532347 +0000 UTC m=+0.191173505 container attach 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:26:27 np0005535469 systemd[1]: libpod-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope: Deactivated successfully.
Nov 25 12:26:27 np0005535469 modest_noyce[425086]: 167 167
Nov 25 12:26:27 np0005535469 conmon[425086]: conmon 35eaa49a9671d1ba87cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope/container/memory.events
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.175560572 +0000 UTC m=+0.200201730 container died 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 12:26:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c1702c71715a6ce3cd8fee68b53ca713d20ce5ae5352ffcc4984dea94ee2edca-merged.mount: Deactivated successfully.
Nov 25 12:26:27 np0005535469 podman[425070]: 2025-11-25 17:26:27.211750588 +0000 UTC m=+0.236391766 container remove 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:26:27 np0005535469 systemd[1]: libpod-conmon-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope: Deactivated successfully.
Nov 25 12:26:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:27 np0005535469 podman[425109]: 2025-11-25 17:26:27.395615123 +0000 UTC m=+0.053817667 container create 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:26:27 np0005535469 systemd[1]: Started libpod-conmon-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope.
Nov 25 12:26:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:27 np0005535469 podman[425109]: 2025-11-25 17:26:27.367259411 +0000 UTC m=+0.025461935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:27 np0005535469 podman[425109]: 2025-11-25 17:26:27.48995457 +0000 UTC m=+0.148157174 container init 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:26:27 np0005535469 podman[425109]: 2025-11-25 17:26:27.501126504 +0000 UTC m=+0.159329018 container start 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:26:27 np0005535469 podman[425109]: 2025-11-25 17:26:27.50539221 +0000 UTC m=+0.163594744 container attach 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:26:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]: [
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:    {
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "available": false,
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "ceph_device": false,
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "lsm_data": {},
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "lvs": [],
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "path": "/dev/sr0",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "rejected_reasons": [
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "Has a FileSystem",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "Insufficient space (<5GB)"
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        ],
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        "sys_api": {
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "actuators": null,
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "device_nodes": "sr0",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "devname": "sr0",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "human_readable_size": "482.00 KB",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "id_bus": "ata",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "model": "QEMU DVD-ROM",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "nr_requests": "2",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "parent": "/dev/sr0",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "partitions": {},
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "path": "/dev/sr0",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "removable": "1",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "rev": "2.5+",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "ro": "0",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "rotational": "1",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "sas_address": "",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "sas_device_handle": "",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "scheduler_mode": "mq-deadline",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "sectors": 0,
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "sectorsize": "2048",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "size": 493568.0,
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "support_discard": "2048",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "type": "disk",
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:            "vendor": "QEMU"
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:        }
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]:    }
Nov 25 12:26:29 np0005535469 magical_mestorf[425125]: ]
Nov 25 12:26:29 np0005535469 systemd[1]: libpod-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope: Deactivated successfully.
Nov 25 12:26:29 np0005535469 podman[425109]: 2025-11-25 17:26:29.435789065 +0000 UTC m=+2.093991569 container died 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 12:26:29 np0005535469 systemd[1]: libpod-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope: Consumed 2.017s CPU time.
Nov 25 12:26:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2-merged.mount: Deactivated successfully.
Nov 25 12:26:29 np0005535469 podman[425109]: 2025-11-25 17:26:29.49659923 +0000 UTC m=+2.154801724 container remove 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:26:29 np0005535469 systemd[1]: libpod-conmon-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope: Deactivated successfully.
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b5eb80e1-e79c-43e7-99d2-9f0d15d969bc does not exist
Nov 25 12:26:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 36232c1e-7870-487c-9b46-8a89dff5876c does not exist
Nov 25 12:26:29 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cbfed74a-6692-48fe-b73d-cca45bd837d9 does not exist
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:26:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:26:30 np0005535469 nova_compute[254092]: 2025-11-25 17:26:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.263579976 +0000 UTC m=+0.050214527 container create cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:26:30 np0005535469 systemd[1]: Started libpod-conmon-cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a.scope.
Nov 25 12:26:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.24166157 +0000 UTC m=+0.028296131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.34491545 +0000 UTC m=+0.131549991 container init cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.353400751 +0000 UTC m=+0.140035252 container start cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.357253716 +0000 UTC m=+0.143888277 container attach cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:26:30 np0005535469 distracted_kowalevski[427461]: 167 167
Nov 25 12:26:30 np0005535469 systemd[1]: libpod-cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a.scope: Deactivated successfully.
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.361325107 +0000 UTC m=+0.147959648 container died cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:26:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a826506abb3e61211dd27a28915cd8c6ad4d0d0ac711234302088d21bd36154c-merged.mount: Deactivated successfully.
Nov 25 12:26:30 np0005535469 podman[427445]: 2025-11-25 17:26:30.402887718 +0000 UTC m=+0.189522249 container remove cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:26:30 np0005535469 systemd[1]: libpod-conmon-cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a.scope: Deactivated successfully.
Nov 25 12:26:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:26:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:26:30 np0005535469 podman[427486]: 2025-11-25 17:26:30.587561815 +0000 UTC m=+0.055698527 container create 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:26:30 np0005535469 systemd[1]: Started libpod-conmon-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope.
Nov 25 12:26:30 np0005535469 podman[427486]: 2025-11-25 17:26:30.559372738 +0000 UTC m=+0.027509530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:30 np0005535469 podman[427486]: 2025-11-25 17:26:30.689122079 +0000 UTC m=+0.157258841 container init 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:26:30 np0005535469 podman[427486]: 2025-11-25 17:26:30.699658826 +0000 UTC m=+0.167795548 container start 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:26:30 np0005535469 podman[427486]: 2025-11-25 17:26:30.703691326 +0000 UTC m=+0.171828128 container attach 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:26:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:31 np0005535469 eloquent_liskov[427502]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:26:31 np0005535469 eloquent_liskov[427502]: --> relative data size: 1.0
Nov 25 12:26:31 np0005535469 eloquent_liskov[427502]: --> All data devices are unavailable
Nov 25 12:26:31 np0005535469 nova_compute[254092]: 2025-11-25 17:26:31.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:31 np0005535469 systemd[1]: libpod-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope: Deactivated successfully.
Nov 25 12:26:31 np0005535469 systemd[1]: libpod-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope: Consumed 1.141s CPU time.
Nov 25 12:26:31 np0005535469 conmon[427502]: conmon 4c4786442844c6c8b103 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope/container/memory.events
Nov 25 12:26:31 np0005535469 podman[427486]: 2025-11-25 17:26:31.897818939 +0000 UTC m=+1.365955691 container died 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:26:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e-merged.mount: Deactivated successfully.
Nov 25 12:26:31 np0005535469 podman[427486]: 2025-11-25 17:26:31.975682619 +0000 UTC m=+1.443819341 container remove 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:26:31 np0005535469 systemd[1]: libpod-conmon-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope: Deactivated successfully.
Nov 25 12:26:32 np0005535469 podman[427680]: 2025-11-25 17:26:32.874866164 +0000 UTC m=+0.051991297 container create d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:26:32 np0005535469 systemd[1]: Started libpod-conmon-d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0.scope.
Nov 25 12:26:32 np0005535469 podman[427680]: 2025-11-25 17:26:32.856055672 +0000 UTC m=+0.033180825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:32 np0005535469 podman[427680]: 2025-11-25 17:26:32.973821237 +0000 UTC m=+0.150946360 container init d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:26:32 np0005535469 podman[427680]: 2025-11-25 17:26:32.984919369 +0000 UTC m=+0.162044492 container start d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:26:32 np0005535469 podman[427680]: 2025-11-25 17:26:32.988935348 +0000 UTC m=+0.166060511 container attach d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:26:32 np0005535469 practical_nobel[427696]: 167 167
Nov 25 12:26:32 np0005535469 systemd[1]: libpod-d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0.scope: Deactivated successfully.
Nov 25 12:26:32 np0005535469 podman[427680]: 2025-11-25 17:26:32.995269101 +0000 UTC m=+0.172394254 container died d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:26:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dc25570abb5f6776ce301496b22c21939a3f4658a09c2f32922f2c7713bbe727-merged.mount: Deactivated successfully.
Nov 25 12:26:33 np0005535469 podman[427680]: 2025-11-25 17:26:33.041786517 +0000 UTC m=+0.218911640 container remove d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:26:33 np0005535469 systemd[1]: libpod-conmon-d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0.scope: Deactivated successfully.
Nov 25 12:26:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:33 np0005535469 podman[427718]: 2025-11-25 17:26:33.263779449 +0000 UTC m=+0.062523293 container create 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:26:33 np0005535469 systemd[1]: Started libpod-conmon-97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd.scope.
Nov 25 12:26:33 np0005535469 podman[427718]: 2025-11-25 17:26:33.233577978 +0000 UTC m=+0.032321872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:33 np0005535469 podman[427718]: 2025-11-25 17:26:33.381072762 +0000 UTC m=+0.179816656 container init 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:26:33 np0005535469 podman[427718]: 2025-11-25 17:26:33.392102232 +0000 UTC m=+0.190846046 container start 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:26:33 np0005535469 podman[427718]: 2025-11-25 17:26:33.395814694 +0000 UTC m=+0.194558528 container attach 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]: {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:    "0": [
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:        {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "devices": [
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "/dev/loop3"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            ],
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_name": "ceph_lv0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_size": "21470642176",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "name": "ceph_lv0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "tags": {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cluster_name": "ceph",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.crush_device_class": "",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.encrypted": "0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osd_id": "0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.type": "block",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.vdo": "0"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            },
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "type": "block",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "vg_name": "ceph_vg0"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:        }
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:    ],
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:    "1": [
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:        {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "devices": [
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "/dev/loop4"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            ],
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_name": "ceph_lv1",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_size": "21470642176",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "name": "ceph_lv1",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "tags": {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cluster_name": "ceph",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.crush_device_class": "",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.encrypted": "0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osd_id": "1",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.type": "block",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.vdo": "0"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            },
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "type": "block",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "vg_name": "ceph_vg1"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:        }
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:    ],
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:    "2": [
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:        {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "devices": [
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "/dev/loop5"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            ],
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_name": "ceph_lv2",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_size": "21470642176",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "name": "ceph_lv2",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "tags": {
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.cluster_name": "ceph",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.crush_device_class": "",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.encrypted": "0",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osd_id": "2",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.type": "block",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:                "ceph.vdo": "0"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            },
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "type": "block",
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:            "vg_name": "ceph_vg2"
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:        }
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]:    ]
Nov 25 12:26:34 np0005535469 relaxed_snyder[427734]: }
Nov 25 12:26:34 np0005535469 systemd[1]: libpod-97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd.scope: Deactivated successfully.
Nov 25 12:26:34 np0005535469 podman[427718]: 2025-11-25 17:26:34.264521779 +0000 UTC m=+1.063265583 container died 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:26:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b-merged.mount: Deactivated successfully.
Nov 25 12:26:34 np0005535469 podman[427718]: 2025-11-25 17:26:34.340543378 +0000 UTC m=+1.139287192 container remove 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:26:34 np0005535469 systemd[1]: libpod-conmon-97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd.scope: Deactivated successfully.
Nov 25 12:26:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:35 np0005535469 nova_compute[254092]: 2025-11-25 17:26:35.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.257034414 +0000 UTC m=+0.059537681 container create 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:26:35 np0005535469 systemd[1]: Started libpod-conmon-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope.
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.23522484 +0000 UTC m=+0.037728137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.365114367 +0000 UTC m=+0.167617704 container init 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.37519447 +0000 UTC m=+0.177697777 container start 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.379634872 +0000 UTC m=+0.182138189 container attach 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 12:26:35 np0005535469 bold_merkle[427915]: 167 167
Nov 25 12:26:35 np0005535469 systemd[1]: libpod-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope: Deactivated successfully.
Nov 25 12:26:35 np0005535469 conmon[427915]: conmon 15111bdc3d8ac45854dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope/container/memory.events
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.384780662 +0000 UTC m=+0.187283939 container died 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 12:26:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0f8789c09ab604725dca5c6e4cf67d697f87085f5ae762be989f40cd7653e5a9-merged.mount: Deactivated successfully.
Nov 25 12:26:35 np0005535469 podman[427898]: 2025-11-25 17:26:35.43432629 +0000 UTC m=+0.236829567 container remove 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 12:26:35 np0005535469 systemd[1]: libpod-conmon-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope: Deactivated successfully.
Nov 25 12:26:35 np0005535469 podman[427939]: 2025-11-25 17:26:35.651153822 +0000 UTC m=+0.045550161 container create 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:26:35 np0005535469 systemd[1]: Started libpod-conmon-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope.
Nov 25 12:26:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:26:35 np0005535469 podman[427939]: 2025-11-25 17:26:35.633068329 +0000 UTC m=+0.027464648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:26:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:26:35 np0005535469 podman[427939]: 2025-11-25 17:26:35.74438959 +0000 UTC m=+0.138785889 container init 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:26:35 np0005535469 podman[427939]: 2025-11-25 17:26:35.75836479 +0000 UTC m=+0.152761099 container start 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:26:35 np0005535469 podman[427939]: 2025-11-25 17:26:35.761895456 +0000 UTC m=+0.156291755 container attach 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:26:36 np0005535469 strange_feistel[427954]: {
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "osd_id": 1,
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "type": "bluestore"
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:    },
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "osd_id": 2,
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "type": "bluestore"
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:    },
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "osd_id": 0,
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:        "type": "bluestore"
Nov 25 12:26:36 np0005535469 strange_feistel[427954]:    }
Nov 25 12:26:36 np0005535469 strange_feistel[427954]: }
Nov 25 12:26:36 np0005535469 systemd[1]: libpod-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope: Deactivated successfully.
Nov 25 12:26:36 np0005535469 systemd[1]: libpod-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope: Consumed 1.026s CPU time.
Nov 25 12:26:36 np0005535469 podman[427939]: 2025-11-25 17:26:36.775391633 +0000 UTC m=+1.169787982 container died 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:26:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1-merged.mount: Deactivated successfully.
Nov 25 12:26:36 np0005535469 podman[427939]: 2025-11-25 17:26:36.843004273 +0000 UTC m=+1.237400582 container remove 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:26:36 np0005535469 systemd[1]: libpod-conmon-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope: Deactivated successfully.
Nov 25 12:26:36 np0005535469 nova_compute[254092]: 2025-11-25 17:26:36.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:26:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:26:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:36 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7bdfde25-ac8d-4822-a8ea-959246ebe259 does not exist
Nov 25 12:26:36 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a96007a6-b467-45d6-81b1-4e857bc609e1 does not exist
Nov 25 12:26:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:37 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:26:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:39 np0005535469 nova_compute[254092]: 2025-11-25 17:26:39.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:39.254 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:26:39 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:39.255 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:26:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:26:40 np0005535469 nova_compute[254092]: 2025-11-25 17:26:40.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:26:40
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:26:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:26:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:41 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:26:41.257 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:26:41 np0005535469 nova_compute[254092]: 2025-11-25 17:26:41.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:45 np0005535469 nova_compute[254092]: 2025-11-25 17:26:45.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:46 np0005535469 nova_compute[254092]: 2025-11-25 17:26:46.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:47 np0005535469 nova_compute[254092]: 2025-11-25 17:26:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:48 np0005535469 nova_compute[254092]: 2025-11-25 17:26:48.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:48 np0005535469 podman[428052]: 2025-11-25 17:26:48.653206167 +0000 UTC m=+0.066629925 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:26:48 np0005535469 podman[428051]: 2025-11-25 17:26:48.65919616 +0000 UTC m=+0.072626578 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:26:48 np0005535469 podman[428053]: 2025-11-25 17:26:48.695663583 +0000 UTC m=+0.102176492 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:26:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:26:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:26:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173695511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:26:50 np0005535469 nova_compute[254092]: 2025-11-25 17:26:50.990 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.219 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.222 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3621MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.222 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.222 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.319 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.345 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:26:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:26:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717067192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.839 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.848 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.867 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:26:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.899 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.900 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:26:51 np0005535469 nova_compute[254092]: 2025-11-25 17:26:51.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:26:54 np0005535469 nova_compute[254092]: 2025-11-25 17:26:54.900 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:54 np0005535469 nova_compute[254092]: 2025-11-25 17:26:54.901 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:54 np0005535469 nova_compute[254092]: 2025-11-25 17:26:54.901 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:26:55 np0005535469 nova_compute[254092]: 2025-11-25 17:26:55.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:26:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/420285285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:26:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:26:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/420285285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:26:56 np0005535469 nova_compute[254092]: 2025-11-25 17:26:56.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:26:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:57 np0005535469 nova_compute[254092]: 2025-11-25 17:26:57.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:58 np0005535469 nova_compute[254092]: 2025-11-25 17:26:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:26:58 np0005535469 nova_compute[254092]: 2025-11-25 17:26:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:26:58 np0005535469 nova_compute[254092]: 2025-11-25 17:26:58.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:26:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:26:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:00 np0005535469 nova_compute[254092]: 2025-11-25 17:27:00.160 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:01 np0005535469 nova_compute[254092]: 2025-11-25 17:27:01.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:03 np0005535469 nova_compute[254092]: 2025-11-25 17:27:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:04 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:04Z|01617|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 12:27:05 np0005535469 nova_compute[254092]: 2025-11-25 17:27:05.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:07 np0005535469 nova_compute[254092]: 2025-11-25 17:27:07.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:27:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:27:10 np0005535469 nova_compute[254092]: 2025-11-25 17:27:10.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:12 np0005535469 nova_compute[254092]: 2025-11-25 17:27:12.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:13.666 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:13.667 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:13.667 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:15 np0005535469 nova_compute[254092]: 2025-11-25 17:27:15.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:17 np0005535469 nova_compute[254092]: 2025-11-25 17:27:17.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.509 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.509 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.526 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:27:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.612 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.613 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.624 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.625 254096 INFO nova.compute.claims [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:27:19 np0005535469 podman[428160]: 2025-11-25 17:27:19.676280865 +0000 UTC m=+0.075893196 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:27:19 np0005535469 podman[428159]: 2025-11-25 17:27:19.698231133 +0000 UTC m=+0.105601056 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:27:19 np0005535469 podman[428167]: 2025-11-25 17:27:19.755381738 +0000 UTC m=+0.135128559 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:27:19 np0005535469 nova_compute[254092]: 2025-11-25 17:27:19.798 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:27:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500796303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.340 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.348 254096 DEBUG nova.compute.provider_tree [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.365 254096 DEBUG nova.scheduler.client.report [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.396 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.398 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.472 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.473 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.504 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.533 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.637 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.639 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.640 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Creating image(s)#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.679 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.720 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.749 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.755 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.851 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.852 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.881 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:20 np0005535469 nova_compute[254092]: 2025-11-25 17:27:20.885 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.373 254096 DEBUG nova.policy [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e6ab922303af4fc0a70862a72b3ea9c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.636 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.710 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] resizing rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.811 254096 DEBUG nova.objects.instance [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.825 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.826 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Ensure instance console log exists: /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.827 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.827 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:21 np0005535469 nova_compute[254092]: 2025-11-25 17:27:21.827 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:22 np0005535469 nova_compute[254092]: 2025-11-25 17:27:22.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:23 np0005535469 nova_compute[254092]: 2025-11-25 17:27:23.230 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Successfully created port: 4c9f1115-1d14-4772-b092-e842077e160a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 12:27:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:27:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 72 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.371 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Successfully updated port: 4c9f1115-1d14-4772-b092-e842077e160a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.388 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.388 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.388 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.528 254096 DEBUG nova.compute.manager [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-changed-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.528 254096 DEBUG nova.compute.manager [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing instance network info cache due to event network-changed-4c9f1115-1d14-4772-b092-e842077e160a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.529 254096 DEBUG oslo_concurrency.lockutils [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:27:25 np0005535469 nova_compute[254092]: 2025-11-25 17:27:25.562 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 12:27:27 np0005535469 nova_compute[254092]: 2025-11-25 17:27:27.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:27:27 np0005535469 nova_compute[254092]: 2025-11-25 17:27:27.982 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.001 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.001 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance network_info: |[{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.001 254096 DEBUG oslo_concurrency.lockutils [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.002 254096 DEBUG nova.network.neutron [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.004 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start _get_guest_xml network_info=[{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.008 254096 WARNING nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.011 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.012 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.018 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.018 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.021 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.021 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.021 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.024 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:27:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385939879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.523 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.546 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:28 np0005535469 nova_compute[254092]: 2025-11-25 17:27:28.550 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:27:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265014662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.025 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.027 254096 DEBUG nova.virt.libvirt.vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-142394671',display_name='tempest-TestSnapshotPattern-server-142394671',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-142394671',id=151,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-pb0kf19x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:27:20Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=7d3f09ec-6bad-4674-ab8b-907560448ab0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.027 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.028 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.029 254096 DEBUG nova.objects.instance [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.041 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <uuid>7d3f09ec-6bad-4674-ab8b-907560448ab0</uuid>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <name>instance-00000097</name>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestSnapshotPattern-server-142394671</nova:name>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:27:28</nova:creationTime>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:user uuid="e6ab922303af4fc0a70862a72b3ea9c8">tempest-TestSnapshotPattern-1072505445-project-member</nova:user>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:project uuid="d295e1cfcd234c4391fda20fc4264d70">tempest-TestSnapshotPattern-1072505445</nova:project>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <nova:port uuid="4c9f1115-1d14-4772-b092-e842077e160a">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <entry name="serial">7d3f09ec-6bad-4674-ab8b-907560448ab0</entry>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <entry name="uuid">7d3f09ec-6bad-4674-ab8b-907560448ab0</entry>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7d3f09ec-6bad-4674-ab8b-907560448ab0_disk">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:dc:a2:36"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <target dev="tap4c9f1115-1d"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/console.log" append="off"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:27:29 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:27:29 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:27:29 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:27:29 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.043 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Preparing to wait for external event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.043 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.043 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.044 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.044 254096 DEBUG nova.virt.libvirt.vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-142394671',display_name='tempest-TestSnapshotPattern-server-142394671',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-142394671',id=151,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-pb0kf19x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:27:20Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=7d3f09ec-6bad-4674-ab8b-907560448ab0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.045 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.045 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.046 254096 DEBUG os_vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.047 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.047 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.050 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c9f1115-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.051 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c9f1115-1d, col_values=(('external_ids', {'iface-id': '4c9f1115-1d14-4772-b092-e842077e160a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:a2:36', 'vm-uuid': '7d3f09ec-6bad-4674-ab8b-907560448ab0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:27:29 np0005535469 NetworkManager[48891]: <info>  [1764091649.0534] manager: (tap4c9f1115-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.063 254096 INFO os_vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d')#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.111 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.112 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.112 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No VIF found with MAC fa:16:3e:dc:a2:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.113 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Using config drive#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.132 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.386 254096 DEBUG nova.network.neutron [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated VIF entry in instance network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.386 254096 DEBUG nova.network.neutron [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.398 254096 DEBUG oslo_concurrency.lockutils [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.459 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Creating config drive at /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.463 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpct9ky4_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.620 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpct9ky4_c" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.648 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.653 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.835 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.837 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deleting local config drive /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config because it was imported into RBD.#033[00m
Nov 25 12:27:29 np0005535469 kernel: tap4c9f1115-1d: entered promiscuous mode
Nov 25 12:27:29 np0005535469 NetworkManager[48891]: <info>  [1764091649.9056] manager: (tap4c9f1115-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/673)
Nov 25 12:27:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:29Z|01618|binding|INFO|Claiming lport 4c9f1115-1d14-4772-b092-e842077e160a for this chassis.
Nov 25 12:27:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:29Z|01619|binding|INFO|4c9f1115-1d14-4772-b092-e842077e160a: Claiming fa:16:3e:dc:a2:36 10.100.0.14
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.925 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:a2:36 10.100.0.14'], port_security=['fa:16:3e:dc:a2:36 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7d3f09ec-6bad-4674-ab8b-907560448ab0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4c9f1115-1d14-4772-b092-e842077e160a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.926 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4c9f1115-1d14-4772-b092-e842077e160a in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 bound to our chassis#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.927 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a1a00fe-6b82-48c5-a534-9040cbe84499#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.941 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cd7cbe-a594-45ba-b0be-98d3385ba55d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.942 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a1a00fe-61 in ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.944 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a1a00fe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.944 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7bc51e-cfbe-4fb4-a3c9-d8556d0347fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af139fda-c795-4337-a81a-05124ced9e96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:29 np0005535469 systemd-udevd[428548]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:27:29 np0005535469 systemd-machined[216343]: New machine qemu-185-instance-00000097.
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.958 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[57a72459-add4-4a97-90a5-f6e2f8712833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:29 np0005535469 NetworkManager[48891]: <info>  [1764091649.9759] device (tap4c9f1115-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:27:29 np0005535469 NetworkManager[48891]: <info>  [1764091649.9767] device (tap4c9f1115-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:27:29 np0005535469 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Nov 25 12:27:29 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.988 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01ab046c-0f3d-4387-ad91-5cb733befe9b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:29Z|01620|binding|INFO|Setting lport 4c9f1115-1d14-4772-b092-e842077e160a ovn-installed in OVS
Nov 25 12:27:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:29Z|01621|binding|INFO|Setting lport 4c9f1115-1d14-4772-b092-e842077e160a up in Southbound
Nov 25 12:27:29 np0005535469 nova_compute[254092]: 2025-11-25 17:27:29.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.032 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f014617-26c4-4706-bda6-4d5f4e60fc01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.037 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54c86242-f0cb-4ee8-a0d8-99747d059da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 NetworkManager[48891]: <info>  [1764091650.0391] manager: (tap7a1a00fe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/674)
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.093 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6c31f4-fced-4e8b-a03c-76deee77681d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.098 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6da27bfb-0d00-4f83-a78f-b486b2783349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 NetworkManager[48891]: <info>  [1764091650.1299] device (tap7a1a00fe-60): carrier: link connected
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.140 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4f1b69-8250-4041-9d80-a2aa549c71c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1e9fb0-6d73-412c-b993-7fed167457c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428580, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.188 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0120d931-d4be-47f7-a2c1-1fdd58f9f98e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:4820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810772, 'tstamp': 810772}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428581, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50004794-c253-4feb-9118-54735feca32d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 428582, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.249 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66dcd667-7768-47f0-b9a8-66cabbad3433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5117b06a-7eec-44e0-a9ef-04e148dffa31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.325 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.325 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.326 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1a00fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:27:30 np0005535469 NetworkManager[48891]: <info>  [1764091650.3284] manager: (tap7a1a00fe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/675)
Nov 25 12:27:30 np0005535469 kernel: tap7a1a00fe-60: entered promiscuous mode
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.332 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a1a00fe-60, col_values=(('external_ids', {'iface-id': '0a114fd0-0e8c-4ae1-8b45-56c99d3c790e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:30 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:30Z|01622|binding|INFO|Releasing lport 0a114fd0-0e8c-4ae1-8b45-56c99d3c790e from this chassis (sb_readonly=0)
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.355 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a1a00fe-6b82-48c5-a534-9040cbe84499.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a1a00fe-6b82-48c5-a534-9040cbe84499.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4bcfb81-ebdf-495b-81c9-fba26129f4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.357 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: global
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    log         /dev/log local0 debug
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    log-tag     haproxy-metadata-proxy-7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    user        root
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    group       root
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    maxconn     1024
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    pidfile     /var/lib/neutron/external/pids/7a1a00fe-6b82-48c5-a534-9040cbe84499.pid.haproxy
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    daemon
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: defaults
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    log global
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    mode http
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    option httplog
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    option dontlognull
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    option http-server-close
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    option forwardfor
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    retries                 3
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    timeout http-request    30s
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    timeout connect         30s
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    timeout client          32s
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    timeout server          32s
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    timeout http-keep-alive 30s
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: listen listener
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    bind 169.254.169.254:80
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]:    http-request add-header X-OVN-Network-ID 7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 12:27:30 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.358 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'env', 'PROCESS_TAG=haproxy-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a1a00fe-6b82-48c5-a534-9040cbe84499.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 12:27:30 np0005535469 podman[428651]: 2025-11-25 17:27:30.736726672 +0000 UTC m=+0.048933902 container create 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 12:27:30 np0005535469 systemd[1]: Started libpod-conmon-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b.scope.
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.777 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091650.7769167, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.778 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Started (Lifecycle Event)#033[00m
Nov 25 12:27:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bdd69f08ecf9e368f25903ee5b299b22e79808015eca7d99f37cf6be366f960/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:30 np0005535469 podman[428651]: 2025-11-25 17:27:30.709848831 +0000 UTC m=+0.022056091 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 12:27:30 np0005535469 podman[428651]: 2025-11-25 17:27:30.81084988 +0000 UTC m=+0.123057130 container init 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:27:30 np0005535469 podman[428651]: 2025-11-25 17:27:30.816104393 +0000 UTC m=+0.128311613 container start 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.819 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.823 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091650.7771146, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.823 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:27:30 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : New worker (428677) forked
Nov 25 12:27:30 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : Loading success.
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.860 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.864 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:27:30 np0005535469 nova_compute[254092]: 2025-11-25 17:27:30.879 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:27:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.503 254096 DEBUG nova.compute.manager [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.505 254096 DEBUG oslo_concurrency.lockutils [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.505 254096 DEBUG oslo_concurrency.lockutils [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.506 254096 DEBUG oslo_concurrency.lockutils [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.506 254096 DEBUG nova.compute.manager [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Processing event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.507 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.511 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091651.5108566, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.511 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.513 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.515 254096 INFO nova.virt.libvirt.driver [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance spawned successfully.#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.515 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.528 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.532 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.544 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.545 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.546 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.546 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.547 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.548 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.554 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.600 254096 INFO nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 10.96 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.600 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.654 254096 INFO nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 12.08 seconds to build instance.#033[00m
Nov 25 12:27:31 np0005535469 nova_compute[254092]: 2025-11-25 17:27:31.670 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:32 np0005535469 nova_compute[254092]: 2025-11-25 17:27:32.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 12:27:33 np0005535469 nova_compute[254092]: 2025-11-25 17:27:33.610 254096 DEBUG nova.compute.manager [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:27:33 np0005535469 nova_compute[254092]: 2025-11-25 17:27:33.610 254096 DEBUG oslo_concurrency.lockutils [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:33 np0005535469 nova_compute[254092]: 2025-11-25 17:27:33.610 254096 DEBUG oslo_concurrency.lockutils [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:33 np0005535469 nova_compute[254092]: 2025-11-25 17:27:33.611 254096 DEBUG oslo_concurrency.lockutils [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:33 np0005535469 nova_compute[254092]: 2025-11-25 17:27:33.611 254096 DEBUG nova.compute.manager [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] No waiting events found dispatching network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:27:33 np0005535469 nova_compute[254092]: 2025-11-25 17:27:33.611 254096 WARNING nova.compute.manager [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received unexpected event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a for instance with vm_state active and task_state None.#033[00m
Nov 25 12:27:34 np0005535469 nova_compute[254092]: 2025-11-25 17:27:34.053 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 12:27:37 np0005535469 nova_compute[254092]: 2025-11-25 17:27:37.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 464 KiB/s wr, 75 op/s
Nov 25 12:27:37 np0005535469 NetworkManager[48891]: <info>  [1764091657.7180] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/676)
Nov 25 12:27:37 np0005535469 NetworkManager[48891]: <info>  [1764091657.7196] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Nov 25 12:27:37 np0005535469 nova_compute[254092]: 2025-11-25 17:27:37.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:37 np0005535469 nova_compute[254092]: 2025-11-25 17:27:37.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:37 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:37Z|01623|binding|INFO|Releasing lport 0a114fd0-0e8c-4ae1-8b45-56c99d3c790e from this chassis (sb_readonly=0)
Nov 25 12:27:37 np0005535469 nova_compute[254092]: 2025-11-25 17:27:37.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:27:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fb92a4ae-d742-4141-9f4c-0826ec552bbb does not exist
Nov 25 12:27:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 15019d22-2e8e-46d9-9126-f9651f047b3c does not exist
Nov 25 12:27:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a6b0a232-6f22-4666-8246-a19a9e19ff24 does not exist
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:27:38 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:27:38 np0005535469 nova_compute[254092]: 2025-11-25 17:27:38.511 254096 DEBUG nova.compute.manager [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-changed-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:27:38 np0005535469 nova_compute[254092]: 2025-11-25 17:27:38.512 254096 DEBUG nova.compute.manager [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing instance network info cache due to event network-changed-4c9f1115-1d14-4772-b092-e842077e160a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:27:38 np0005535469 nova_compute[254092]: 2025-11-25 17:27:38.512 254096 DEBUG oslo_concurrency.lockutils [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:27:38 np0005535469 nova_compute[254092]: 2025-11-25 17:27:38.513 254096 DEBUG oslo_concurrency.lockutils [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:27:38 np0005535469 nova_compute[254092]: 2025-11-25 17:27:38.513 254096 DEBUG nova.network.neutron [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.813480665 +0000 UTC m=+0.055014257 container create a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:27:38 np0005535469 systemd[1]: Started libpod-conmon-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope.
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.785194476 +0000 UTC m=+0.026728078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:27:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.904897584 +0000 UTC m=+0.146431176 container init a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.911805453 +0000 UTC m=+0.153339025 container start a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.915499853 +0000 UTC m=+0.157033415 container attach a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:27:38 np0005535469 vigorous_gagarin[428975]: 167 167
Nov 25 12:27:38 np0005535469 systemd[1]: libpod-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope: Deactivated successfully.
Nov 25 12:27:38 np0005535469 conmon[428975]: conmon a275309ccd5f2f044997 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope/container/memory.events
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.922689748 +0000 UTC m=+0.164223300 container died a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:27:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-15c09957d2981aed2073673d5fce38c66f508f0078d4c8ad7b3a665396e93480-merged.mount: Deactivated successfully.
Nov 25 12:27:38 np0005535469 podman[428959]: 2025-11-25 17:27:38.970240522 +0000 UTC m=+0.211774084 container remove a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:27:38 np0005535469 systemd[1]: libpod-conmon-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope: Deactivated successfully.
Nov 25 12:27:39 np0005535469 nova_compute[254092]: 2025-11-25 17:27:39.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:39 np0005535469 podman[428999]: 2025-11-25 17:27:39.189351117 +0000 UTC m=+0.061408572 container create b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 12:27:39 np0005535469 systemd[1]: Started libpod-conmon-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope.
Nov 25 12:27:39 np0005535469 podman[428999]: 2025-11-25 17:27:39.161130389 +0000 UTC m=+0.033187904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:27:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:27:39 np0005535469 podman[428999]: 2025-11-25 17:27:39.322909602 +0000 UTC m=+0.194967107 container init b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:27:39 np0005535469 podman[428999]: 2025-11-25 17:27:39.335250898 +0000 UTC m=+0.207308353 container start b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 12:27:39 np0005535469 podman[428999]: 2025-11-25 17:27:39.341053226 +0000 UTC m=+0.213110721 container attach b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:27:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:39 np0005535469 nova_compute[254092]: 2025-11-25 17:27:39.891 254096 DEBUG nova.network.neutron [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated VIF entry in instance network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:27:39 np0005535469 nova_compute[254092]: 2025-11-25 17:27:39.892 254096 DEBUG nova.network.neutron [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:27:39 np0005535469 nova_compute[254092]: 2025-11-25 17:27:39.915 254096 DEBUG oslo_concurrency.lockutils [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:27:40
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:27:40 np0005535469 friendly_benz[429015]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:27:40 np0005535469 friendly_benz[429015]: --> relative data size: 1.0
Nov 25 12:27:40 np0005535469 friendly_benz[429015]: --> All data devices are unavailable
Nov 25 12:27:40 np0005535469 systemd[1]: libpod-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope: Deactivated successfully.
Nov 25 12:27:40 np0005535469 systemd[1]: libpod-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope: Consumed 1.182s CPU time.
Nov 25 12:27:40 np0005535469 podman[428999]: 2025-11-25 17:27:40.579038663 +0000 UTC m=+1.451096108 container died b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:27:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2-merged.mount: Deactivated successfully.
Nov 25 12:27:40 np0005535469 podman[428999]: 2025-11-25 17:27:40.658592108 +0000 UTC m=+1.530649523 container remove b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:27:40 np0005535469 systemd[1]: libpod-conmon-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope: Deactivated successfully.
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:27:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:27:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 12:27:41 np0005535469 podman[429196]: 2025-11-25 17:27:41.437081968 +0000 UTC m=+0.043984019 container create 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:27:41 np0005535469 systemd[1]: Started libpod-conmon-317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669.scope.
Nov 25 12:27:41 np0005535469 podman[429196]: 2025-11-25 17:27:41.41548234 +0000 UTC m=+0.022384411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:27:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:41 np0005535469 podman[429196]: 2025-11-25 17:27:41.536500314 +0000 UTC m=+0.143402375 container init 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 12:27:41 np0005535469 podman[429196]: 2025-11-25 17:27:41.543307419 +0000 UTC m=+0.150209460 container start 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 12:27:41 np0005535469 podman[429196]: 2025-11-25 17:27:41.546681851 +0000 UTC m=+0.153583922 container attach 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:27:41 np0005535469 angry_brattain[429212]: 167 167
Nov 25 12:27:41 np0005535469 systemd[1]: libpod-317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669.scope: Deactivated successfully.
Nov 25 12:27:41 np0005535469 podman[429217]: 2025-11-25 17:27:41.592317813 +0000 UTC m=+0.030374097 container died 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:27:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-19a0f1e498a38357313d3aa1f7e3b07686691bc655e6dab15bf01040713d9fdf-merged.mount: Deactivated successfully.
Nov 25 12:27:41 np0005535469 podman[429217]: 2025-11-25 17:27:41.626542095 +0000 UTC m=+0.064598349 container remove 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:27:41 np0005535469 systemd[1]: libpod-conmon-317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669.scope: Deactivated successfully.
Nov 25 12:27:41 np0005535469 podman[429236]: 2025-11-25 17:27:41.853273497 +0000 UTC m=+0.060828267 container create 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 12:27:41 np0005535469 systemd[1]: Started libpod-conmon-7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56.scope.
Nov 25 12:27:41 np0005535469 podman[429236]: 2025-11-25 17:27:41.829659493 +0000 UTC m=+0.037214293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:27:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:41 np0005535469 podman[429236]: 2025-11-25 17:27:41.965198372 +0000 UTC m=+0.172753162 container init 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:27:41 np0005535469 podman[429236]: 2025-11-25 17:27:41.973687154 +0000 UTC m=+0.181241924 container start 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:27:42 np0005535469 podman[429236]: 2025-11-25 17:27:42.067441466 +0000 UTC m=+0.274996236 container attach 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:27:42 np0005535469 nova_compute[254092]: 2025-11-25 17:27:42.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:42 np0005535469 elated_meitner[429250]: {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:    "0": [
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:        {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "devices": [
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "/dev/loop3"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            ],
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_name": "ceph_lv0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_size": "21470642176",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "name": "ceph_lv0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "tags": {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cluster_name": "ceph",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.crush_device_class": "",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.encrypted": "0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osd_id": "0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.type": "block",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.vdo": "0"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            },
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "type": "block",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "vg_name": "ceph_vg0"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:        }
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:    ],
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:    "1": [
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:        {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "devices": [
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "/dev/loop4"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            ],
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_name": "ceph_lv1",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_size": "21470642176",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "name": "ceph_lv1",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "tags": {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cluster_name": "ceph",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.crush_device_class": "",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.encrypted": "0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osd_id": "1",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.type": "block",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.vdo": "0"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            },
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "type": "block",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "vg_name": "ceph_vg1"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:        }
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:    ],
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:    "2": [
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:        {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "devices": [
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "/dev/loop5"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            ],
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_name": "ceph_lv2",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_size": "21470642176",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "name": "ceph_lv2",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "tags": {
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.cluster_name": "ceph",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.crush_device_class": "",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.encrypted": "0",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osd_id": "2",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.type": "block",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:                "ceph.vdo": "0"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            },
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "type": "block",
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:            "vg_name": "ceph_vg2"
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:        }
Nov 25 12:27:42 np0005535469 elated_meitner[429250]:    ]
Nov 25 12:27:42 np0005535469 elated_meitner[429250]: }
Nov 25 12:27:42 np0005535469 systemd[1]: libpod-7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56.scope: Deactivated successfully.
Nov 25 12:27:42 np0005535469 podman[429236]: 2025-11-25 17:27:42.824233105 +0000 UTC m=+1.031787885 container died 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:27:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152-merged.mount: Deactivated successfully.
Nov 25 12:27:42 np0005535469 podman[429236]: 2025-11-25 17:27:42.893896531 +0000 UTC m=+1.101451301 container remove 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:27:42 np0005535469 systemd[1]: libpod-conmon-7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56.scope: Deactivated successfully.
Nov 25 12:27:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.595752016 +0000 UTC m=+0.044459492 container create 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:27:43 np0005535469 systemd[1]: Started libpod-conmon-8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2.scope.
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.577245472 +0000 UTC m=+0.025952968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:27:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.708902645 +0000 UTC m=+0.157610211 container init 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.724498419 +0000 UTC m=+0.173205935 container start 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.729276089 +0000 UTC m=+0.177983795 container attach 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:27:43 np0005535469 flamboyant_matsumoto[429431]: 167 167
Nov 25 12:27:43 np0005535469 systemd[1]: libpod-8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2.scope: Deactivated successfully.
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.737883634 +0000 UTC m=+0.186591180 container died 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:27:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0506b0cc3545dbd2e64915f3b0184445da0995b0524b9dfe00dcecf4e9ee0e8c-merged.mount: Deactivated successfully.
Nov 25 12:27:43 np0005535469 podman[429415]: 2025-11-25 17:27:43.803225703 +0000 UTC m=+0.251933179 container remove 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:27:43 np0005535469 systemd[1]: libpod-conmon-8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2.scope: Deactivated successfully.
Nov 25 12:27:44 np0005535469 podman[429455]: 2025-11-25 17:27:44.063305751 +0000 UTC m=+0.085793255 container create d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:27:44 np0005535469 nova_compute[254092]: 2025-11-25 17:27:44.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:44 np0005535469 podman[429455]: 2025-11-25 17:27:44.025938154 +0000 UTC m=+0.048425708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:27:44 np0005535469 systemd[1]: Started libpod-conmon-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope.
Nov 25 12:27:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:27:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:27:44 np0005535469 podman[429455]: 2025-11-25 17:27:44.212988316 +0000 UTC m=+0.235475860 container init d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:27:44 np0005535469 podman[429455]: 2025-11-25 17:27:44.220519021 +0000 UTC m=+0.243006495 container start d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:27:44 np0005535469 podman[429455]: 2025-11-25 17:27:44.224632343 +0000 UTC m=+0.247119857 container attach d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:27:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:45Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:a2:36 10.100.0.14
Nov 25 12:27:45 np0005535469 ovn_controller[153477]: 2025-11-25T17:27:45Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:a2:36 10.100.0.14
Nov 25 12:27:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 118 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Nov 25 12:27:45 np0005535469 festive_rubin[429471]: {
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "osd_id": 1,
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "type": "bluestore"
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:    },
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "osd_id": 2,
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "type": "bluestore"
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:    },
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "osd_id": 0,
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:        "type": "bluestore"
Nov 25 12:27:45 np0005535469 festive_rubin[429471]:    }
Nov 25 12:27:45 np0005535469 festive_rubin[429471]: }
Nov 25 12:27:45 np0005535469 podman[429455]: 2025-11-25 17:27:45.367986634 +0000 UTC m=+1.390474098 container died d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:27:45 np0005535469 systemd[1]: libpod-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope: Deactivated successfully.
Nov 25 12:27:45 np0005535469 systemd[1]: libpod-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope: Consumed 1.157s CPU time.
Nov 25 12:27:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a-merged.mount: Deactivated successfully.
Nov 25 12:27:45 np0005535469 podman[429455]: 2025-11-25 17:27:45.677797097 +0000 UTC m=+1.700284561 container remove d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:27:45 np0005535469 systemd[1]: libpod-conmon-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope: Deactivated successfully.
Nov 25 12:27:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:27:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:27:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:27:45 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:27:45 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ccd406ff-bfba-4a4b-8d27-e626b7a64848 does not exist
Nov 25 12:27:45 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2d12a761-2798-44d2-b389-7e67b2757320 does not exist
Nov 25 12:27:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:27:46 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:27:47 np0005535469 nova_compute[254092]: 2025-11-25 17:27:47.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 12:27:49 np0005535469 nova_compute[254092]: 2025-11-25 17:27:49.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 12:27:49 np0005535469 nova_compute[254092]: 2025-11-25 17:27:49.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.549 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:27:50 np0005535469 nova_compute[254092]: 2025-11-25 17:27:50.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:50 np0005535469 podman[429570]: 2025-11-25 17:27:50.692282046 +0000 UTC m=+0.097431133 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 12:27:50 np0005535469 podman[429568]: 2025-11-25 17:27:50.693870409 +0000 UTC m=+0.098774539 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:27:50 np0005535469 podman[429571]: 2025-11-25 17:27:50.701595759 +0000 UTC m=+0.106836579 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:27:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:27:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319224096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.050 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.144 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.145 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.366 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.367 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3418MB free_disk=59.94288635253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.368 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.368 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.458 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7d3f09ec-6bad-4674-ab8b-907560448ab0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.458 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.459 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:27:51 np0005535469 nova_compute[254092]: 2025-11-25 17:27:51.506 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007588279164224784 of space, bias 1.0, pg target 0.2276483749267435 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:27:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:27:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:27:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259980534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:27:52 np0005535469 nova_compute[254092]: 2025-11-25 17:27:52.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:27:52 np0005535469 nova_compute[254092]: 2025-11-25 17:27:52.015 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:27:52 np0005535469 nova_compute[254092]: 2025-11-25 17:27:52.034 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:27:52 np0005535469 nova_compute[254092]: 2025-11-25 17:27:52.085 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:27:52 np0005535469 nova_compute[254092]: 2025-11-25 17:27:52.086 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:27:52 np0005535469 nova_compute[254092]: 2025-11-25 17:27:52.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 12:27:54 np0005535469 nova_compute[254092]: 2025-11-25 17:27:54.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:27:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 25 12:27:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:27:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154453331' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:27:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:27:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154453331' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:27:55 np0005535469 nova_compute[254092]: 2025-11-25 17:27:55.737 254096 DEBUG nova.compute.manager [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:27:55 np0005535469 nova_compute[254092]: 2025-11-25 17:27:55.773 254096 INFO nova.compute.manager [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] instance snapshotting#033[00m
Nov 25 12:27:56 np0005535469 nova_compute[254092]: 2025-11-25 17:27:56.001 254096 INFO nova.virt.libvirt.driver [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Beginning live snapshot process#033[00m
Nov 25 12:27:56 np0005535469 nova_compute[254092]: 2025-11-25 17:27:56.087 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:56 np0005535469 nova_compute[254092]: 2025-11-25 17:27:56.088 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:56 np0005535469 nova_compute[254092]: 2025-11-25 17:27:56.088 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:27:56 np0005535469 nova_compute[254092]: 2025-11-25 17:27:56.320 254096 DEBUG nova.virt.libvirt.imagebackend [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 25 12:27:56 np0005535469 nova_compute[254092]: 2025-11-25 17:27:56.551 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(69e781c0eef74d57bbbfc30df767b991) on rbd image(7d3f09ec-6bad-4674-ab8b-907560448ab0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 12:27:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 25 12:27:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 25 12:27:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 25 12:27:57 np0005535469 nova_compute[254092]: 2025-11-25 17:27:57.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 36 KiB/s wr, 3 op/s
Nov 25 12:27:57 np0005535469 nova_compute[254092]: 2025-11-25 17:27:57.960 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] cloning vms/7d3f09ec-6bad-4674-ab8b-907560448ab0_disk@69e781c0eef74d57bbbfc30df767b991 to images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 12:27:58 np0005535469 nova_compute[254092]: 2025-11-25 17:27:58.447 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] flattening images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 12:27:58 np0005535469 nova_compute[254092]: 2025-11-25 17:27:58.550 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:27:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 36 KiB/s wr, 3 op/s
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.517 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 12:27:59 np0005535469 nova_compute[254092]: 2025-11-25 17:27:59.517 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:27:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 66 op/s
Nov 25 12:28:01 np0005535469 nova_compute[254092]: 2025-11-25 17:28:01.457 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] removing snapshot(69e781c0eef74d57bbbfc30df767b991) on rbd image(7d3f09ec-6bad-4674-ab8b-907560448ab0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 12:28:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 25 12:28:01 np0005535469 nova_compute[254092]: 2025-11-25 17:28:01.583 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:28:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 25 12:28:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 25 12:28:01 np0005535469 nova_compute[254092]: 2025-11-25 17:28:01.600 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:28:01 np0005535469 nova_compute[254092]: 2025-11-25 17:28:01.600 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 12:28:01 np0005535469 nova_compute[254092]: 2025-11-25 17:28:01.623 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(snap) on rbd image(eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 12:28:02 np0005535469 nova_compute[254092]: 2025-11-25 17:28:02.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 25 12:28:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 25 12:28:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 25 12:28:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 6.7 MiB/s wr, 101 op/s
Nov 25 12:28:03 np0005535469 nova_compute[254092]: 2025-11-25 17:28:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:28:03 np0005535469 nova_compute[254092]: 2025-11-25 17:28:03.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:28:03 np0005535469 nova_compute[254092]: 2025-11-25 17:28:03.943 254096 INFO nova.virt.libvirt.driver [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Snapshot image upload complete
Nov 25 12:28:03 np0005535469 nova_compute[254092]: 2025-11-25 17:28:03.945 254096 INFO nova.compute.manager [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 8.17 seconds to snapshot the instance on the hypervisor.
Nov 25 12:28:04 np0005535469 nova_compute[254092]: 2025-11-25 17:28:04.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 109 op/s
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 134 op/s
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.421 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.422 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.442 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.524 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.525 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.537 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.538 254096 INFO nova.compute.claims [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 12:28:07 np0005535469 nova_compute[254092]: 2025-11-25 17:28:07.697 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:28:07 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:07Z|01624|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 12:28:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:28:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222383044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.244 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.253 254096 DEBUG nova.compute.provider_tree [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.274 254096 DEBUG nova.scheduler.client.report [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.304 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.305 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.378 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.378 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.401 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.425 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.536 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.538 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.538 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Creating image(s)
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.559 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.582 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.601 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.604 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "86dd2f95414ac23cbfb3f0889876ddcf1ef4f38f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.605 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "86dd2f95414ac23cbfb3f0889876ddcf1ef4f38f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.610 254096 DEBUG nova.policy [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e6ab922303af4fc0a70862a72b3ea9c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.850 254096 DEBUG nova.virt.libvirt.imagebackend [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.916 254096 DEBUG nova.virt.libvirt.imagebackend [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 12:28:08 np0005535469 nova_compute[254092]: 2025-11-25 17:28:08.918 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] cloning images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82@snap to None/c983f16d-bf50-4a2c-aa05-213890fb387a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.063 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "86dd2f95414ac23cbfb3f0889876ddcf1ef4f38f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:28:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:09.181 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:28:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:09.184 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.261 254096 DEBUG nova.objects.instance [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'migration_context' on Instance uuid c983f16d-bf50-4a2c-aa05-213890fb387a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.284 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.285 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Ensure instance console log exists: /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.285 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.285 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.286 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:28:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 582 KiB/s wr, 55 op/s
Nov 25 12:28:09 np0005535469 nova_compute[254092]: 2025-11-25 17:28:09.473 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Successfully created port: 78592e85-0e7a-4c36-bf1d-981efc74361b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 12:28:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 25 12:28:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 25 12:28:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 25 12:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:28:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.164 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Successfully updated port: 78592e85-0e7a-4c36-bf1d-981efc74361b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.186 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.187 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.188 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.267 254096 DEBUG nova.compute.manager [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.267 254096 DEBUG nova.compute.manager [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing instance network info cache due to event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 12:28:10 np0005535469 nova_compute[254092]: 2025-11-25 17:28:10.268 254096 DEBUG oslo_concurrency.lockutils [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:28:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 537 KiB/s wr, 88 op/s
Nov 25 12:28:11 np0005535469 nova_compute[254092]: 2025-11-25 17:28:11.373 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:28:12 np0005535469 nova_compute[254092]: 2025-11-25 17:28:12.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:12 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:12.186 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 12:28:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 466 KiB/s wr, 76 op/s
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.474 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.490 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.491 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance network_info: |[{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.491 254096 DEBUG oslo_concurrency.lockutils [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.492 254096 DEBUG nova.network.neutron [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.494 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start _get_guest_xml network_info=[{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T17:27:54Z,direct_url=<?>,disk_format='raw',id=eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-2017666674',owner='d295e1cfcd234c4391fda20fc4264d70',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T17:28:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': 'eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.500 254096 WARNING nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.508 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.509 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.518 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.519 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.520 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.520 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T17:27:54Z,direct_url=<?>,disk_format='raw',id=eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-2017666674',owner='d295e1cfcd234c4391fda20fc4264d70',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T17:28:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.521 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.521 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.522 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.522 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.523 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.523 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.523 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.524 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.524 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.524 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.529 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:28:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:13.668 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:28:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:13.668 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:28:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:13.669 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:28:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:28:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11281694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:28:13 np0005535469 nova_compute[254092]: 2025-11-25 17:28:13.989 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.019 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.025 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:28:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260961190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.462 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.464 254096 DEBUG nova.virt.libvirt.vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-934976337',display_name='tempest-TestSnapshotPattern-server-934976337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-934976337',id=152,image_ref='eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-ryy3wl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d3f09ec-6bad-4674-ab8b-907560448ab0',image_min_disk='1',image_min_ram='0',image_owner_id='d295e1cfcd234c4391fda20fc4264d70',image_owner_project_name='tempest-TestSnapshotPattern-1072505445',image_owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member',image_user_id='e6ab922303af4fc0a70862a72b3ea9c8',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:28:08Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=c983f16
d-bf50-4a2c-aa05-213890fb387a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.465 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.466 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.468 254096 DEBUG nova.objects.instance [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'pci_devices' on Instance uuid c983f16d-bf50-4a2c-aa05-213890fb387a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.486 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <uuid>c983f16d-bf50-4a2c-aa05-213890fb387a</uuid>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <name>instance-00000098</name>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:name>tempest-TestSnapshotPattern-server-934976337</nova:name>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:28:13</nova:creationTime>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:user uuid="e6ab922303af4fc0a70862a72b3ea9c8">tempest-TestSnapshotPattern-1072505445-project-member</nova:user>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:project uuid="d295e1cfcd234c4391fda20fc4264d70">tempest-TestSnapshotPattern-1072505445</nova:project>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <nova:ports>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <nova:port uuid="78592e85-0e7a-4c36-bf1d-981efc74361b">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        </nova:port>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </nova:ports>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <entry name="serial">c983f16d-bf50-4a2c-aa05-213890fb387a</entry>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <entry name="uuid">c983f16d-bf50-4a2c-aa05-213890fb387a</entry>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c983f16d-bf50-4a2c-aa05-213890fb387a_disk">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <interface type="ethernet">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <mac address="fa:16:3e:f0:b0:be"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <mtu size="1442"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <target dev="tap78592e85-0e"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </interface>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/console.log" append="off"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <input type="keyboard" bus="usb"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:28:14 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:28:14 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:28:14 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:28:14 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.488 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Preparing to wait for external event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.489 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.489 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.489 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.491 254096 DEBUG nova.virt.libvirt.vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-934976337',display_name='tempest-TestSnapshotPattern-server-934976337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-934976337',id=152,image_ref='eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-ryy3wl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d3f09ec-6bad-4674-ab8b-907560448ab0',image_min_disk='1',image_min_ram='0',image_owner_id='d295e1cfcd234c4391fda20fc4264d70',image_owner_project_name='tempest-TestSnapshotPattern-1072505445',image_owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member',image_user_id='e6ab922303af4fc0a70862a72b3ea9c8',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:28:08Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uu
id=c983f16d-bf50-4a2c-aa05-213890fb387a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.491 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.492 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.492 254096 DEBUG os_vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.494 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.494 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.500 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.500 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78592e85-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.501 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78592e85-0e, col_values=(('external_ids', {'iface-id': '78592e85-0e7a-4c36-bf1d-981efc74361b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:b0:be', 'vm-uuid': 'c983f16d-bf50-4a2c-aa05-213890fb387a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:14 np0005535469 NetworkManager[48891]: <info>  [1764091694.5045] manager: (tap78592e85-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/678)
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.516 254096 INFO os_vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e')#033[00m
Nov 25 12:28:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.578 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.578 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.579 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No VIF found with MAC fa:16:3e:f0:b0:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.579 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Using config drive#033[00m
Nov 25 12:28:14 np0005535469 nova_compute[254092]: 2025-11-25 17:28:14.600 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:28:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 8.2 KiB/s wr, 52 op/s
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.589 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Creating config drive at /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config#033[00m
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.596 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm3_i7t41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.746 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm3_i7t41" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.780 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.784 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.979 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:28:15 np0005535469 nova_compute[254092]: 2025-11-25 17:28:15.981 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deleting local config drive /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config because it was imported into RBD.#033[00m
Nov 25 12:28:16 np0005535469 kernel: tap78592e85-0e: entered promiscuous mode
Nov 25 12:28:16 np0005535469 NetworkManager[48891]: <info>  [1764091696.0799] manager: (tap78592e85-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/679)
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.082 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:16Z|01625|binding|INFO|Claiming lport 78592e85-0e7a-4c36-bf1d-981efc74361b for this chassis.
Nov 25 12:28:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:16Z|01626|binding|INFO|78592e85-0e7a-4c36-bf1d-981efc74361b: Claiming fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.097 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:b0:be 10.100.0.12'], port_security=['fa:16:3e:f0:b0:be 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c983f16d-bf50-4a2c-aa05-213890fb387a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=78592e85-0e7a-4c36-bf1d-981efc74361b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.098 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 78592e85-0e7a-4c36-bf1d-981efc74361b in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 bound to our chassis#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a1a00fe-6b82-48c5-a534-9040cbe84499#033[00m
Nov 25 12:28:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:16Z|01627|binding|INFO|Setting lport 78592e85-0e7a-4c36-bf1d-981efc74361b ovn-installed in OVS
Nov 25 12:28:16 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:16Z|01628|binding|INFO|Setting lport 78592e85-0e7a-4c36-bf1d-981efc74361b up in Southbound
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.125 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b59d864-c7af-4171-b5b1-25fd3dc6948a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:28:16 np0005535469 systemd-machined[216343]: New machine qemu-186-instance-00000098.
Nov 25 12:28:16 np0005535469 systemd-udevd[430156]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 12:28:16 np0005535469 NetworkManager[48891]: <info>  [1764091696.1541] device (tap78592e85-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 12:28:16 np0005535469 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Nov 25 12:28:16 np0005535469 NetworkManager[48891]: <info>  [1764091696.1553] device (tap78592e85-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.186 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b4023f-b4f1-47ce-87d4-92ad1de34b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.192 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[293519c1-e9b3-4927-bb97-b69a3693411f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.242 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f668f3f-8035-4190-b122-493aa3721fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.271 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf757338-6a79-4d5d-b157-1be6792c71bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430169, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.294 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d282adc2-3e2e-42b6-b701-0e59e1e04421]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810787, 'tstamp': 810787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430170, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810790, 'tstamp': 810790}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430170, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.297 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.303 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1a00fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.303 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.304 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a1a00fe-60, col_values=(('external_ids', {'iface-id': '0a114fd0-0e8c-4ae1-8b45-56c99d3c790e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:28:16 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.304 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.559 254096 DEBUG nova.compute.manager [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.560 254096 DEBUG oslo_concurrency.lockutils [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.561 254096 DEBUG oslo_concurrency.lockutils [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.561 254096 DEBUG oslo_concurrency.lockutils [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.562 254096 DEBUG nova.compute.manager [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Processing event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.682 254096 DEBUG nova.network.neutron [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updated VIF entry in instance network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.683 254096 DEBUG nova.network.neutron [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:28:16 np0005535469 nova_compute[254092]: 2025-11-25 17:28:16.696 254096 DEBUG oslo_concurrency.lockutils [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.313 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091697.3121057, c983f16d-bf50-4a2c-aa05-213890fb387a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.314 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Started (Lifecycle Event)#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.317 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.322 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.326 254096 INFO nova.virt.libvirt.driver [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance spawned successfully.#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.326 254096 INFO nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 8.79 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.327 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.335 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.338 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.361 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.361 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091697.316541, c983f16d-bf50-4a2c-aa05-213890fb387a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.361 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Paused (Lifecycle Event)#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.401 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.413 254096 INFO nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 9.93 seconds to build instance.#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.417 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091697.3210852, c983f16d-bf50-4a2c-aa05-213890fb387a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.417 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.438 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.441 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:28:17 np0005535469 nova_compute[254092]: 2025-11-25 17:28:17.443 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:28:18 np0005535469 nova_compute[254092]: 2025-11-25 17:28:18.622 254096 DEBUG nova.compute.manager [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:28:18 np0005535469 nova_compute[254092]: 2025-11-25 17:28:18.623 254096 DEBUG oslo_concurrency.lockutils [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:28:18 np0005535469 nova_compute[254092]: 2025-11-25 17:28:18.623 254096 DEBUG oslo_concurrency.lockutils [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:28:18 np0005535469 nova_compute[254092]: 2025-11-25 17:28:18.624 254096 DEBUG oslo_concurrency.lockutils [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:28:18 np0005535469 nova_compute[254092]: 2025-11-25 17:28:18.624 254096 DEBUG nova.compute.manager [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] No waiting events found dispatching network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:28:18 np0005535469 nova_compute[254092]: 2025-11-25 17:28:18.625 254096 WARNING nova.compute.manager [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received unexpected event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b for instance with vm_state active and task_state None.#033[00m
Nov 25 12:28:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Nov 25 12:28:19 np0005535469 nova_compute[254092]: 2025-11-25 17:28:19.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 106 op/s
Nov 25 12:28:21 np0005535469 podman[430213]: 2025-11-25 17:28:21.672100287 +0000 UTC m=+0.079790423 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:28:21 np0005535469 podman[430214]: 2025-11-25 17:28:21.673466024 +0000 UTC m=+0.075276060 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:28:21 np0005535469 podman[430215]: 2025-11-25 17:28:21.703958834 +0000 UTC m=+0.101518034 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 12:28:22 np0005535469 nova_compute[254092]: 2025-11-25 17:28:22.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:23 np0005535469 nova_compute[254092]: 2025-11-25 17:28:23.112 254096 DEBUG nova.compute.manager [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:28:23 np0005535469 nova_compute[254092]: 2025-11-25 17:28:23.112 254096 DEBUG nova.compute.manager [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing instance network info cache due to event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:28:23 np0005535469 nova_compute[254092]: 2025-11-25 17:28:23.113 254096 DEBUG oslo_concurrency.lockutils [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:28:23 np0005535469 nova_compute[254092]: 2025-11-25 17:28:23.113 254096 DEBUG oslo_concurrency.lockutils [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:28:23 np0005535469 nova_compute[254092]: 2025-11-25 17:28:23.113 254096 DEBUG nova.network.neutron [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:28:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 12:28:24 np0005535469 nova_compute[254092]: 2025-11-25 17:28:24.253 254096 DEBUG nova.network.neutron [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updated VIF entry in instance network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:28:24 np0005535469 nova_compute[254092]: 2025-11-25 17:28:24.254 254096 DEBUG nova.network.neutron [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:28:24 np0005535469 nova_compute[254092]: 2025-11-25 17:28:24.273 254096 DEBUG oslo_concurrency.lockutils [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:28:24 np0005535469 nova_compute[254092]: 2025-11-25 17:28:24.509 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Nov 25 12:28:27 np0005535469 nova_compute[254092]: 2025-11-25 17:28:27.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 78 op/s
Nov 25 12:28:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 78 op/s
Nov 25 12:28:29 np0005535469 nova_compute[254092]: 2025-11-25 17:28:29.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:29Z|00200|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Nov 25 12:28:29 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:29Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 12:28:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 512 KiB/s wr, 130 op/s
Nov 25 12:28:32 np0005535469 nova_compute[254092]: 2025-11-25 17:28:32.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 500 KiB/s wr, 53 op/s
Nov 25 12:28:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:34Z|00202|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Nov 25 12:28:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:34Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 12:28:34 np0005535469 nova_compute[254092]: 2025-11-25 17:28:34.516 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:34Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 12:28:34 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:34Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 12:28:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 536 KiB/s wr, 54 op/s
Nov 25 12:28:37 np0005535469 nova_compute[254092]: 2025-11-25 17:28:37.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 12:28:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 12:28:39 np0005535469 nova_compute[254092]: 2025-11-25 17:28:39.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:28:40
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes']
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:28:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:28:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 12:28:42 np0005535469 nova_compute[254092]: 2025-11-25 17:28:42.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 47 KiB/s wr, 2 op/s
Nov 25 12:28:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:44 np0005535469 nova_compute[254092]: 2025-11-25 17:28:44.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 47 KiB/s wr, 2 op/s
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:28:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev be3a758c-5b78-4165-95ac-ac25709a50f8 does not exist
Nov 25 12:28:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3748cdcf-4796-4dfa-8139-8556d5c5b7e4 does not exist
Nov 25 12:28:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev bdfbd9ab-5fae-4a39-854b-6d874168acaa does not exist
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:28:47 np0005535469 nova_compute[254092]: 2025-11-25 17:28:47.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:28:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:28:47 np0005535469 podman[430547]: 2025-11-25 17:28:47.880366447 +0000 UTC m=+0.086057664 container create e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:28:47 np0005535469 podman[430547]: 2025-11-25 17:28:47.844274414 +0000 UTC m=+0.049965681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:28:47 np0005535469 systemd[1]: Started libpod-conmon-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope.
Nov 25 12:28:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:28:48 np0005535469 podman[430547]: 2025-11-25 17:28:48.035763996 +0000 UTC m=+0.241455203 container init e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:28:48 np0005535469 podman[430547]: 2025-11-25 17:28:48.050758565 +0000 UTC m=+0.256449792 container start e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 12:28:48 np0005535469 crazy_murdock[430564]: 167 167
Nov 25 12:28:48 np0005535469 systemd[1]: libpod-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope: Deactivated successfully.
Nov 25 12:28:48 np0005535469 conmon[430564]: conmon e3c9bf9c51df3f9b8957 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope/container/memory.events
Nov 25 12:28:48 np0005535469 podman[430547]: 2025-11-25 17:28:48.077297737 +0000 UTC m=+0.282989014 container attach e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:28:48 np0005535469 podman[430547]: 2025-11-25 17:28:48.079450396 +0000 UTC m=+0.285141633 container died e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:28:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a1ab75f076945080c0416fa3013f445bf747952b6c47798cda6845fa3ccfd57-merged.mount: Deactivated successfully.
Nov 25 12:28:48 np0005535469 podman[430547]: 2025-11-25 17:28:48.148041893 +0000 UTC m=+0.353733120 container remove e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:28:48 np0005535469 systemd[1]: libpod-conmon-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope: Deactivated successfully.
Nov 25 12:28:48 np0005535469 podman[430589]: 2025-11-25 17:28:48.39592879 +0000 UTC m=+0.058097283 container create 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:28:48 np0005535469 systemd[1]: Started libpod-conmon-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope.
Nov 25 12:28:48 np0005535469 podman[430589]: 2025-11-25 17:28:48.374582559 +0000 UTC m=+0.036751082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:28:48 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:28:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:48 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:48 np0005535469 podman[430589]: 2025-11-25 17:28:48.506901971 +0000 UTC m=+0.169070534 container init 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:28:48 np0005535469 podman[430589]: 2025-11-25 17:28:48.518782614 +0000 UTC m=+0.180951137 container start 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:28:48 np0005535469 podman[430589]: 2025-11-25 17:28:48.524271704 +0000 UTC m=+0.186440237 container attach 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:28:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 12:28:49 np0005535469 nova_compute[254092]: 2025-11-25 17:28:49.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:28:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:49 np0005535469 nova_compute[254092]: 2025-11-25 17:28:49.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:49 np0005535469 interesting_turing[430606]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:28:49 np0005535469 interesting_turing[430606]: --> relative data size: 1.0
Nov 25 12:28:49 np0005535469 interesting_turing[430606]: --> All data devices are unavailable
Nov 25 12:28:49 np0005535469 systemd[1]: libpod-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope: Deactivated successfully.
Nov 25 12:28:49 np0005535469 systemd[1]: libpod-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope: Consumed 1.150s CPU time.
Nov 25 12:28:49 np0005535469 podman[430589]: 2025-11-25 17:28:49.735205025 +0000 UTC m=+1.397373548 container died 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:28:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c-merged.mount: Deactivated successfully.
Nov 25 12:28:49 np0005535469 podman[430589]: 2025-11-25 17:28:49.803533834 +0000 UTC m=+1.465702327 container remove 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:28:49 np0005535469 systemd[1]: libpod-conmon-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope: Deactivated successfully.
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.65531692 +0000 UTC m=+0.047766912 container create 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:28:50 np0005535469 systemd[1]: Started libpod-conmon-7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e.scope.
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.6365927 +0000 UTC m=+0.029042712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:28:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.753501643 +0000 UTC m=+0.145951655 container init 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.759712521 +0000 UTC m=+0.152162523 container start 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.76409753 +0000 UTC m=+0.156547552 container attach 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:28:50 np0005535469 pensive_blackwell[430806]: 167 167
Nov 25 12:28:50 np0005535469 systemd[1]: libpod-7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e.scope: Deactivated successfully.
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.767763591 +0000 UTC m=+0.160213583 container died 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:28:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-199518b486907e55b5e950abfa182e36c059ad05382cc3bcc5127d64f8f8a69a-merged.mount: Deactivated successfully.
Nov 25 12:28:50 np0005535469 podman[430790]: 2025-11-25 17:28:50.821614456 +0000 UTC m=+0.214064468 container remove 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:28:50 np0005535469 systemd[1]: libpod-conmon-7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e.scope: Deactivated successfully.
Nov 25 12:28:51 np0005535469 podman[430829]: 2025-11-25 17:28:51.072075123 +0000 UTC m=+0.062960744 container create 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:28:51 np0005535469 systemd[1]: Started libpod-conmon-4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36.scope.
Nov 25 12:28:51 np0005535469 podman[430829]: 2025-11-25 17:28:51.050138906 +0000 UTC m=+0.041024577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:28:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:28:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:51 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:51 np0005535469 podman[430829]: 2025-11-25 17:28:51.187710511 +0000 UTC m=+0.178596192 container init 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:28:51 np0005535469 podman[430829]: 2025-11-25 17:28:51.196330446 +0000 UTC m=+0.187216067 container start 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:28:51 np0005535469 podman[430829]: 2025-11-25 17:28:51.198974908 +0000 UTC m=+0.189860579 container attach 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:28:51 np0005535469 nova_compute[254092]: 2025-11-25 17:28:51.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008648972170671355 of space, bias 1.0, pg target 0.2594691651201407 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241139016409456 of space, bias 1.0, pg target 0.4272341704922837 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:28:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:28:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:28:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867709054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.028 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]: {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:    "0": [
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:        {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "devices": [
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "/dev/loop3"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            ],
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_name": "ceph_lv0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_size": "21470642176",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "name": "ceph_lv0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "tags": {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cluster_name": "ceph",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.crush_device_class": "",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.encrypted": "0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osd_id": "0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.type": "block",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.vdo": "0"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            },
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "type": "block",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "vg_name": "ceph_vg0"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:        }
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:    ],
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:    "1": [
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:        {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "devices": [
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "/dev/loop4"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            ],
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_name": "ceph_lv1",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_size": "21470642176",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "name": "ceph_lv1",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "tags": {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cluster_name": "ceph",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.crush_device_class": "",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.encrypted": "0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osd_id": "1",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.type": "block",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.vdo": "0"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            },
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "type": "block",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "vg_name": "ceph_vg1"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:        }
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:    ],
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:    "2": [
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:        {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "devices": [
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "/dev/loop5"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            ],
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_name": "ceph_lv2",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_size": "21470642176",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "name": "ceph_lv2",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "tags": {
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.cluster_name": "ceph",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.crush_device_class": "",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.encrypted": "0",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osd_id": "2",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.type": "block",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:                "ceph.vdo": "0"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            },
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "type": "block",
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:            "vg_name": "ceph_vg2"
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:        }
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]:    ]
Nov 25 12:28:52 np0005535469 sweet_haslett[430847]: }
Nov 25 12:28:52 np0005535469 ovn_controller[153477]: 2025-11-25T17:28:52Z|01629|memory_trim|INFO|Detected inactivity (last active 30030 ms ago): trimming memory
Nov 25 12:28:52 np0005535469 systemd[1]: libpod-4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36.scope: Deactivated successfully.
Nov 25 12:28:52 np0005535469 podman[430829]: 2025-11-25 17:28:52.094539704 +0000 UTC m=+1.085425365 container died 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:28:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a-merged.mount: Deactivated successfully.
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.146 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.147 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.150 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.151 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:52 np0005535469 podman[430829]: 2025-11-25 17:28:52.1722598 +0000 UTC m=+1.163145421 container remove 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:28:52 np0005535469 systemd[1]: libpod-conmon-4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36.scope: Deactivated successfully.
Nov 25 12:28:52 np0005535469 podman[430879]: 2025-11-25 17:28:52.201058613 +0000 UTC m=+0.107445385 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:28:52 np0005535469 podman[430880]: 2025-11-25 17:28:52.220896783 +0000 UTC m=+0.119550324 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 12:28:52 np0005535469 podman[430881]: 2025-11-25 17:28:52.232147599 +0000 UTC m=+0.138325965 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.380 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.383 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3154MB free_disk=59.936397552490234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.383 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.384 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.471 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7d3f09ec-6bad-4674-ab8b-907560448ab0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.471 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance c983f16d-bf50-4a2c-aa05-213890fb387a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.471 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.472 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:28:52 np0005535469 nova_compute[254092]: 2025-11-25 17:28:52.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:28:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:28:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123940767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:28:53 np0005535469 nova_compute[254092]: 2025-11-25 17:28:53.002 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:28:53 np0005535469 nova_compute[254092]: 2025-11-25 17:28:53.012 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:28:53 np0005535469 nova_compute[254092]: 2025-11-25 17:28:53.036 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.051388369 +0000 UTC m=+0.088468429 container create 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:28:53 np0005535469 nova_compute[254092]: 2025-11-25 17:28:53.067 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:28:53 np0005535469 nova_compute[254092]: 2025-11-25 17:28:53.067 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:28:53 np0005535469 systemd[1]: Started libpod-conmon-5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c.scope.
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.013837856 +0000 UTC m=+0.050917976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:28:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.150918598 +0000 UTC m=+0.187998708 container init 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.163774978 +0000 UTC m=+0.200855038 container start 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.168732873 +0000 UTC m=+0.205813003 container attach 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:28:53 np0005535469 relaxed_leavitt[431126]: 167 167
Nov 25 12:28:53 np0005535469 systemd[1]: libpod-5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c.scope: Deactivated successfully.
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.174416598 +0000 UTC m=+0.211496698 container died 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:28:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0409e7a7a222f7077d6284cf1c7e2f6fb8cc3b0dcc5909b5bf928274bc0e9ba0-merged.mount: Deactivated successfully.
Nov 25 12:28:53 np0005535469 podman[431108]: 2025-11-25 17:28:53.228631343 +0000 UTC m=+0.265711413 container remove 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:28:53 np0005535469 systemd[1]: libpod-conmon-5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c.scope: Deactivated successfully.
Nov 25 12:28:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 12:28:53 np0005535469 podman[431148]: 2025-11-25 17:28:53.453712619 +0000 UTC m=+0.057260149 container create 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:28:53 np0005535469 systemd[1]: Started libpod-conmon-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope.
Nov 25 12:28:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:28:53 np0005535469 podman[431148]: 2025-11-25 17:28:53.432058471 +0000 UTC m=+0.035606041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:28:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:28:53 np0005535469 podman[431148]: 2025-11-25 17:28:53.547277257 +0000 UTC m=+0.150824807 container init 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:28:53 np0005535469 podman[431148]: 2025-11-25 17:28:53.556613281 +0000 UTC m=+0.160160821 container start 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:28:53 np0005535469 podman[431148]: 2025-11-25 17:28:53.560346563 +0000 UTC m=+0.163894143 container attach 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:28:54 np0005535469 nova_compute[254092]: 2025-11-25 17:28:54.065 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:28:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:54 np0005535469 trusting_brown[431165]: {
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "osd_id": 1,
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "type": "bluestore"
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:    },
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "osd_id": 2,
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "type": "bluestore"
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:    },
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "osd_id": 0,
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:        "type": "bluestore"
Nov 25 12:28:54 np0005535469 trusting_brown[431165]:    }
Nov 25 12:28:54 np0005535469 trusting_brown[431165]: }
Nov 25 12:28:54 np0005535469 nova_compute[254092]: 2025-11-25 17:28:54.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:28:54 np0005535469 systemd[1]: libpod-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope: Deactivated successfully.
Nov 25 12:28:54 np0005535469 systemd[1]: libpod-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope: Consumed 1.053s CPU time.
Nov 25 12:28:54 np0005535469 conmon[431165]: conmon 2a8e9df723a9fddde62a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope/container/memory.events
Nov 25 12:28:54 np0005535469 podman[431148]: 2025-11-25 17:28:54.600063743 +0000 UTC m=+1.203611273 container died 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:28:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f-merged.mount: Deactivated successfully.
Nov 25 12:28:54 np0005535469 podman[431148]: 2025-11-25 17:28:54.664364183 +0000 UTC m=+1.267911703 container remove 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:28:54 np0005535469 systemd[1]: libpod-conmon-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope: Deactivated successfully.
Nov 25 12:28:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:28:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:28:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:28:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:28:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e7d64f74-97b1-4bf7-9329-2ed56ea96738 does not exist
Nov 25 12:28:54 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a115b60d-1ed1-41cb-bbfa-fc7f85455915 does not exist
Nov 25 12:28:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 3.3 KiB/s wr, 2 op/s
Nov 25 12:28:55 np0005535469 nova_compute[254092]: 2025-11-25 17:28:55.381 254096 DEBUG nova.compute.manager [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 12:28:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:28:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686158814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:28:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:28:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686158814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:28:55 np0005535469 nova_compute[254092]: 2025-11-25 17:28:55.421 254096 INFO nova.compute.manager [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] instance snapshotting
Nov 25 12:28:55 np0005535469 nova_compute[254092]: 2025-11-25 17:28:55.683 254096 INFO nova.virt.libvirt.driver [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Beginning live snapshot process
Nov 25 12:28:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:28:55 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:28:55 np0005535469 nova_compute[254092]: 2025-11-25 17:28:55.854 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(e24d3925bfa3487ab0d43d6acc87a7fd) on rbd image(c983f16d-bf50-4a2c-aa05-213890fb387a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 12:28:56 np0005535469 nova_compute[254092]: 2025-11-25 17:28:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:28:56 np0005535469 nova_compute[254092]: 2025-11-25 17:28:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 12:28:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 25 12:28:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 25 12:28:56 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 25 12:28:56 np0005535469 nova_compute[254092]: 2025-11-25 17:28:56.832 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] cloning vms/c983f16d-bf50-4a2c-aa05-213890fb387a_disk@e24d3925bfa3487ab0d43d6acc87a7fd to images/b74fa3ec-ffe6-4470-93cb-d345ffbadb0e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 25 12:28:56 np0005535469 nova_compute[254092]: 2025-11-25 17:28:56.963 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] flattening images/b74fa3ec-ffe6-4470-93cb-d345ffbadb0e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 25 12:28:57 np0005535469 nova_compute[254092]: 2025-11-25 17:28:57.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 6 op/s
Nov 25 12:28:57 np0005535469 nova_compute[254092]: 2025-11-25 17:28:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:28:57 np0005535469 nova_compute[254092]: 2025-11-25 17:28:57.589 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] removing snapshot(e24d3925bfa3487ab0d43d6acc87a7fd) on rbd image(c983f16d-bf50-4a2c-aa05-213890fb387a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.769104) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737769155, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 2004, "num_deletes": 252, "total_data_size": 3323455, "memory_usage": 3374968, "flush_reason": "Manual Compaction"}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737789757, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 3246834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63882, "largest_seqno": 65885, "table_properties": {"data_size": 3237545, "index_size": 5846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18744, "raw_average_key_size": 20, "raw_value_size": 3219111, "raw_average_value_size": 3483, "num_data_blocks": 259, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091529, "oldest_key_time": 1764091529, "file_creation_time": 1764091737, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 20732 microseconds, and 10943 cpu microseconds.
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.789833) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 3246834 bytes OK
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.789867) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.793936) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.793967) EVENT_LOG_v1 {"time_micros": 1764091737793957, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.793995) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 3314971, prev total WAL file size 3315012, number of live WAL files 2.
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.795757) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(3170KB)], [149(8164KB)]
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737795800, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11607675, "oldest_snapshot_seqno": -1}
Nov 25 12:28:57 np0005535469 nova_compute[254092]: 2025-11-25 17:28:57.815 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(snap) on rbd image(b74fa3ec-ffe6-4470-93cb-d345ffbadb0e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8515 keys, 9871488 bytes, temperature: kUnknown
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737839932, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9871488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9818208, "index_size": 30895, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21317, "raw_key_size": 222853, "raw_average_key_size": 26, "raw_value_size": 9669951, "raw_average_value_size": 1135, "num_data_blocks": 1195, "num_entries": 8515, "num_filter_entries": 8515, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091737, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.840594) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9871488 bytes
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.842576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.4 rd, 223.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 9035, records dropped: 520 output_compression: NoCompression
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.842607) EVENT_LOG_v1 {"time_micros": 1764091737842593, "job": 92, "event": "compaction_finished", "compaction_time_micros": 44232, "compaction_time_cpu_micros": 24105, "output_level": 6, "num_output_files": 1, "total_output_size": 9871488, "num_input_records": 9035, "num_output_records": 8515, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737844140, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737847262, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.795622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:28:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:28:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 25 12:28:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 25 12:28:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 25 12:28:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 11 op/s
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:28:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.863 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.864 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.864 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 12:28:59 np0005535469 nova_compute[254092]: 2025-11-25 17:28:59.864 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:29:00 np0005535469 nova_compute[254092]: 2025-11-25 17:29:00.291 254096 INFO nova.virt.libvirt.driver [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Snapshot image upload complete#033[00m
Nov 25 12:29:00 np0005535469 nova_compute[254092]: 2025-11-25 17:29:00.291 254096 INFO nova.compute.manager [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 4.87 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 25 12:29:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 25 12:29:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 25 12:29:01 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 25 12:29:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 257 op/s
Nov 25 12:29:01 np0005535469 nova_compute[254092]: 2025-11-25 17:29:01.636 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:29:01 np0005535469 nova_compute[254092]: 2025-11-25 17:29:01.651 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:29:01 np0005535469 nova_compute[254092]: 2025-11-25 17:29:01.651 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 12:29:02 np0005535469 nova_compute[254092]: 2025-11-25 17:29:02.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.207 254096 DEBUG nova.compute.manager [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.208 254096 DEBUG nova.compute.manager [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing instance network info cache due to event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.209 254096 DEBUG oslo_concurrency.lockutils [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.209 254096 DEBUG oslo_concurrency.lockutils [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.210 254096 DEBUG nova.network.neutron [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:29:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 15 MiB/s wr, 196 op/s
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.402 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.403 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.403 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.404 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.404 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.406 254096 INFO nova.compute.manager [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Terminating instance#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.409 254096 DEBUG nova.compute.manager [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:29:03 np0005535469 kernel: tap78592e85-0e (unregistering): left promiscuous mode
Nov 25 12:29:03 np0005535469 NetworkManager[48891]: <info>  [1764091743.6723] device (tap78592e85-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:29:03 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:03Z|01630|binding|INFO|Releasing lport 78592e85-0e7a-4c36-bf1d-981efc74361b from this chassis (sb_readonly=0)
Nov 25 12:29:03 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:03Z|01631|binding|INFO|Setting lport 78592e85-0e7a-4c36-bf1d-981efc74361b down in Southbound
Nov 25 12:29:03 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:03Z|01632|binding|INFO|Removing iface tap78592e85-0e ovn-installed in OVS
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.694 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:b0:be 10.100.0.12'], port_security=['fa:16:3e:f0:b0:be 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c983f16d-bf50-4a2c-aa05-213890fb387a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=78592e85-0e7a-4c36-bf1d-981efc74361b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.697 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 78592e85-0e7a-4c36-bf1d-981efc74361b in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 unbound from our chassis#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.698 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a1a00fe-6b82-48c5-a534-9040cbe84499#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.730 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35ba5d6e-e2b9-49d1-aab1-f3e35a747fab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:03 np0005535469 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 12:29:03 np0005535469 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 16.074s CPU time.
Nov 25 12:29:03 np0005535469 systemd-machined[216343]: Machine qemu-186-instance-00000098 terminated.
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.804 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[82770369-fca3-4482-91cf-be4d6914e533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.810 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a9632087-5fb5-427d-b497-36396c556e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.861 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3941c028-ba8a-4113-980c-1547848b0e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.866 254096 INFO nova.virt.libvirt.driver [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance destroyed successfully.#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.867 254096 DEBUG nova.objects.instance [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'resources' on Instance uuid c983f16d-bf50-4a2c-aa05-213890fb387a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.881 254096 DEBUG nova.virt.libvirt.vif [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-934976337',display_name='tempest-TestSnapshotPattern-server-934976337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-934976337',id=152,image_ref='eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:28:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-ryy3wl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d3f09ec-6bad-4674-ab8b-907560448ab0',image_min_disk='1',image_min_ram='0',image_owner_id='d295e1cfcd234c4391fda20fc4264d70',image_owner_project_name='tempest-TestSnapshotPattern-1072505445',image_owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member',image_user_id='e6ab922303af4fc0a70862a72b3ea9c8',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:29:00Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=c983f16d-bf50-4a2c-aa05-213890fb387a,vcpu_mode
l=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.882 254096 DEBUG nova.network.os_vif_util [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.883 254096 DEBUG nova.network.os_vif_util [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.884 254096 DEBUG os_vif [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.887 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78592e85-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.893 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f51aa23-ad01-4952-8954-059cacc73731]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431426, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.900 254096 INFO os_vif [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e')#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[765ec5c0-c85b-4779-b5df-1c6499074cd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810787, 'tstamp': 810787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431428, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810790, 'tstamp': 810790}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431428, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.924 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.930 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1a00fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.930 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.931 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a1a00fe-60, col_values=(('external_ids', {'iface-id': '0a114fd0-0e8c-4ae1-8b45-56c99d3c790e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:03 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.931 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.934 254096 DEBUG nova.compute.manager [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-unplugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.934 254096 DEBUG oslo_concurrency.lockutils [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.934 254096 DEBUG oslo_concurrency.lockutils [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.935 254096 DEBUG oslo_concurrency.lockutils [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.935 254096 DEBUG nova.compute.manager [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] No waiting events found dispatching network-vif-unplugged-78592e85-0e7a-4c36-bf1d-981efc74361b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.935 254096 DEBUG nova.compute.manager [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-unplugged-78592e85-0e7a-4c36-bf1d-981efc74361b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:29:03 np0005535469 nova_compute[254092]: 2025-11-25 17:29:03.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.329 254096 INFO nova.virt.libvirt.driver [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deleting instance files /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a_del#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.330 254096 INFO nova.virt.libvirt.driver [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deletion of /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a_del complete#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.421 254096 INFO nova.compute.manager [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.422 254096 DEBUG oslo.service.loopingcall [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.422 254096 DEBUG nova.compute.manager [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.423 254096 DEBUG nova.network.neutron [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 25 12:29:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 25 12:29:04 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.915 254096 DEBUG nova.network.neutron [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updated VIF entry in instance network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.916 254096 DEBUG nova.network.neutron [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:29:04 np0005535469 nova_compute[254092]: 2025-11-25 17:29:04.942 254096 DEBUG oslo_concurrency.lockutils [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:29:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 266 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 14 MiB/s wr, 201 op/s
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.394 254096 DEBUG nova.network.neutron [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.408 254096 INFO nova.compute.manager [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 0.99 seconds to deallocate network for instance.#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.449 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.450 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.512 254096 DEBUG nova.compute.manager [req-7023a8cc-8127-4d87-945f-95252fc20864 req-7a46357f-63f6-4de1-aa96-e0f384e4d58a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-deleted-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.544 254096 DEBUG oslo_concurrency.processutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.986 254096 DEBUG nova.compute.manager [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.988 254096 DEBUG oslo_concurrency.lockutils [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.989 254096 DEBUG oslo_concurrency.lockutils [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.991 254096 DEBUG oslo_concurrency.lockutils [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.992 254096 DEBUG nova.compute.manager [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] No waiting events found dispatching network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:29:05 np0005535469 nova_compute[254092]: 2025-11-25 17:29:05.992 254096 WARNING nova.compute.manager [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received unexpected event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b for instance with vm_state deleted and task_state None.#033[00m
Nov 25 12:29:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:29:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359872651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:29:06 np0005535469 nova_compute[254092]: 2025-11-25 17:29:06.015 254096 DEBUG oslo_concurrency.processutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:06 np0005535469 nova_compute[254092]: 2025-11-25 17:29:06.022 254096 DEBUG nova.compute.provider_tree [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:29:06 np0005535469 nova_compute[254092]: 2025-11-25 17:29:06.042 254096 DEBUG nova.scheduler.client.report [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:29:06 np0005535469 nova_compute[254092]: 2025-11-25 17:29:06.060 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:06 np0005535469 nova_compute[254092]: 2025-11-25 17:29:06.085 254096 INFO nova.scheduler.client.report [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Deleted allocations for instance c983f16d-bf50-4a2c-aa05-213890fb387a#033[00m
Nov 25 12:29:06 np0005535469 nova_compute[254092]: 2025-11-25 17:29:06.133 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:07 np0005535469 nova_compute[254092]: 2025-11-25 17:29:07.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 221 op/s
Nov 25 12:29:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 25 12:29:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 25 12:29:07 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.879 254096 DEBUG nova.compute.manager [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-changed-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.880 254096 DEBUG nova.compute.manager [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing instance network info cache due to event network-changed-4c9f1115-1d14-4772-b092-e842077e160a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.880 254096 DEBUG oslo_concurrency.lockutils [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.881 254096 DEBUG oslo_concurrency.lockutils [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.881 254096 DEBUG nova.network.neutron [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.905 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.906 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.906 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.907 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.907 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.909 254096 INFO nova.compute.manager [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Terminating instance#033[00m
Nov 25 12:29:08 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.910 254096 DEBUG nova.compute.manager [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 12:29:08 np0005535469 kernel: tap4c9f1115-1d (unregistering): left promiscuous mode
Nov 25 12:29:08 np0005535469 NetworkManager[48891]: <info>  [1764091748.9920] device (tap4c9f1115-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 12:29:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:08Z|01633|binding|INFO|Releasing lport 4c9f1115-1d14-4772-b092-e842077e160a from this chassis (sb_readonly=0)
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:08.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:08Z|01634|binding|INFO|Setting lport 4c9f1115-1d14-4772-b092-e842077e160a down in Southbound
Nov 25 12:29:09 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:09Z|01635|binding|INFO|Removing iface tap4c9f1115-1d ovn-installed in OVS
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.010 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:a2:36 10.100.0.14'], port_security=['fa:16:3e:dc:a2:36 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7d3f09ec-6bad-4674-ab8b-907560448ab0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4c9f1115-1d14-4772-b092-e842077e160a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.011 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4c9f1115-1d14-4772-b092-e842077e160a in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 unbound from our chassis#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.013 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a1a00fe-6b82-48c5-a534-9040cbe84499, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9611eb-f1af-4676-abc3-d5221a42006f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.015 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 namespace which is not needed anymore#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 25 12:29:09 np0005535469 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 17.324s CPU time.
Nov 25 12:29:09 np0005535469 systemd-machined[216343]: Machine qemu-185-instance-00000097 terminated.
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.159 254096 INFO nova.virt.libvirt.driver [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance destroyed successfully.#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.160 254096 DEBUG nova.objects.instance [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'resources' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.173 254096 DEBUG nova.virt.libvirt.vif [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-142394671',display_name='tempest-TestSnapshotPattern-server-142394671',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-142394671',id=151,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:27:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-pb0kf19x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:28:03Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=7d3f09ec-6bad-4674-ab8b-907560448ab0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.174 254096 DEBUG nova.network.os_vif_util [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.175 254096 DEBUG nova.network.os_vif_util [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.175 254096 DEBUG os_vif [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.177 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.177 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c9f1115-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.187 254096 INFO os_vif [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d')#033[00m
Nov 25 12:29:09 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : haproxy version is 2.8.14-c23fe91
Nov 25 12:29:09 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : path to executable is /usr/sbin/haproxy
Nov 25 12:29:09 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [WARNING]  (428675) : Exiting Master process...
Nov 25 12:29:09 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [WARNING]  (428675) : Exiting Master process...
Nov 25 12:29:09 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [ALERT]    (428675) : Current worker (428677) exited with code 143 (Terminated)
Nov 25 12:29:09 np0005535469 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [WARNING]  (428675) : All workers exited. Exiting... (0)
Nov 25 12:29:09 np0005535469 systemd[1]: libpod-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b.scope: Deactivated successfully.
Nov 25 12:29:09 np0005535469 podman[431500]: 2025-11-25 17:29:09.238952354 +0000 UTC m=+0.070062198 container died 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:29:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9bdd69f08ecf9e368f25903ee5b299b22e79808015eca7d99f37cf6be366f960-merged.mount: Deactivated successfully.
Nov 25 12:29:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b-userdata-shm.mount: Deactivated successfully.
Nov 25 12:29:09 np0005535469 podman[431500]: 2025-11-25 17:29:09.308541678 +0000 UTC m=+0.139651552 container cleanup 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:29:09 np0005535469 systemd[1]: libpod-conmon-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b.scope: Deactivated successfully.
Nov 25 12:29:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Nov 25 12:29:09 np0005535469 podman[431548]: 2025-11-25 17:29:09.401364834 +0000 UTC m=+0.056232821 container remove 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.407 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b223ca17-465b-4584-b6b8-4f02f88b1e1e]: (4, ('Tue Nov 25 05:29:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 (251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b)\n251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b\nTue Nov 25 05:29:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 (251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b)\n251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.408 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3af4bb45-b4f8-4d08-80a9-faee206853d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.409 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.454 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 kernel: tap7a1a00fe-60: left promiscuous mode
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.471 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a76c88-6c4a-4418-a380-31d6efa71c22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.494 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62348a13-91d6-44ff-9604-f751c264e591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.495 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f65cb0-b178-44c3-8560-06a458c693ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[725bc97a-9f65-41cd-ab3e-166d639c2a21]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810761, 'reachable_time': 27080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431564, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 systemd[1]: run-netns-ovnmeta\x2d7a1a00fe\x2d6b82\x2d48c5\x2da534\x2d9040cbe84499.mount: Deactivated successfully.
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.514 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.514 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[42f7f3c1-680a-49ff-b5bb-bbfb84aefabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 12:29:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 25 12:29:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 25 12:29:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.635 254096 INFO nova.virt.libvirt.driver [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deleting instance files /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0_del#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.636 254096 INFO nova.virt.libvirt.driver [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deletion of /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0_del complete#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.691 254096 INFO nova.compute.manager [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.692 254096 DEBUG oslo.service.loopingcall [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.692 254096 DEBUG nova.compute.manager [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.692 254096 DEBUG nova.network.neutron [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.724 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:29:09 np0005535469 nova_compute[254092]: 2025-11-25 17:29:09.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:09 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.725 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:29:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.978 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-unplugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.979 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.980 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.980 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.981 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] No waiting events found dispatching network-vif-unplugged-4c9f1115-1d14-4772-b092-e842077e160a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.981 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-unplugged-4c9f1115-1d14-4772-b092-e842077e160a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.982 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.982 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.983 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.983 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.984 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] No waiting events found dispatching network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 12:29:10 np0005535469 nova_compute[254092]: 2025-11-25 17:29:10.984 254096 WARNING nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received unexpected event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a for instance with vm_state active and task_state deleting.#033[00m
Nov 25 12:29:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 8.6 KiB/s wr, 178 op/s
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.449 254096 DEBUG nova.network.neutron [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated VIF entry in instance network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.450 254096 DEBUG nova.network.neutron [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.475 254096 DEBUG oslo_concurrency.lockutils [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.476 254096 DEBUG nova.network.neutron [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.487 254096 INFO nova.compute.manager [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 1.80 seconds to deallocate network for instance.#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.528 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.529 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:11 np0005535469 nova_compute[254092]: 2025-11-25 17:29:11.581 254096 DEBUG oslo_concurrency.processutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:29:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163608115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.026 254096 DEBUG oslo_concurrency.processutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.038 254096 DEBUG nova.compute.provider_tree [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.065 254096 DEBUG nova.scheduler.client.report [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.091 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.123 254096 INFO nova.scheduler.client.report [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Deleted allocations for instance 7d3f09ec-6bad-4674-ab8b-907560448ab0#033[00m
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.177 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:12 np0005535469 nova_compute[254092]: 2025-11-25 17:29:12.215 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:13 np0005535469 nova_compute[254092]: 2025-11-25 17:29:13.087 254096 DEBUG nova.compute.manager [req-0fb21cb3-9df6-41be-8873-77e04dc75d93 req-2fc0bacc-4d8e-408b-aece-c8fe348cea82 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-deleted-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 12:29:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.5 KiB/s wr, 133 op/s
Nov 25 12:29:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:13.669 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:13.669 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:14 np0005535469 nova_compute[254092]: 2025-11-25 17:29:14.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 25 12:29:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 25 12:29:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 25 12:29:14 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:29:14.727 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:29:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 78 op/s
Nov 25 12:29:17 np0005535469 nova_compute[254092]: 2025-11-25 17:29:17.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Nov 25 12:29:17 np0005535469 nova_compute[254092]: 2025-11-25 17:29:17.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:17 np0005535469 nova_compute[254092]: 2025-11-25 17:29:17.874 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:18 np0005535469 nova_compute[254092]: 2025-11-25 17:29:18.863 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091743.861149, c983f16d-bf50-4a2c-aa05-213890fb387a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:29:18 np0005535469 nova_compute[254092]: 2025-11-25 17:29:18.863 254096 INFO nova.compute.manager [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:29:18 np0005535469 nova_compute[254092]: 2025-11-25 17:29:18.901 254096 DEBUG nova.compute.manager [None req-fabedd34-61f2-4cdf-a63c-5e0a3a69495f - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:29:19 np0005535469 nova_compute[254092]: 2025-11-25 17:29:19.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Nov 25 12:29:19 np0005535469 nova_compute[254092]: 2025-11-25 17:29:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:22 np0005535469 nova_compute[254092]: 2025-11-25 17:29:22.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:22 np0005535469 podman[431589]: 2025-11-25 17:29:22.649523658 +0000 UTC m=+0.062824530 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 25 12:29:22 np0005535469 podman[431588]: 2025-11-25 17:29:22.660077066 +0000 UTC m=+0.076018750 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 12:29:22 np0005535469 podman[431590]: 2025-11-25 17:29:22.696084746 +0000 UTC m=+0.107549908 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 12:29:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:24 np0005535469 nova_compute[254092]: 2025-11-25 17:29:24.157 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091749.1553006, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:29:24 np0005535469 nova_compute[254092]: 2025-11-25 17:29:24.157 254096 INFO nova.compute.manager [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:29:24 np0005535469 nova_compute[254092]: 2025-11-25 17:29:24.183 254096 DEBUG nova.compute.manager [None req-2599a26e-1353-4e55-a5ff-af6b3e64d7c2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:29:24 np0005535469 nova_compute[254092]: 2025-11-25 17:29:24.266 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:27 np0005535469 nova_compute[254092]: 2025-11-25 17:29:27.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:29 np0005535469 nova_compute[254092]: 2025-11-25 17:29:29.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.115 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.116 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.129 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.203 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.204 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.215 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.216 254096 INFO nova.compute.claims [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.324 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:29:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261067995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.796 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.802 254096 DEBUG nova.compute.provider_tree [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.815 254096 DEBUG nova.scheduler.client.report [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.834 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.835 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.869 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.870 254096 DEBUG nova.network.neutron [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.885 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.899 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.982 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.984 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 12:29:32 np0005535469 nova_compute[254092]: 2025-11-25 17:29:32.985 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Creating image(s)#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.026 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.069 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.100 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.105 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.191 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.192 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.193 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.193 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.219 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.224 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.363 254096 DEBUG nova.network.neutron [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.364 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.493 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.575 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] resizing rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.687 254096 DEBUG nova.objects.instance [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lazy-loading 'migration_context' on Instance uuid 93a89288-892f-44cf-8e55-4b1f01a9bbb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.704 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.704 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Ensure instance console log exists: /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.705 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.705 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.706 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.708 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.714 254096 WARNING nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.720 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.720 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.724 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.725 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.725 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.725 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.726 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.726 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.728 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.728 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.728 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.729 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 12:29:33 np0005535469 nova_compute[254092]: 2025-11-25 17:29:33.732 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:29:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724810595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.235 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.282 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.290 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 12:29:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028815429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.809 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.813 254096 DEBUG nova.objects.instance [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93a89288-892f-44cf-8e55-4b1f01a9bbb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.842 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <uuid>93a89288-892f-44cf-8e55-4b1f01a9bbb5</uuid>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <name>instance-00000099</name>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <memory>131072</memory>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <vcpu>1</vcpu>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <metadata>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1301247845</nova:name>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:creationTime>2025-11-25 17:29:33</nova:creationTime>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:flavor name="m1.nano">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:memory>128</nova:memory>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:disk>1</nova:disk>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:swap>0</nova:swap>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:vcpus>1</nova:vcpus>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      </nova:flavor>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:owner>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:user uuid="1a8467fdfffc42839788565288728335">tempest-AggregatesAdminTestJSON-1160613454-project-member</nova:user>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <nova:project uuid="8d10d0047c0e447bb9993376e290a416">tempest-AggregatesAdminTestJSON-1160613454</nova:project>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      </nova:owner>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <nova:ports/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </nova:instance>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </metadata>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <sysinfo type="smbios">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <system>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <entry name="manufacturer">RDO</entry>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <entry name="product">OpenStack Compute</entry>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <entry name="serial">93a89288-892f-44cf-8e55-4b1f01a9bbb5</entry>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <entry name="uuid">93a89288-892f-44cf-8e55-4b1f01a9bbb5</entry>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <entry name="family">Virtual Machine</entry>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </system>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </sysinfo>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <os>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <boot dev="hd"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <smbios mode="sysinfo"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </os>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <features>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <acpi/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <apic/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <vmcoreinfo/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </features>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <clock offset="utc">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <timer name="hpet" present="no"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </clock>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <cpu mode="host-model" match="exact">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </cpu>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  <devices>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <disk type="network" device="disk">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <target dev="vda" bus="virtio"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <disk type="network" device="cdrom">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <driver type="raw" cache="none"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <source protocol="rbd" name="vms/93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <host name="192.168.122.100" port="6789"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      </source>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <auth username="openstack">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:        <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      </auth>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <target dev="sda" bus="sata"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </disk>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <serial type="pty">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <log file="/var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/console.log" append="off"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </serial>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <video>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <model type="virtio"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </video>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <input type="tablet" bus="usb"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <rng model="virtio">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <backend model="random">/dev/urandom</backend>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </rng>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <controller type="usb" index="0"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    <memballoon model="virtio">
Nov 25 12:29:34 np0005535469 nova_compute[254092]:      <stats period="10"/>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:    </memballoon>
Nov 25 12:29:34 np0005535469 nova_compute[254092]:  </devices>
Nov 25 12:29:34 np0005535469 nova_compute[254092]: </domain>
Nov 25 12:29:34 np0005535469 nova_compute[254092]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 12:29:34 np0005535469 systemd[1]: Starting dnf makecache...
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.903 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.903 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.904 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Using config drive#033[00m
Nov 25 12:29:34 np0005535469 nova_compute[254092]: 2025-11-25 17:29:34.927 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:35 np0005535469 dnf[431901]: Metadata cache refreshed recently.
Nov 25 12:29:35 np0005535469 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 12:29:35 np0005535469 systemd[1]: Finished dnf makecache.
Nov 25 12:29:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 25 op/s
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.364 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Creating config drive at /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config#033[00m
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.369 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5kvwtnlz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.512 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5kvwtnlz" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.543 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.548 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.761 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:35 np0005535469 nova_compute[254092]: 2025-11-25 17:29:35.763 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deleting local config drive /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config because it was imported into RBD.#033[00m
Nov 25 12:29:35 np0005535469 systemd-machined[216343]: New machine qemu-187-instance-00000099.
Nov 25 12:29:35 np0005535469 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.337 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.341 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.342 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091776.3362472, 93a89288-892f-44cf-8e55-4b1f01a9bbb5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] VM Resumed (Lifecycle Event)#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.352 254096 INFO nova.virt.libvirt.driver [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance spawned successfully.#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.353 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.363 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.370 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.375 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.375 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.376 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.376 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.377 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.378 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.407 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091776.3366756, 93a89288-892f-44cf-8e55-4b1f01a9bbb5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] VM Started (Lifecycle Event)#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.429 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.434 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.441 254096 INFO nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 3.46 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.441 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.453 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.496 254096 INFO nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 4.33 seconds to build instance.#033[00m
Nov 25 12:29:36 np0005535469 nova_compute[254092]: 2025-11-25 17:29:36.510 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:37 np0005535469 nova_compute[254092]: 2025-11-25 17:29:37.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 12:29:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.567 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.569 254096 INFO nova.compute.manager [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Terminating instance
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.570 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "refresh_cache-93a89288-892f-44cf-8e55-4b1f01a9bbb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.570 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquired lock "refresh_cache-93a89288-892f-44cf-8e55-4b1f01a9bbb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.570 254096 DEBUG nova.network.neutron [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.585833) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779585869, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 650, "num_deletes": 259, "total_data_size": 697120, "memory_usage": 709560, "flush_reason": "Manual Compaction"}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779596256, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 689736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65887, "largest_seqno": 66535, "table_properties": {"data_size": 686234, "index_size": 1345, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7909, "raw_average_key_size": 19, "raw_value_size": 679197, "raw_average_value_size": 1640, "num_data_blocks": 60, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091737, "oldest_key_time": 1764091737, "file_creation_time": 1764091779, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 10481 microseconds, and 3119 cpu microseconds.
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.596313) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 689736 bytes OK
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.596333) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.597860) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.597872) EVENT_LOG_v1 {"time_micros": 1764091779597868, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.597889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 693619, prev total WAL file size 693619, number of live WAL files 2.
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.598441) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373636' seq:72057594037927935, type:22 .. '6C6F676D0033303138' seq:0, type:0; will stop at (end)
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(673KB)], [152(9640KB)]
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779598514, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 10561224, "oldest_snapshot_seqno": -1}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8400 keys, 10457021 bytes, temperature: kUnknown
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779660970, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10457021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10403088, "index_size": 31804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 221436, "raw_average_key_size": 26, "raw_value_size": 10255322, "raw_average_value_size": 1220, "num_data_blocks": 1232, "num_entries": 8400, "num_filter_entries": 8400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091779, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.661515) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10457021 bytes
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.663502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.3 rd, 166.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(30.5) write-amplify(15.2) OK, records in: 8929, records dropped: 529 output_compression: NoCompression
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.663536) EVENT_LOG_v1 {"time_micros": 1764091779663521, "job": 94, "event": "compaction_finished", "compaction_time_micros": 62752, "compaction_time_cpu_micros": 37665, "output_level": 6, "num_output_files": 1, "total_output_size": 10457021, "num_input_records": 8929, "num_output_records": 8400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779664421, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779666819, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.598291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:29:39 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:29:39 np0005535469 nova_compute[254092]: 2025-11-25 17:29:39.848 254096 DEBUG nova.network.neutron [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.060 254096 DEBUG nova.network.neutron [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.074 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Releasing lock "refresh_cache-93a89288-892f-44cf-8e55-4b1f01a9bbb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.075 254096 DEBUG nova.compute.manager [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:29:40 np0005535469 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 25 12:29:40 np0005535469 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 4.367s CPU time.
Nov 25 12:29:40 np0005535469 systemd-machined[216343]: Machine qemu-187-instance-00000099 terminated.
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:29:40
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'vms']
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.303 254096 INFO nova.virt.libvirt.driver [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance destroyed successfully.
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.303 254096 DEBUG nova.objects.instance [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lazy-loading 'resources' on Instance uuid 93a89288-892f-44cf-8e55-4b1f01a9bbb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.687 254096 INFO nova.virt.libvirt.driver [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deleting instance files /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5_del
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.688 254096 INFO nova.virt.libvirt.driver [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deletion of /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5_del complete
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.730 254096 INFO nova.compute.manager [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.731 254096 DEBUG oslo.service.loopingcall [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.731 254096 DEBUG nova.compute.manager [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 12:29:40 np0005535469 nova_compute[254092]: 2025-11-25 17:29:40.732 254096 DEBUG nova.network.neutron [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:29:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:29:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 12:29:41 np0005535469 nova_compute[254092]: 2025-11-25 17:29:41.379 254096 DEBUG nova.network.neutron [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 12:29:41 np0005535469 nova_compute[254092]: 2025-11-25 17:29:41.392 254096 DEBUG nova.network.neutron [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 12:29:41 np0005535469 nova_compute[254092]: 2025-11-25 17:29:41.404 254096 INFO nova.compute.manager [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 0.67 seconds to deallocate network for instance.
Nov 25 12:29:41 np0005535469 nova_compute[254092]: 2025-11-25 17:29:41.468 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:29:41 np0005535469 nova_compute[254092]: 2025-11-25 17:29:41.468 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:29:41 np0005535469 nova_compute[254092]: 2025-11-25 17:29:41.536 254096 DEBUG oslo_concurrency.processutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:29:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:29:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617938552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.027 254096 DEBUG oslo_concurrency.processutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.034 254096 DEBUG nova.compute.provider_tree [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.053 254096 DEBUG nova.scheduler.client.report [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.080 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.110 254096 INFO nova.scheduler.client.report [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Deleted allocations for instance 93a89288-892f-44cf-8e55-4b1f01a9bbb5
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.156 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:29:42 np0005535469 nova_compute[254092]: 2025-11-25 17:29:42.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:29:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 12:29:44 np0005535469 nova_compute[254092]: 2025-11-25 17:29:44.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:29:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 12:29:47 np0005535469 nova_compute[254092]: 2025-11-25 17:29:47.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:29:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 214 KiB/s wr, 101 op/s
Nov 25 12:29:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 25 12:29:49 np0005535469 nova_compute[254092]: 2025-11-25 17:29:49.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:29:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:50 np0005535469 nova_compute[254092]: 2025-11-25 17:29:50.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:29:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:29:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:29:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002245299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:29:51 np0005535469 nova_compute[254092]: 2025-11-25 17:29:51.967 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:52 np0005535469 ovn_controller[153477]: 2025-11-25T17:29:52Z|01636|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.182 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.183 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.260 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:29:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/308748755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.684 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.690 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.708 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.726 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:29:52 np0005535469 nova_compute[254092]: 2025-11-25 17:29:52.727 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:29:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 25 12:29:53 np0005535469 podman[432107]: 2025-11-25 17:29:53.676578571 +0000 UTC m=+0.077193372 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:29:53 np0005535469 podman[432106]: 2025-11-25 17:29:53.715790299 +0000 UTC m=+0.116600696 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:29:53 np0005535469 nova_compute[254092]: 2025-11-25 17:29:53.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:53 np0005535469 nova_compute[254092]: 2025-11-25 17:29:53.728 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:53 np0005535469 podman[432108]: 2025-11-25 17:29:53.729519612 +0000 UTC m=+0.128230421 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 12:29:54 np0005535469 nova_compute[254092]: 2025-11-25 17:29:54.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:55 np0005535469 nova_compute[254092]: 2025-11-25 17:29:55.302 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091780.3008893, 93a89288-892f-44cf-8e55-4b1f01a9bbb5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 12:29:55 np0005535469 nova_compute[254092]: 2025-11-25 17:29:55.303 254096 INFO nova.compute.manager [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] VM Stopped (Lifecycle Event)#033[00m
Nov 25 12:29:55 np0005535469 nova_compute[254092]: 2025-11-25 17:29:55.322 254096 DEBUG nova.compute.manager [None req-1913e5e1-d16e-4252-93ff-0af95a0043f2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 12:29:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 25 12:29:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:29:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687164341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:29:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:29:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687164341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:29:55 np0005535469 podman[432345]: 2025-11-25 17:29:55.681791202 +0000 UTC m=+0.074890130 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 12:29:55 np0005535469 podman[432345]: 2025-11-25 17:29:55.802031614 +0000 UTC m=+0.195130542 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:29:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:29:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:29:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:29:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:29:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:29:56 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:29:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b4e2a07c-82e1-4687-be96-0007e7ed57cc does not exist
Nov 25 12:29:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 817bd08e-0a7b-4cb0-8350-5952278f3435 does not exist
Nov 25 12:29:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8e78218d-dd64-4d28-b473-9bca3b5eb555 does not exist
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:29:57 np0005535469 nova_compute[254092]: 2025-11-25 17:29:57.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:57 np0005535469 nova_compute[254092]: 2025-11-25 17:29:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:57 np0005535469 nova_compute[254092]: 2025-11-25 17:29:57.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:29:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.754293974 +0000 UTC m=+0.050262469 container create 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:29:57 np0005535469 systemd[1]: Started libpod-conmon-5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200.scope.
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.734268068 +0000 UTC m=+0.030236593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:29:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.867814353 +0000 UTC m=+0.163782938 container init 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.882874934 +0000 UTC m=+0.178843469 container start 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.88752578 +0000 UTC m=+0.183494385 container attach 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:29:57 np0005535469 stupefied_beaver[432795]: 167 167
Nov 25 12:29:57 np0005535469 systemd[1]: libpod-5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200.scope: Deactivated successfully.
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.894386237 +0000 UTC m=+0.190354742 container died 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:29:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6323b20d2db43485b88fd39d56eb3387641afe3dabbf39302103a07e1fb2c267-merged.mount: Deactivated successfully.
Nov 25 12:29:57 np0005535469 podman[432778]: 2025-11-25 17:29:57.943298358 +0000 UTC m=+0.239266863 container remove 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:29:57 np0005535469 systemd[1]: libpod-conmon-5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200.scope: Deactivated successfully.
Nov 25 12:29:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 25 12:29:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 25 12:29:58 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 25 12:29:58 np0005535469 podman[432817]: 2025-11-25 17:29:58.181878632 +0000 UTC m=+0.069931715 container create fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:29:58 np0005535469 systemd[1]: Started libpod-conmon-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope.
Nov 25 12:29:58 np0005535469 podman[432817]: 2025-11-25 17:29:58.144253158 +0000 UTC m=+0.032306281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:29:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:29:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:29:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:29:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:29:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:29:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:29:58 np0005535469 podman[432817]: 2025-11-25 17:29:58.306420292 +0000 UTC m=+0.194473395 container init fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:29:58 np0005535469 podman[432817]: 2025-11-25 17:29:58.319105907 +0000 UTC m=+0.207158990 container start fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:29:58 np0005535469 podman[432817]: 2025-11-25 17:29:58.323145337 +0000 UTC m=+0.211198460 container attach fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:29:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:29:59 np0005535469 nova_compute[254092]: 2025-11-25 17:29:59.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:29:59 np0005535469 nova_compute[254092]: 2025-11-25 17:29:59.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:59 np0005535469 gifted_hertz[432834]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:29:59 np0005535469 gifted_hertz[432834]: --> relative data size: 1.0
Nov 25 12:29:59 np0005535469 gifted_hertz[432834]: --> All data devices are unavailable
Nov 25 12:29:59 np0005535469 nova_compute[254092]: 2025-11-25 17:29:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:59 np0005535469 nova_compute[254092]: 2025-11-25 17:29:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:29:59 np0005535469 nova_compute[254092]: 2025-11-25 17:29:59.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:29:59 np0005535469 nova_compute[254092]: 2025-11-25 17:29:59.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:29:59 np0005535469 systemd[1]: libpod-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope: Deactivated successfully.
Nov 25 12:29:59 np0005535469 systemd[1]: libpod-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope: Consumed 1.116s CPU time.
Nov 25 12:29:59 np0005535469 podman[432817]: 2025-11-25 17:29:59.521720992 +0000 UTC m=+1.409774105 container died fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:29:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a-merged.mount: Deactivated successfully.
Nov 25 12:29:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:29:59 np0005535469 podman[432817]: 2025-11-25 17:29:59.589547128 +0000 UTC m=+1.477600201 container remove fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:29:59 np0005535469 systemd[1]: libpod-conmon-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope: Deactivated successfully.
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.388966947 +0000 UTC m=+0.041007328 container create 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:30:00 np0005535469 systemd[1]: Started libpod-conmon-92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156.scope.
Nov 25 12:30:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.374858923 +0000 UTC m=+0.026899324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.479926983 +0000 UTC m=+0.131967374 container init 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.487630292 +0000 UTC m=+0.139670673 container start 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.490797568 +0000 UTC m=+0.142837969 container attach 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:30:00 np0005535469 jovial_cerf[433031]: 167 167
Nov 25 12:30:00 np0005535469 systemd[1]: libpod-92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156.scope: Deactivated successfully.
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.498808787 +0000 UTC m=+0.150849218 container died 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 12:30:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e51e4ca6d5c21b47159c306af04fac18fdef4e72c1d0456c5eceeb1cd535aaed-merged.mount: Deactivated successfully.
Nov 25 12:30:00 np0005535469 podman[433016]: 2025-11-25 17:30:00.541312113 +0000 UTC m=+0.193352484 container remove 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:30:00 np0005535469 systemd[1]: libpod-conmon-92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156.scope: Deactivated successfully.
Nov 25 12:30:00 np0005535469 podman[433055]: 2025-11-25 17:30:00.701842754 +0000 UTC m=+0.043666130 container create 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:30:00 np0005535469 systemd[1]: Started libpod-conmon-850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5.scope.
Nov 25 12:30:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:30:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:00 np0005535469 podman[433055]: 2025-11-25 17:30:00.682789255 +0000 UTC m=+0.024612631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:30:00 np0005535469 podman[433055]: 2025-11-25 17:30:00.789865899 +0000 UTC m=+0.131689285 container init 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:30:00 np0005535469 podman[433055]: 2025-11-25 17:30:00.796706776 +0000 UTC m=+0.138530132 container start 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:30:00 np0005535469 podman[433055]: 2025-11-25 17:30:00.801038833 +0000 UTC m=+0.142862209 container attach 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:30:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]: {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:    "0": [
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:        {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "devices": [
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "/dev/loop3"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            ],
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_name": "ceph_lv0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_size": "21470642176",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "name": "ceph_lv0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "tags": {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cluster_name": "ceph",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.crush_device_class": "",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.encrypted": "0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osd_id": "0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.type": "block",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.vdo": "0"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            },
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "type": "block",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "vg_name": "ceph_vg0"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:        }
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:    ],
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:    "1": [
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:        {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "devices": [
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "/dev/loop4"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            ],
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_name": "ceph_lv1",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_size": "21470642176",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "name": "ceph_lv1",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "tags": {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cluster_name": "ceph",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.crush_device_class": "",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.encrypted": "0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osd_id": "1",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.type": "block",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.vdo": "0"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            },
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "type": "block",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "vg_name": "ceph_vg1"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:        }
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:    ],
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:    "2": [
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:        {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "devices": [
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "/dev/loop5"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            ],
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_name": "ceph_lv2",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_size": "21470642176",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "name": "ceph_lv2",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "tags": {
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.cluster_name": "ceph",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.crush_device_class": "",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.encrypted": "0",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osd_id": "2",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.type": "block",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:                "ceph.vdo": "0"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            },
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "type": "block",
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:            "vg_name": "ceph_vg2"
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:        }
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]:    ]
Nov 25 12:30:01 np0005535469 reverent_ishizaka[433072]: }
Nov 25 12:30:01 np0005535469 systemd[1]: libpod-850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5.scope: Deactivated successfully.
Nov 25 12:30:01 np0005535469 podman[433055]: 2025-11-25 17:30:01.556556318 +0000 UTC m=+0.898379694 container died 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:30:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33-merged.mount: Deactivated successfully.
Nov 25 12:30:01 np0005535469 podman[433055]: 2025-11-25 17:30:01.616843469 +0000 UTC m=+0.958666825 container remove 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:30:01 np0005535469 systemd[1]: libpod-conmon-850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5.scope: Deactivated successfully.
Nov 25 12:30:02 np0005535469 nova_compute[254092]: 2025-11-25 17:30:02.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.415509369 +0000 UTC m=+0.069742320 container create f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:30:02 np0005535469 systemd[1]: Started libpod-conmon-f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a.scope.
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.39208691 +0000 UTC m=+0.046319891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:30:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.525121792 +0000 UTC m=+0.179355353 container init f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.53349348 +0000 UTC m=+0.187726441 container start f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.537447387 +0000 UTC m=+0.191680368 container attach f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:30:02 np0005535469 modest_khayyam[433250]: 167 167
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.541461526 +0000 UTC m=+0.195694507 container died f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:30:02 np0005535469 systemd[1]: libpod-f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a.scope: Deactivated successfully.
Nov 25 12:30:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d967cef0342e2fe356fb55e271ad141c67696e9bd1551a48bb77fb5f2cce5613-merged.mount: Deactivated successfully.
Nov 25 12:30:02 np0005535469 podman[433234]: 2025-11-25 17:30:02.592418664 +0000 UTC m=+0.246651615 container remove f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:30:02 np0005535469 systemd[1]: libpod-conmon-f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a.scope: Deactivated successfully.
Nov 25 12:30:02 np0005535469 podman[433274]: 2025-11-25 17:30:02.848844963 +0000 UTC m=+0.076630077 container create ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:30:02 np0005535469 podman[433274]: 2025-11-25 17:30:02.820596854 +0000 UTC m=+0.048382058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:30:02 np0005535469 systemd[1]: Started libpod-conmon-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope.
Nov 25 12:30:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:30:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:30:02 np0005535469 podman[433274]: 2025-11-25 17:30:02.976329833 +0000 UTC m=+0.204114997 container init ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 12:30:02 np0005535469 podman[433274]: 2025-11-25 17:30:02.987091147 +0000 UTC m=+0.214876251 container start ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 12:30:02 np0005535469 podman[433274]: 2025-11-25 17:30:02.990423497 +0000 UTC m=+0.218208681 container attach ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:30:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 12:30:03 np0005535469 nova_compute[254092]: 2025-11-25 17:30:03.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:04 np0005535469 modest_wing[433291]: {
Nov 25 12:30:04 np0005535469 modest_wing[433291]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "osd_id": 1,
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "type": "bluestore"
Nov 25 12:30:04 np0005535469 modest_wing[433291]:    },
Nov 25 12:30:04 np0005535469 modest_wing[433291]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "osd_id": 2,
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "type": "bluestore"
Nov 25 12:30:04 np0005535469 modest_wing[433291]:    },
Nov 25 12:30:04 np0005535469 modest_wing[433291]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "osd_id": 0,
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:30:04 np0005535469 modest_wing[433291]:        "type": "bluestore"
Nov 25 12:30:04 np0005535469 modest_wing[433291]:    }
Nov 25 12:30:04 np0005535469 modest_wing[433291]: }
Nov 25 12:30:04 np0005535469 systemd[1]: libpod-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope: Deactivated successfully.
Nov 25 12:30:04 np0005535469 systemd[1]: libpod-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope: Consumed 1.090s CPU time.
Nov 25 12:30:04 np0005535469 podman[433274]: 2025-11-25 17:30:04.06735108 +0000 UTC m=+1.295136194 container died ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:30:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339-merged.mount: Deactivated successfully.
Nov 25 12:30:04 np0005535469 podman[433274]: 2025-11-25 17:30:04.151404718 +0000 UTC m=+1.379189832 container remove ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:30:04 np0005535469 systemd[1]: libpod-conmon-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope: Deactivated successfully.
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:30:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e2f53b5a-94b2-4cea-8a5e-826a1bf8273c does not exist
Nov 25 12:30:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d7a9b2b1-a8fc-4894-99aa-85e93284cff4 does not exist
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:30:04 np0005535469 nova_compute[254092]: 2025-11-25 17:30:04.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 25 12:30:04 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 25 12:30:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 25 12:30:05 np0005535469 nova_compute[254092]: 2025-11-25 17:30:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:07 np0005535469 nova_compute[254092]: 2025-11-25 17:30:07.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 25 12:30:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 12:30:09 np0005535469 nova_compute[254092]: 2025-11-25 17:30:09.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:30:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:30:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:12 np0005535469 nova_compute[254092]: 2025-11-25 17:30:12.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:30:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:30:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:30:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:30:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:30:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:30:14 np0005535469 nova_compute[254092]: 2025-11-25 17:30:14.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:17 np0005535469 nova_compute[254092]: 2025-11-25 17:30:17.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:19 np0005535469 nova_compute[254092]: 2025-11-25 17:30:19.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:22 np0005535469 nova_compute[254092]: 2025-11-25 17:30:22.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:24 np0005535469 nova_compute[254092]: 2025-11-25 17:30:24.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:24 np0005535469 podman[433389]: 2025-11-25 17:30:24.71771284 +0000 UTC m=+0.116913113 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:30:24 np0005535469 podman[433388]: 2025-11-25 17:30:24.729599904 +0000 UTC m=+0.134701948 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:30:24 np0005535469 podman[433390]: 2025-11-25 17:30:24.776558962 +0000 UTC m=+0.172072295 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 12:30:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:27 np0005535469 nova_compute[254092]: 2025-11-25 17:30:27.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:29 np0005535469 nova_compute[254092]: 2025-11-25 17:30:29.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:32 np0005535469 nova_compute[254092]: 2025-11-25 17:30:32.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:34 np0005535469 nova_compute[254092]: 2025-11-25 17:30:34.568 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:37 np0005535469 nova_compute[254092]: 2025-11-25 17:30:37.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:39 np0005535469 nova_compute[254092]: 2025-11-25 17:30:39.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:30:40
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images']
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:30:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:30:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:42 np0005535469 nova_compute[254092]: 2025-11-25 17:30:42.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:44 np0005535469 nova_compute[254092]: 2025-11-25 17:30:44.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:47 np0005535469 nova_compute[254092]: 2025-11-25 17:30:47.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:49 np0005535469 nova_compute[254092]: 2025-11-25 17:30:49.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:30:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:30:52 np0005535469 nova_compute[254092]: 2025-11-25 17:30:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:52 np0005535469 nova_compute[254092]: 2025-11-25 17:30:52.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:53 np0005535469 nova_compute[254092]: 2025-11-25 17:30:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:53 np0005535469 nova_compute[254092]: 2025-11-25 17:30:53.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:30:53 np0005535469 nova_compute[254092]: 2025-11-25 17:30:53.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:30:53 np0005535469 nova_compute[254092]: 2025-11-25 17:30:53.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:30:53 np0005535469 nova_compute[254092]: 2025-11-25 17:30:53.528 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:30:53 np0005535469 nova_compute[254092]: 2025-11-25 17:30:53.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:30:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:30:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336401234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.037 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.314 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.316 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.316 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:30:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.603 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.604 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.749 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.900 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.900 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.923 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.946 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:30:54 np0005535469 nova_compute[254092]: 2025-11-25 17:30:54.965 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:30:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:30:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11922419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:30:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:30:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11922419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:30:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:30:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717963504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:30:55 np0005535469 nova_compute[254092]: 2025-11-25 17:30:55.481 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:30:55 np0005535469 nova_compute[254092]: 2025-11-25 17:30:55.489 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:30:55 np0005535469 nova_compute[254092]: 2025-11-25 17:30:55.508 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:30:55 np0005535469 nova_compute[254092]: 2025-11-25 17:30:55.510 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:30:55 np0005535469 nova_compute[254092]: 2025-11-25 17:30:55.510 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:30:55 np0005535469 podman[433496]: 2025-11-25 17:30:55.68366179 +0000 UTC m=+0.079474304 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:30:55 np0005535469 podman[433495]: 2025-11-25 17:30:55.692051109 +0000 UTC m=+0.091998595 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 12:30:55 np0005535469 podman[433497]: 2025-11-25 17:30:55.714938582 +0000 UTC m=+0.104340331 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:30:56 np0005535469 nova_compute[254092]: 2025-11-25 17:30:56.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:56 np0005535469 nova_compute[254092]: 2025-11-25 17:30:56.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:57 np0005535469 nova_compute[254092]: 2025-11-25 17:30:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:30:57 np0005535469 nova_compute[254092]: 2025-11-25 17:30:57.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:30:57 np0005535469 nova_compute[254092]: 2025-11-25 17:30:57.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:30:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:30:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:30:59 np0005535469 nova_compute[254092]: 2025-11-25 17:30:59.753 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:00 np0005535469 nova_compute[254092]: 2025-11-25 17:31:00.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:01 np0005535469 nova_compute[254092]: 2025-11-25 17:31:01.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:01 np0005535469 nova_compute[254092]: 2025-11-25 17:31:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:01 np0005535469 nova_compute[254092]: 2025-11-25 17:31:01.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:31:01 np0005535469 nova_compute[254092]: 2025-11-25 17:31:01.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:31:01 np0005535469 nova_compute[254092]: 2025-11-25 17:31:01.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:31:02 np0005535469 nova_compute[254092]: 2025-11-25 17:31:02.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:04 np0005535469 nova_compute[254092]: 2025-11-25 17:31:04.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:31:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c42811f7-e9f8-461c-a505-fab691245d6b does not exist
Nov 25 12:31:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 501ee60f-edb5-4ca9-949f-b705dea6d416 does not exist
Nov 25 12:31:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d3832470-cbdd-4cbf-b531-c3a38b9a7faf does not exist
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:31:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.339090391 +0000 UTC m=+0.067797856 container create 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:31:06 np0005535469 systemd[1]: Started libpod-conmon-7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d.scope.
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.308795077 +0000 UTC m=+0.037502602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:31:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.448195371 +0000 UTC m=+0.176902886 container init 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.461863383 +0000 UTC m=+0.190570838 container start 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.466188121 +0000 UTC m=+0.194895596 container attach 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:31:06 np0005535469 quirky_leakey[433847]: 167 167
Nov 25 12:31:06 np0005535469 systemd[1]: libpod-7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d.scope: Deactivated successfully.
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.471554597 +0000 UTC m=+0.200262102 container died 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:31:06 np0005535469 nova_compute[254092]: 2025-11-25 17:31:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6c667413d1bb319dc28b41a4989350a538e72a7ea1600870fd7c55a7d6ce2e0e-merged.mount: Deactivated successfully.
Nov 25 12:31:06 np0005535469 podman[433831]: 2025-11-25 17:31:06.535480347 +0000 UTC m=+0.264187812 container remove 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:31:06 np0005535469 systemd[1]: libpod-conmon-7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d.scope: Deactivated successfully.
Nov 25 12:31:06 np0005535469 podman[433869]: 2025-11-25 17:31:06.80929438 +0000 UTC m=+0.073857591 container create b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:31:06 np0005535469 systemd[1]: Started libpod-conmon-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope.
Nov 25 12:31:06 np0005535469 podman[433869]: 2025-11-25 17:31:06.782400998 +0000 UTC m=+0.046964259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:31:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:31:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:06 np0005535469 podman[433869]: 2025-11-25 17:31:06.943250836 +0000 UTC m=+0.207814087 container init b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:31:06 np0005535469 podman[433869]: 2025-11-25 17:31:06.964243777 +0000 UTC m=+0.228806998 container start b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:31:06 np0005535469 podman[433869]: 2025-11-25 17:31:06.968819522 +0000 UTC m=+0.233382743 container attach b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:31:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:07 np0005535469 nova_compute[254092]: 2025-11-25 17:31:07.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:08 np0005535469 blissful_raman[433885]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:31:08 np0005535469 blissful_raman[433885]: --> relative data size: 1.0
Nov 25 12:31:08 np0005535469 blissful_raman[433885]: --> All data devices are unavailable
Nov 25 12:31:08 np0005535469 systemd[1]: libpod-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope: Deactivated successfully.
Nov 25 12:31:08 np0005535469 systemd[1]: libpod-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope: Consumed 1.231s CPU time.
Nov 25 12:31:08 np0005535469 podman[433869]: 2025-11-25 17:31:08.230207596 +0000 UTC m=+1.494770807 container died b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:31:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d-merged.mount: Deactivated successfully.
Nov 25 12:31:08 np0005535469 podman[433869]: 2025-11-25 17:31:08.312338201 +0000 UTC m=+1.576901392 container remove b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 12:31:08 np0005535469 systemd[1]: libpod-conmon-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope: Deactivated successfully.
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.249181552 +0000 UTC m=+0.070309865 container create f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:31:09 np0005535469 systemd[1]: Started libpod-conmon-f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979.scope.
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.22300891 +0000 UTC m=+0.044137293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:31:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.355926307 +0000 UTC m=+0.177054640 container init f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.367076341 +0000 UTC m=+0.188204664 container start f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.372711044 +0000 UTC m=+0.193839367 container attach f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:31:09 np0005535469 elastic_chebyshev[434082]: 167 167
Nov 25 12:31:09 np0005535469 systemd[1]: libpod-f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979.scope: Deactivated successfully.
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.376049995 +0000 UTC m=+0.197178318 container died f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:31:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-39913a41e03ae972fc0f4796ecdcba03d52c45b54517cb395f499ce27d33acf1-merged.mount: Deactivated successfully.
Nov 25 12:31:09 np0005535469 podman[434066]: 2025-11-25 17:31:09.438566716 +0000 UTC m=+0.259695029 container remove f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:31:09 np0005535469 systemd[1]: libpod-conmon-f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979.scope: Deactivated successfully.
Nov 25 12:31:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:09 np0005535469 podman[434107]: 2025-11-25 17:31:09.701905064 +0000 UTC m=+0.084436589 container create fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:31:09 np0005535469 podman[434107]: 2025-11-25 17:31:09.669938885 +0000 UTC m=+0.052470470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:31:09 np0005535469 nova_compute[254092]: 2025-11-25 17:31:09.758 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:09 np0005535469 systemd[1]: Started libpod-conmon-fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df.scope.
Nov 25 12:31:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:31:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:09 np0005535469 podman[434107]: 2025-11-25 17:31:09.840198439 +0000 UTC m=+0.222729994 container init fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:31:09 np0005535469 podman[434107]: 2025-11-25 17:31:09.851523707 +0000 UTC m=+0.234055222 container start fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:31:09 np0005535469 podman[434107]: 2025-11-25 17:31:09.855993278 +0000 UTC m=+0.238524863 container attach fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:31:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]: {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:    "0": [
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:        {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "devices": [
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "/dev/loop3"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            ],
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_name": "ceph_lv0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_size": "21470642176",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "name": "ceph_lv0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "tags": {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cluster_name": "ceph",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.crush_device_class": "",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.encrypted": "0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osd_id": "0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.type": "block",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.vdo": "0"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            },
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "type": "block",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "vg_name": "ceph_vg0"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:        }
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:    ],
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:    "1": [
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:        {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "devices": [
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "/dev/loop4"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            ],
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_name": "ceph_lv1",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_size": "21470642176",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "name": "ceph_lv1",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "tags": {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cluster_name": "ceph",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.crush_device_class": "",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.encrypted": "0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osd_id": "1",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.type": "block",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.vdo": "0"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            },
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "type": "block",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "vg_name": "ceph_vg1"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:        }
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:    ],
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:    "2": [
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:        {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "devices": [
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "/dev/loop5"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            ],
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_name": "ceph_lv2",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_size": "21470642176",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "name": "ceph_lv2",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "tags": {
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.cluster_name": "ceph",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.crush_device_class": "",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.encrypted": "0",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osd_id": "2",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.type": "block",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:                "ceph.vdo": "0"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            },
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "type": "block",
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:            "vg_name": "ceph_vg2"
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:        }
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]:    ]
Nov 25 12:31:10 np0005535469 hopeful_sanderson[434124]: }
Nov 25 12:31:10 np0005535469 systemd[1]: libpod-fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df.scope: Deactivated successfully.
Nov 25 12:31:10 np0005535469 podman[434107]: 2025-11-25 17:31:10.678681602 +0000 UTC m=+1.061213167 container died fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:31:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12-merged.mount: Deactivated successfully.
Nov 25 12:31:10 np0005535469 podman[434107]: 2025-11-25 17:31:10.761321551 +0000 UTC m=+1.143853036 container remove fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:31:10 np0005535469 systemd[1]: libpod-conmon-fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df.scope: Deactivated successfully.
Nov 25 12:31:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.586726148 +0000 UTC m=+0.060271391 container create 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:31:11 np0005535469 systemd[1]: Started libpod-conmon-86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638.scope.
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.562880729 +0000 UTC m=+0.036426082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:31:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.696392483 +0000 UTC m=+0.169937766 container init 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.70729618 +0000 UTC m=+0.180841423 container start 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.711934166 +0000 UTC m=+0.185479479 container attach 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:31:11 np0005535469 affectionate_proskuriakova[434305]: 167 167
Nov 25 12:31:11 np0005535469 systemd[1]: libpod-86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638.scope: Deactivated successfully.
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.716773468 +0000 UTC m=+0.190318771 container died 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:31:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6de9c67cc3ca49a5d97f31047f6e5edd01f83218c1251a704f0dc61838b49862-merged.mount: Deactivated successfully.
Nov 25 12:31:11 np0005535469 podman[434288]: 2025-11-25 17:31:11.760029905 +0000 UTC m=+0.233575148 container remove 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:31:11 np0005535469 systemd[1]: libpod-conmon-86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638.scope: Deactivated successfully.
Nov 25 12:31:12 np0005535469 podman[434329]: 2025-11-25 17:31:12.017615526 +0000 UTC m=+0.088672425 container create e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:31:12 np0005535469 podman[434329]: 2025-11-25 17:31:11.975244923 +0000 UTC m=+0.046301862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:31:12 np0005535469 systemd[1]: Started libpod-conmon-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope.
Nov 25 12:31:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:31:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:31:12 np0005535469 podman[434329]: 2025-11-25 17:31:12.150785212 +0000 UTC m=+0.221842171 container init e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:31:12 np0005535469 podman[434329]: 2025-11-25 17:31:12.167351393 +0000 UTC m=+0.238408292 container start e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:31:12 np0005535469 podman[434329]: 2025-11-25 17:31:12.182762062 +0000 UTC m=+0.253818941 container attach e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:31:12 np0005535469 nova_compute[254092]: 2025-11-25 17:31:12.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]: {
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "osd_id": 1,
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "type": "bluestore"
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:    },
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "osd_id": 2,
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "type": "bluestore"
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:    },
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "osd_id": 0,
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:        "type": "bluestore"
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]:    }
Nov 25 12:31:13 np0005535469 pensive_mirzakhani[434346]: }
Nov 25 12:31:13 np0005535469 systemd[1]: libpod-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope: Deactivated successfully.
Nov 25 12:31:13 np0005535469 systemd[1]: libpod-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope: Consumed 1.206s CPU time.
Nov 25 12:31:13 np0005535469 podman[434329]: 2025-11-25 17:31:13.377396349 +0000 UTC m=+1.448453248 container died e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:31:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1-merged.mount: Deactivated successfully.
Nov 25 12:31:13 np0005535469 podman[434329]: 2025-11-25 17:31:13.463153323 +0000 UTC m=+1.534210222 container remove e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:31:13 np0005535469 systemd[1]: libpod-conmon-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope: Deactivated successfully.
Nov 25 12:31:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:31:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:31:13 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:31:13 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:31:13 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d7a00c30-e265-462d-a410-88d1001ef831 does not exist
Nov 25 12:31:13 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2b6ba937-46c8-41d6-a807-41512cccd881 does not exist
Nov 25 12:31:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:31:13.672 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:31:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:31:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:31:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:31:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:31:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:31:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:31:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:14 np0005535469 nova_compute[254092]: 2025-11-25 17:31:14.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:17 np0005535469 nova_compute[254092]: 2025-11-25 17:31:17.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:19 np0005535469 nova_compute[254092]: 2025-11-25 17:31:19.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:22 np0005535469 nova_compute[254092]: 2025-11-25 17:31:22.675 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:24 np0005535469 nova_compute[254092]: 2025-11-25 17:31:24.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:26 np0005535469 podman[434443]: 2025-11-25 17:31:26.690598981 +0000 UTC m=+0.088129140 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:31:26 np0005535469 podman[434442]: 2025-11-25 17:31:26.727509166 +0000 UTC m=+0.123739419 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:31:26 np0005535469 podman[434444]: 2025-11-25 17:31:26.776629573 +0000 UTC m=+0.160933582 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:31:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:27 np0005535469 nova_compute[254092]: 2025-11-25 17:31:27.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:29 np0005535469 nova_compute[254092]: 2025-11-25 17:31:29.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:32 np0005535469 nova_compute[254092]: 2025-11-25 17:31:32.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:34 np0005535469 nova_compute[254092]: 2025-11-25 17:31:34.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:35 np0005535469 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 12:31:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:31:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1392 writes, 6544 keys, 1392 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s#012Interval WAL: 1392 writes, 1392 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.8      2.30              0.27        47    0.049       0      0       0.0       0.0#012  L6      1/0    9.97 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.8    121.4    102.6      3.74              1.12        46    0.081    306K    25K       0.0       0.0#012 Sum      1/0    9.97 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.8     75.2     76.8      6.03              1.40        93    0.065    306K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6    135.0    139.0      0.49              0.25        12    0.041     52K   3113       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    121.4    102.6      3.74              1.12        46    0.081    306K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.5      2.25              0.27        46    0.049       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 600.0 interval#012Flush(GB): cumulative 0.078, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.45 GB write, 0.08 MB/s write, 0.44 GB read, 0.08 MB/s read, 6.0 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 52.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000353 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3412,50.53 MB,16.6222%) FilterBlock(94,849.67 KB,0.272947%) IndexBlock(94,1.35 MB,0.444688%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 12:31:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:37 np0005535469 nova_compute[254092]: 2025-11-25 17:31:37.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:39 np0005535469 nova_compute[254092]: 2025-11-25 17:31:39.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:31:40
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control']
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:31:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:31:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:42 np0005535469 nova_compute[254092]: 2025-11-25 17:31:42.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:44 np0005535469 nova_compute[254092]: 2025-11-25 17:31:44.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:47 np0005535469 nova_compute[254092]: 2025-11-25 17:31:47.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:49 np0005535469 nova_compute[254092]: 2025-11-25 17:31:49.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:31:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:31:52 np0005535469 nova_compute[254092]: 2025-11-25 17:31:52.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.708 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.708 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.709 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.709 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:31:53 np0005535469 nova_compute[254092]: 2025-11-25 17:31:53.710 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:31:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:31:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723494631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.184 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.413 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.415 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3642MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.415 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.415 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.511 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.512 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:31:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:54 np0005535469 nova_compute[254092]: 2025-11-25 17:31:54.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:31:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956116899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:31:55 np0005535469 nova_compute[254092]: 2025-11-25 17:31:55.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:31:55 np0005535469 nova_compute[254092]: 2025-11-25 17:31:55.015 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:31:55 np0005535469 nova_compute[254092]: 2025-11-25 17:31:55.034 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:31:55 np0005535469 nova_compute[254092]: 2025-11-25 17:31:55.036 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:31:55 np0005535469 nova_compute[254092]: 2025-11-25 17:31:55.036 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:31:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 12:31:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:31:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2117747339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:31:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:31:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2117747339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:31:56 np0005535469 nova_compute[254092]: 2025-11-25 17:31:56.037 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:56 np0005535469 nova_compute[254092]: 2025-11-25 17:31:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 12:31:57 np0005535469 nova_compute[254092]: 2025-11-25 17:31:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:31:57 np0005535469 nova_compute[254092]: 2025-11-25 17:31:57.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:31:57 np0005535469 podman[434550]: 2025-11-25 17:31:57.659869306 +0000 UTC m=+0.075420724 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:31:57 np0005535469 podman[434551]: 2025-11-25 17:31:57.661261924 +0000 UTC m=+0.072661130 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:31:57 np0005535469 nova_compute[254092]: 2025-11-25 17:31:57.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:31:57 np0005535469 podman[434552]: 2025-11-25 17:31:57.720732283 +0000 UTC m=+0.131958344 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 12:31:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 12:31:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:31:59 np0005535469 nova_compute[254092]: 2025-11-25 17:31:59.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:00 np0005535469 nova_compute[254092]: 2025-11-25 17:32:00.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 12:32:02 np0005535469 nova_compute[254092]: 2025-11-25 17:32:02.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:02 np0005535469 nova_compute[254092]: 2025-11-25 17:32:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:02 np0005535469 nova_compute[254092]: 2025-11-25 17:32:02.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:32:02 np0005535469 nova_compute[254092]: 2025-11-25 17:32:02.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:32:02 np0005535469 nova_compute[254092]: 2025-11-25 17:32:02.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:32:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 25 12:32:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 25 12:32:02 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 25 12:32:02 np0005535469 nova_compute[254092]: 2025-11-25 17:32:02.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 25 12:32:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:05 np0005535469 nova_compute[254092]: 2025-11-25 17:32:05.001 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Nov 25 12:32:06 np0005535469 nova_compute[254092]: 2025-11-25 17:32:06.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 25 12:32:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 25 12:32:06 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 25 12:32:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 25 12:32:07 np0005535469 nova_compute[254092]: 2025-11-25 17:32:07.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:08 np0005535469 nova_compute[254092]: 2025-11-25 17:32:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 25 12:32:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 25 12:32:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 25 12:32:09 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 25 12:32:10 np0005535469 nova_compute[254092]: 2025-11-25 17:32:10.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:32:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:32:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 25 12:32:12 np0005535469 nova_compute[254092]: 2025-11-25 17:32:12.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 37 op/s
Nov 25 12:32:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:32:13.672 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:32:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:32:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:32:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:32:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:32:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b4c8bc2d-ae81-49c5-80d7-4e3c6d92247f does not exist
Nov 25 12:32:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8a97e76b-5fba-49a4-b865-a328be8c0d9a does not exist
Nov 25 12:32:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 53fffaab-87d6-4c85-a9a7-e365d792d32e does not exist
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:32:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:32:15 np0005535469 nova_compute[254092]: 2025-11-25 17:32:15.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.561393234 +0000 UTC m=+0.064390124 container create 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:32:15 np0005535469 systemd[1]: Started libpod-conmon-5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b.scope.
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.530680268 +0000 UTC m=+0.033677208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:32:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:32:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:32:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:32:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.686023326 +0000 UTC m=+0.189020226 container init 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.699740049 +0000 UTC m=+0.202736919 container start 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.704744925 +0000 UTC m=+0.207741825 container attach 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:32:15 np0005535469 cranky_bohr[434900]: 167 167
Nov 25 12:32:15 np0005535469 systemd[1]: libpod-5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b.scope: Deactivated successfully.
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.711531021 +0000 UTC m=+0.214527921 container died 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:32:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ce18828d000e6817007b550ed20b0c479c1145c00ec0999c7b4e5148f1ae3145-merged.mount: Deactivated successfully.
Nov 25 12:32:15 np0005535469 podman[434884]: 2025-11-25 17:32:15.773315412 +0000 UTC m=+0.276312312 container remove 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:32:15 np0005535469 systemd[1]: libpod-conmon-5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b.scope: Deactivated successfully.
Nov 25 12:32:16 np0005535469 podman[434924]: 2025-11-25 17:32:16.014261811 +0000 UTC m=+0.054031683 container create 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:32:16 np0005535469 systemd[1]: Started libpod-conmon-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope.
Nov 25 12:32:16 np0005535469 podman[434924]: 2025-11-25 17:32:15.988003115 +0000 UTC m=+0.027772957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:32:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:32:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:16 np0005535469 podman[434924]: 2025-11-25 17:32:16.143087277 +0000 UTC m=+0.182857119 container init 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:32:16 np0005535469 podman[434924]: 2025-11-25 17:32:16.155903295 +0000 UTC m=+0.195673137 container start 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:32:16 np0005535469 podman[434924]: 2025-11-25 17:32:16.160183752 +0000 UTC m=+0.199953594 container attach 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 12:32:17 np0005535469 youthful_jennings[434940]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:32:17 np0005535469 youthful_jennings[434940]: --> relative data size: 1.0
Nov 25 12:32:17 np0005535469 youthful_jennings[434940]: --> All data devices are unavailable
Nov 25 12:32:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Nov 25 12:32:17 np0005535469 systemd[1]: libpod-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope: Deactivated successfully.
Nov 25 12:32:17 np0005535469 systemd[1]: libpod-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope: Consumed 1.257s CPU time.
Nov 25 12:32:17 np0005535469 podman[434924]: 2025-11-25 17:32:17.453088634 +0000 UTC m=+1.492858486 container died 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:32:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824-merged.mount: Deactivated successfully.
Nov 25 12:32:17 np0005535469 podman[434924]: 2025-11-25 17:32:17.542850047 +0000 UTC m=+1.582619899 container remove 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:32:17 np0005535469 systemd[1]: libpod-conmon-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope: Deactivated successfully.
Nov 25 12:32:17 np0005535469 nova_compute[254092]: 2025-11-25 17:32:17.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.352979139 +0000 UTC m=+0.052567912 container create 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 12:32:18 np0005535469 systemd[1]: Started libpod-conmon-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope.
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.327866555 +0000 UTC m=+0.027455328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:32:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.464508984 +0000 UTC m=+0.164097797 container init 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.476720317 +0000 UTC m=+0.176309070 container start 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.481776895 +0000 UTC m=+0.181365668 container attach 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:32:18 np0005535469 infallible_blackwell[435138]: 167 167
Nov 25 12:32:18 np0005535469 systemd[1]: libpod-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope: Deactivated successfully.
Nov 25 12:32:18 np0005535469 conmon[435138]: conmon 5f8f1f80ba8caec70346 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope/container/memory.events
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.488258291 +0000 UTC m=+0.187847074 container died 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:32:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8796a0a6bffe97d77cb1f211750de705d7d8ef058b3c2cb9fb1482985ae7d5c0-merged.mount: Deactivated successfully.
Nov 25 12:32:18 np0005535469 podman[435122]: 2025-11-25 17:32:18.541306325 +0000 UTC m=+0.240895068 container remove 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:32:18 np0005535469 systemd[1]: libpod-conmon-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope: Deactivated successfully.
Nov 25 12:32:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 25 12:32:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 25 12:32:18 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 25 12:32:18 np0005535469 podman[435162]: 2025-11-25 17:32:18.754532708 +0000 UTC m=+0.075635859 container create 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:32:18 np0005535469 podman[435162]: 2025-11-25 17:32:18.725055266 +0000 UTC m=+0.046158427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:32:18 np0005535469 systemd[1]: Started libpod-conmon-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope.
Nov 25 12:32:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:18 np0005535469 podman[435162]: 2025-11-25 17:32:18.897958892 +0000 UTC m=+0.219062033 container init 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:32:18 np0005535469 podman[435162]: 2025-11-25 17:32:18.911299876 +0000 UTC m=+0.232402977 container start 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:32:18 np0005535469 podman[435162]: 2025-11-25 17:32:18.915283514 +0000 UTC m=+0.236386645 container attach 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:32:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:19 np0005535469 practical_yalow[435178]: {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:    "0": [
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:        {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "devices": [
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "/dev/loop3"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            ],
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_name": "ceph_lv0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_size": "21470642176",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "name": "ceph_lv0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "tags": {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cluster_name": "ceph",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.crush_device_class": "",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.encrypted": "0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osd_id": "0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.type": "block",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.vdo": "0"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            },
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "type": "block",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "vg_name": "ceph_vg0"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:        }
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:    ],
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:    "1": [
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:        {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "devices": [
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "/dev/loop4"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            ],
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_name": "ceph_lv1",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_size": "21470642176",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "name": "ceph_lv1",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "tags": {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cluster_name": "ceph",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.crush_device_class": "",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.encrypted": "0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osd_id": "1",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.type": "block",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.vdo": "0"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            },
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "type": "block",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "vg_name": "ceph_vg1"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:        }
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:    ],
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:    "2": [
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:        {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "devices": [
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "/dev/loop5"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            ],
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_name": "ceph_lv2",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_size": "21470642176",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "name": "ceph_lv2",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "tags": {
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.cluster_name": "ceph",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.crush_device_class": "",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.encrypted": "0",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osd_id": "2",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.type": "block",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:                "ceph.vdo": "0"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            },
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "type": "block",
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:            "vg_name": "ceph_vg2"
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:        }
Nov 25 12:32:19 np0005535469 practical_yalow[435178]:    ]
Nov 25 12:32:19 np0005535469 practical_yalow[435178]: }
Nov 25 12:32:19 np0005535469 systemd[1]: libpod-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope: Deactivated successfully.
Nov 25 12:32:19 np0005535469 conmon[435178]: conmon 56b1094af973e28e3332 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope/container/memory.events
Nov 25 12:32:19 np0005535469 podman[435162]: 2025-11-25 17:32:19.707417065 +0000 UTC m=+1.028520216 container died 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:32:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667-merged.mount: Deactivated successfully.
Nov 25 12:32:19 np0005535469 podman[435162]: 2025-11-25 17:32:19.784769601 +0000 UTC m=+1.105872722 container remove 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:32:19 np0005535469 systemd[1]: libpod-conmon-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope: Deactivated successfully.
Nov 25 12:32:20 np0005535469 nova_compute[254092]: 2025-11-25 17:32:20.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.617522688 +0000 UTC m=+0.053830186 container create 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:32:20 np0005535469 systemd[1]: Started libpod-conmon-8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35.scope.
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.592085855 +0000 UTC m=+0.028393193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:32:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.724503659 +0000 UTC m=+0.160811017 container init 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.738906741 +0000 UTC m=+0.175213989 container start 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.742235912 +0000 UTC m=+0.178543270 container attach 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:32:20 np0005535469 practical_swirles[435356]: 167 167
Nov 25 12:32:20 np0005535469 systemd[1]: libpod-8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35.scope: Deactivated successfully.
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.74838958 +0000 UTC m=+0.184696898 container died 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:32:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dfd8fcec1ba7707b2e8543ca108cc973efb64ca4d82f730544d60276199485de-merged.mount: Deactivated successfully.
Nov 25 12:32:20 np0005535469 podman[435340]: 2025-11-25 17:32:20.795851652 +0000 UTC m=+0.232158920 container remove 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:32:20 np0005535469 systemd[1]: libpod-conmon-8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35.scope: Deactivated successfully.
Nov 25 12:32:21 np0005535469 podman[435381]: 2025-11-25 17:32:21.054713138 +0000 UTC m=+0.066389949 container create 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:32:21 np0005535469 systemd[1]: Started libpod-conmon-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope.
Nov 25 12:32:21 np0005535469 podman[435381]: 2025-11-25 17:32:21.035523705 +0000 UTC m=+0.047200486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:32:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:32:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:32:21 np0005535469 podman[435381]: 2025-11-25 17:32:21.160740404 +0000 UTC m=+0.172417245 container init 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:32:21 np0005535469 podman[435381]: 2025-11-25 17:32:21.17456859 +0000 UTC m=+0.186245381 container start 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:32:21 np0005535469 podman[435381]: 2025-11-25 17:32:21.178898437 +0000 UTC m=+0.190575288 container attach 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:32:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]: {
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "osd_id": 1,
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "type": "bluestore"
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:    },
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "osd_id": 2,
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "type": "bluestore"
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:    },
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "osd_id": 0,
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:        "type": "bluestore"
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]:    }
Nov 25 12:32:22 np0005535469 stupefied_perlman[435397]: }
Nov 25 12:32:22 np0005535469 systemd[1]: libpod-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope: Deactivated successfully.
Nov 25 12:32:22 np0005535469 systemd[1]: libpod-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope: Consumed 1.143s CPU time.
Nov 25 12:32:22 np0005535469 podman[435381]: 2025-11-25 17:32:22.308108334 +0000 UTC m=+1.319785165 container died 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:32:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15-merged.mount: Deactivated successfully.
Nov 25 12:32:22 np0005535469 podman[435381]: 2025-11-25 17:32:22.386995142 +0000 UTC m=+1.398671903 container remove 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:32:22 np0005535469 systemd[1]: libpod-conmon-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope: Deactivated successfully.
Nov 25 12:32:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:32:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:32:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:32:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:32:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e980db8a-84ff-4c8b-9233-729382f9adf0 does not exist
Nov 25 12:32:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c1d47fbf-40ed-4f99-8218-820b5e4ebf0c does not exist
Nov 25 12:32:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:32:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:32:22 np0005535469 nova_compute[254092]: 2025-11-25 17:32:22.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Nov 25 12:32:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:25 np0005535469 nova_compute[254092]: 2025-11-25 17:32:25.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 12:32:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 12:32:27 np0005535469 nova_compute[254092]: 2025-11-25 17:32:27.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:28 np0005535469 podman[435495]: 2025-11-25 17:32:28.70447481 +0000 UTC m=+0.104192327 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 12:32:28 np0005535469 podman[435494]: 2025-11-25 17:32:28.718968154 +0000 UTC m=+0.118169277 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:32:28 np0005535469 podman[435496]: 2025-11-25 17:32:28.7706302 +0000 UTC m=+0.164160759 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:32:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.750026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949750139, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1724, "num_deletes": 257, "total_data_size": 2715082, "memory_usage": 2760592, "flush_reason": "Manual Compaction"}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949771737, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 2654235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66536, "largest_seqno": 68259, "table_properties": {"data_size": 2646235, "index_size": 4877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16645, "raw_average_key_size": 20, "raw_value_size": 2630127, "raw_average_value_size": 3219, "num_data_blocks": 217, "num_entries": 817, "num_filter_entries": 817, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091780, "oldest_key_time": 1764091780, "file_creation_time": 1764091949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 21771 microseconds, and 12314 cpu microseconds.
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.771808) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 2654235 bytes OK
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.771840) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.774144) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.774172) EVENT_LOG_v1 {"time_micros": 1764091949774162, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.774206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2707641, prev total WAL file size 2707641, number of live WAL files 2.
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.776102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(2592KB)], [155(10211KB)]
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949776187, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 13111256, "oldest_snapshot_seqno": -1}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8690 keys, 11399065 bytes, temperature: kUnknown
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949853571, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 11399065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11342108, "index_size": 34124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 228123, "raw_average_key_size": 26, "raw_value_size": 11188223, "raw_average_value_size": 1287, "num_data_blocks": 1326, "num_entries": 8690, "num_filter_entries": 8690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.854019) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 11399065 bytes
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.855718) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.0 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.2) write-amplify(4.3) OK, records in: 9217, records dropped: 527 output_compression: NoCompression
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.855749) EVENT_LOG_v1 {"time_micros": 1764091949855734, "job": 96, "event": "compaction_finished", "compaction_time_micros": 77560, "compaction_time_cpu_micros": 53034, "output_level": 6, "num_output_files": 1, "total_output_size": 11399065, "num_input_records": 9217, "num_output_records": 8690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949856931, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949860935, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.775891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:32:29 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:32:30 np0005535469 nova_compute[254092]: 2025-11-25 17:32:30.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Nov 25 12:32:32 np0005535469 nova_compute[254092]: 2025-11-25 17:32:32.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:35 np0005535469 nova_compute[254092]: 2025-11-25 17:32:35.107 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:37 np0005535469 nova_compute[254092]: 2025-11-25 17:32:37.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:40 np0005535469 nova_compute[254092]: 2025-11-25 17:32:40.110 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:32:40
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:32:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:32:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:42 np0005535469 nova_compute[254092]: 2025-11-25 17:32:42.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:45 np0005535469 nova_compute[254092]: 2025-11-25 17:32:45.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:47 np0005535469 nova_compute[254092]: 2025-11-25 17:32:47.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:50 np0005535469 nova_compute[254092]: 2025-11-25 17:32:50.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:32:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:32:52 np0005535469 nova_compute[254092]: 2025-11-25 17:32:52.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:32:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2829513400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:32:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:32:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2829513400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:32:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:32:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:32:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892037524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:32:55 np0005535469 nova_compute[254092]: 2025-11-25 17:32:55.977 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.141 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.142 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3638MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.143 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.143 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.199 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.199 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.213 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:32:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:32:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597018792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.660 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.665 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.682 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.683 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:32:56 np0005535469 nova_compute[254092]: 2025-11-25 17:32:56.684 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:32:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:57 np0005535469 nova_compute[254092]: 2025-11-25 17:32:57.683 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:57 np0005535469 nova_compute[254092]: 2025-11-25 17:32:57.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:32:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:32:59 np0005535469 nova_compute[254092]: 2025-11-25 17:32:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:32:59 np0005535469 nova_compute[254092]: 2025-11-25 17:32:59.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:32:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:32:59 np0005535469 podman[435603]: 2025-11-25 17:32:59.66134254 +0000 UTC m=+0.062895883 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:32:59 np0005535469 podman[435602]: 2025-11-25 17:32:59.681887699 +0000 UTC m=+0.084899352 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 25 12:32:59 np0005535469 podman[435604]: 2025-11-25 17:32:59.723823161 +0000 UTC m=+0.108223867 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:33:00 np0005535469 nova_compute[254092]: 2025-11-25 17:33:00.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:02 np0005535469 nova_compute[254092]: 2025-11-25 17:33:02.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:02 np0005535469 nova_compute[254092]: 2025-11-25 17:33:02.765 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:03 np0005535469 nova_compute[254092]: 2025-11-25 17:33:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:03 np0005535469 nova_compute[254092]: 2025-11-25 17:33:03.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:33:03 np0005535469 nova_compute[254092]: 2025-11-25 17:33:03.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:33:03 np0005535469 nova_compute[254092]: 2025-11-25 17:33:03.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:33:04 np0005535469 nova_compute[254092]: 2025-11-25 17:33:04.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:05 np0005535469 nova_compute[254092]: 2025-11-25 17:33:05.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:07 np0005535469 nova_compute[254092]: 2025-11-25 17:33:07.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:33:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:33:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:33:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:33:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:33:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:33:10 np0005535469 nova_compute[254092]: 2025-11-25 17:33:10.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:10 np0005535469 nova_compute[254092]: 2025-11-25 17:33:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:12 np0005535469 nova_compute[254092]: 2025-11-25 17:33:12.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:33:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:33:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:33:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:33:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:33:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:33:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:15 np0005535469 nova_compute[254092]: 2025-11-25 17:33:15.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:17 np0005535469 nova_compute[254092]: 2025-11-25 17:33:17.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:20 np0005535469 nova_compute[254092]: 2025-11-25 17:33:20.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:22 np0005535469 nova_compute[254092]: 2025-11-25 17:33:22.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:33:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 68223c93-765c-4941-bfa8-6a2813cd0b73 does not exist
Nov 25 12:33:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 75113497-9b25-4e3f-922e-b65fa090f56d does not exist
Nov 25 12:33:23 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 10cf8c44-9c41-404d-be3c-dfd8e15db1a5 does not exist
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:33:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.340915239 +0000 UTC m=+0.041381598 container create 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:33:24 np0005535469 systemd[1]: Started libpod-conmon-325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3.scope.
Nov 25 12:33:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.322789716 +0000 UTC m=+0.023256105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.435961246 +0000 UTC m=+0.136427665 container init 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.447584112 +0000 UTC m=+0.148050471 container start 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:33:24 np0005535469 charming_sutherland[435955]: 167 167
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.453524334 +0000 UTC m=+0.153990723 container attach 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:33:24 np0005535469 systemd[1]: libpod-325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3.scope: Deactivated successfully.
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.454415259 +0000 UTC m=+0.154881658 container died 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:33:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7eb6dc0c778a1b9fee46e781b3bbd8b29fde3a60da06a2d7cfac8fd9f3e2d3cb-merged.mount: Deactivated successfully.
Nov 25 12:33:24 np0005535469 podman[435939]: 2025-11-25 17:33:24.500084242 +0000 UTC m=+0.200550601 container remove 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:33:24 np0005535469 systemd[1]: libpod-conmon-325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3.scope: Deactivated successfully.
Nov 25 12:33:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:24 np0005535469 podman[435980]: 2025-11-25 17:33:24.702260125 +0000 UTC m=+0.067484738 container create 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:33:24 np0005535469 systemd[1]: Started libpod-conmon-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope.
Nov 25 12:33:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:33:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:24 np0005535469 podman[435980]: 2025-11-25 17:33:24.684610694 +0000 UTC m=+0.049835377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:33:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:24 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:24 np0005535469 podman[435980]: 2025-11-25 17:33:24.798983768 +0000 UTC m=+0.164208481 container init 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:33:24 np0005535469 podman[435980]: 2025-11-25 17:33:24.806136112 +0000 UTC m=+0.171360765 container start 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:33:24 np0005535469 podman[435980]: 2025-11-25 17:33:24.809689048 +0000 UTC m=+0.174913711 container attach 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:33:25 np0005535469 nova_compute[254092]: 2025-11-25 17:33:25.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:25 np0005535469 xenodochial_kapitsa[435996]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:33:25 np0005535469 xenodochial_kapitsa[435996]: --> relative data size: 1.0
Nov 25 12:33:25 np0005535469 xenodochial_kapitsa[435996]: --> All data devices are unavailable
Nov 25 12:33:25 np0005535469 systemd[1]: libpod-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope: Deactivated successfully.
Nov 25 12:33:25 np0005535469 podman[435980]: 2025-11-25 17:33:25.898200957 +0000 UTC m=+1.263425580 container died 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:33:25 np0005535469 systemd[1]: libpod-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope: Consumed 1.043s CPU time.
Nov 25 12:33:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6-merged.mount: Deactivated successfully.
Nov 25 12:33:25 np0005535469 podman[435980]: 2025-11-25 17:33:25.954681474 +0000 UTC m=+1.319906137 container remove 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:33:25 np0005535469 systemd[1]: libpod-conmon-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope: Deactivated successfully.
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.747080003 +0000 UTC m=+0.055274796 container create 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 12:33:26 np0005535469 systemd[1]: Started libpod-conmon-30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1.scope.
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.719526213 +0000 UTC m=+0.027721066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:33:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.843796425 +0000 UTC m=+0.151991218 container init 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.851296349 +0000 UTC m=+0.159491122 container start 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.854837976 +0000 UTC m=+0.163032769 container attach 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:33:26 np0005535469 objective_wilbur[436196]: 167 167
Nov 25 12:33:26 np0005535469 systemd[1]: libpod-30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1.scope: Deactivated successfully.
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.860039017 +0000 UTC m=+0.168233810 container died 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:33:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4b818e21b04b0f46d924697cb0fe9a6d891cfbab4876bd5a289cb2a2ec6f03b0-merged.mount: Deactivated successfully.
Nov 25 12:33:26 np0005535469 podman[436180]: 2025-11-25 17:33:26.899745708 +0000 UTC m=+0.207940471 container remove 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:33:26 np0005535469 systemd[1]: libpod-conmon-30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1.scope: Deactivated successfully.
Nov 25 12:33:27 np0005535469 podman[436220]: 2025-11-25 17:33:27.109162898 +0000 UTC m=+0.047357610 container create 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:33:27 np0005535469 systemd[1]: Started libpod-conmon-893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac.scope.
Nov 25 12:33:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:33:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:27 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:27 np0005535469 podman[436220]: 2025-11-25 17:33:27.088733403 +0000 UTC m=+0.026928145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:33:27 np0005535469 podman[436220]: 2025-11-25 17:33:27.194590344 +0000 UTC m=+0.132785086 container init 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:33:27 np0005535469 podman[436220]: 2025-11-25 17:33:27.211453673 +0000 UTC m=+0.149648395 container start 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:33:27 np0005535469 podman[436220]: 2025-11-25 17:33:27.215093192 +0000 UTC m=+0.153287914 container attach 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:33:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:27 np0005535469 nova_compute[254092]: 2025-11-25 17:33:27.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:27 np0005535469 nervous_bose[436236]: {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:    "0": [
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:        {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "devices": [
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "/dev/loop3"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            ],
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_name": "ceph_lv0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_size": "21470642176",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "name": "ceph_lv0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "tags": {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cluster_name": "ceph",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.crush_device_class": "",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.encrypted": "0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osd_id": "0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.type": "block",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.vdo": "0"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            },
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "type": "block",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "vg_name": "ceph_vg0"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:        }
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:    ],
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:    "1": [
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:        {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "devices": [
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "/dev/loop4"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            ],
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_name": "ceph_lv1",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_size": "21470642176",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "name": "ceph_lv1",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "tags": {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cluster_name": "ceph",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.crush_device_class": "",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.encrypted": "0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osd_id": "1",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.type": "block",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.vdo": "0"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            },
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "type": "block",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "vg_name": "ceph_vg1"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:        }
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:    ],
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:    "2": [
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:        {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "devices": [
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "/dev/loop5"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            ],
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_name": "ceph_lv2",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_size": "21470642176",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "name": "ceph_lv2",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "tags": {
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.cluster_name": "ceph",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.crush_device_class": "",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.encrypted": "0",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osd_id": "2",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.type": "block",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:                "ceph.vdo": "0"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            },
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "type": "block",
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:            "vg_name": "ceph_vg2"
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:        }
Nov 25 12:33:27 np0005535469 nervous_bose[436236]:    ]
Nov 25 12:33:27 np0005535469 nervous_bose[436236]: }
Nov 25 12:33:28 np0005535469 systemd[1]: libpod-893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac.scope: Deactivated successfully.
Nov 25 12:33:28 np0005535469 podman[436220]: 2025-11-25 17:33:28.027785213 +0000 UTC m=+0.965979935 container died 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:33:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798-merged.mount: Deactivated successfully.
Nov 25 12:33:28 np0005535469 podman[436220]: 2025-11-25 17:33:28.086077569 +0000 UTC m=+1.024272281 container remove 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:33:28 np0005535469 systemd[1]: libpod-conmon-893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac.scope: Deactivated successfully.
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.79956394 +0000 UTC m=+0.042145868 container create b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:33:28 np0005535469 systemd[1]: Started libpod-conmon-b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b.scope.
Nov 25 12:33:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.784086489 +0000 UTC m=+0.026668397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.888905832 +0000 UTC m=+0.131487820 container init b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.898740269 +0000 UTC m=+0.141322217 container start b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.902834481 +0000 UTC m=+0.145416389 container attach b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:33:28 np0005535469 unruffled_benz[436418]: 167 167
Nov 25 12:33:28 np0005535469 systemd[1]: libpod-b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b.scope: Deactivated successfully.
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.906134461 +0000 UTC m=+0.148716379 container died b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:33:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9a709fbf8284229747b81f75a75b01fd79cb0d91f6afc66564323c7f5ece038f-merged.mount: Deactivated successfully.
Nov 25 12:33:28 np0005535469 podman[436402]: 2025-11-25 17:33:28.945463701 +0000 UTC m=+0.188045619 container remove b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:33:28 np0005535469 systemd[1]: libpod-conmon-b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b.scope: Deactivated successfully.
Nov 25 12:33:29 np0005535469 podman[436442]: 2025-11-25 17:33:29.185593907 +0000 UTC m=+0.070724076 container create 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:33:29 np0005535469 systemd[1]: Started libpod-conmon-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope.
Nov 25 12:33:29 np0005535469 podman[436442]: 2025-11-25 17:33:29.16217554 +0000 UTC m=+0.047305679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:33:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:33:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:33:29 np0005535469 podman[436442]: 2025-11-25 17:33:29.300620449 +0000 UTC m=+0.185750608 container init 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:33:29 np0005535469 podman[436442]: 2025-11-25 17:33:29.316932032 +0000 UTC m=+0.202062201 container start 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:33:29 np0005535469 podman[436442]: 2025-11-25 17:33:29.321533778 +0000 UTC m=+0.206663937 container attach 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:33:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:30 np0005535469 nova_compute[254092]: 2025-11-25 17:33:30.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]: {
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "osd_id": 1,
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "type": "bluestore"
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:    },
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "osd_id": 2,
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "type": "bluestore"
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:    },
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "osd_id": 0,
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:        "type": "bluestore"
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]:    }
Nov 25 12:33:30 np0005535469 vibrant_mclaren[436458]: }
Nov 25 12:33:30 np0005535469 systemd[1]: libpod-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope: Deactivated successfully.
Nov 25 12:33:30 np0005535469 systemd[1]: libpod-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope: Consumed 1.034s CPU time.
Nov 25 12:33:30 np0005535469 podman[436442]: 2025-11-25 17:33:30.335013874 +0000 UTC m=+1.220144003 container died 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:33:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548-merged.mount: Deactivated successfully.
Nov 25 12:33:30 np0005535469 podman[436442]: 2025-11-25 17:33:30.42783001 +0000 UTC m=+1.312960149 container remove 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 12:33:30 np0005535469 systemd[1]: libpod-conmon-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope: Deactivated successfully.
Nov 25 12:33:30 np0005535469 podman[436500]: 2025-11-25 17:33:30.48771896 +0000 UTC m=+0.103498048 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 12:33:30 np0005535469 podman[436492]: 2025-11-25 17:33:30.489460088 +0000 UTC m=+0.103726805 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:33:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:33:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:33:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:33:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:33:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 65176985-d690-4942-8598-3cc82828e14d does not exist
Nov 25 12:33:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 34efc1e6-d172-40f0-873d-38896587622a does not exist
Nov 25 12:33:30 np0005535469 podman[436501]: 2025-11-25 17:33:30.533348373 +0000 UTC m=+0.138203744 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:33:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:33:30 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:33:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:32 np0005535469 nova_compute[254092]: 2025-11-25 17:33:32.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:35 np0005535469 nova_compute[254092]: 2025-11-25 17:33:35.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:37 np0005535469 nova_compute[254092]: 2025-11-25 17:33:37.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:33:40 np0005535469 nova_compute[254092]: 2025-11-25 17:33:40.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:33:40
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:33:40 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:33:40 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:33:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:33:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 44K writes, 181K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 44K writes, 16K syncs, 2.80 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2211 writes, 8310 keys, 2211 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s#012Interval WAL: 2211 writes, 889 syncs, 2.49 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:33:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:33:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.496 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.497 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.510 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.516 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.516 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.518 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 12:33:41 np0005535469 nova_compute[254092]: 2025-11-25 17:33:41.518 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 12:33:42 np0005535469 nova_compute[254092]: 2025-11-25 17:33:42.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:33:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:45 np0005535469 nova_compute[254092]: 2025-11-25 17:33:45.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:33:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:47 np0005535469 nova_compute[254092]: 2025-11-25 17:33:47.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:33:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.658005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029658045, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 869, "num_deletes": 251, "total_data_size": 1185047, "memory_usage": 1205936, "flush_reason": "Manual Compaction"}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029664082, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 727050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68260, "largest_seqno": 69128, "table_properties": {"data_size": 723532, "index_size": 1297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9378, "raw_average_key_size": 20, "raw_value_size": 715999, "raw_average_value_size": 1573, "num_data_blocks": 59, "num_entries": 455, "num_filter_entries": 455, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091950, "oldest_key_time": 1764091950, "file_creation_time": 1764092029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 6109 microseconds, and 2682 cpu microseconds.
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.664117) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 727050 bytes OK
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.664132) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.665963) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.665977) EVENT_LOG_v1 {"time_micros": 1764092029665972, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.665993) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1180801, prev total WAL file size 1180801, number of live WAL files 2.
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.666692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373630' seq:72057594037927935, type:22 .. '6D6772737461740033303132' seq:0, type:0; will stop at (end)
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(710KB)], [158(10MB)]
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029666775, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12126115, "oldest_snapshot_seqno": -1}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8665 keys, 9259099 bytes, temperature: kUnknown
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029728250, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 9259099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9206149, "index_size": 30181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 227748, "raw_average_key_size": 26, "raw_value_size": 9056441, "raw_average_value_size": 1045, "num_data_blocks": 1165, "num_entries": 8665, "num_filter_entries": 8665, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.728482) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 9259099 bytes
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.732975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.1 rd, 150.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(29.4) write-amplify(12.7) OK, records in: 9145, records dropped: 480 output_compression: NoCompression
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.732991) EVENT_LOG_v1 {"time_micros": 1764092029732983, "job": 98, "event": "compaction_finished", "compaction_time_micros": 61536, "compaction_time_cpu_micros": 40591, "output_level": 6, "num_output_files": 1, "total_output_size": 9259099, "num_input_records": 9145, "num_output_records": 8665, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029733937, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029735860, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.666498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:33:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:33:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:33:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6001.2 total, 600.0 interval
Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 16K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2173 writes, 8480 keys, 2173 commit groups, 1.0 writes per commit group, ingest: 7.93 MB, 0.01 MB/s
Interval WAL: 2173 writes, 862 syncs, 2.52 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.2 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.2 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.2 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Nov 25 12:33:50 np0005535469 nova_compute[254092]: 2025-11-25 17:33:50.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:33:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:51 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:33:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:33:52 np0005535469 nova_compute[254092]: 2025-11-25 17:33:52.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:33:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:33:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1635799409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:33:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:33:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1635799409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:33:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.547 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:33:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:33:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1453461597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:33:55 np0005535469 nova_compute[254092]: 2025-11-25 17:33:55.968 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.123 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.125 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.125 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.126 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.219 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.220 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:33:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:33:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2179969112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.732 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.738 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.751 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.752 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:33:56 np0005535469 nova_compute[254092]: 2025-11-25 17:33:56.753 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:33:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:57 np0005535469 nova_compute[254092]: 2025-11-25 17:33:57.731 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:57 np0005535469 nova_compute[254092]: 2025-11-25 17:33:57.731 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:33:57 np0005535469 nova_compute[254092]: 2025-11-25 17:33:57.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:33:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:33:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:00 np0005535469 nova_compute[254092]: 2025-11-25 17:34:00.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:00 np0005535469 podman[436664]: 2025-11-25 17:34:00.671935139 +0000 UTC m=+0.072732821 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 12:34:00 np0005535469 podman[436663]: 2025-11-25 17:34:00.680514432 +0000 UTC m=+0.084496581 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:34:00 np0005535469 podman[436665]: 2025-11-25 17:34:00.747944338 +0000 UTC m=+0.140376952 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:34:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:01 np0005535469 nova_compute[254092]: 2025-11-25 17:34:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:01 np0005535469 nova_compute[254092]: 2025-11-25 17:34:01.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:34:02 np0005535469 nova_compute[254092]: 2025-11-25 17:34:02.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 25 12:34:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 25 12:34:03 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 25 12:34:04 np0005535469 nova_compute[254092]: 2025-11-25 17:34:04.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:04 np0005535469 nova_compute[254092]: 2025-11-25 17:34:04.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:04 np0005535469 nova_compute[254092]: 2025-11-25 17:34:04.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:34:04 np0005535469 nova_compute[254092]: 2025-11-25 17:34:04.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:34:04 np0005535469 nova_compute[254092]: 2025-11-25 17:34:04.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:34:04 np0005535469 nova_compute[254092]: 2025-11-25 17:34:04.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:05 np0005535469 nova_compute[254092]: 2025-11-25 17:34:05.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 29 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 2.8 MiB/s wr, 11 op/s
Nov 25 12:34:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 25 12:34:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 25 12:34:05 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 25 12:34:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 12:34:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:34:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6001.5 total, 600.0 interval#012Cumulative writes: 39K writes, 154K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.78 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1570 writes, 5388 keys, 1570 commit groups, 1.0 writes per commit group, ingest: 4.15 MB, 0.01 MB/s#012Interval WAL: 1570 writes, 670 syncs, 2.34 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Nov 25 12:34:07 np0005535469 nova_compute[254092]: 2025-11-25 17:34:07.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:08 np0005535469 nova_compute[254092]: 2025-11-25 17:34:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:34:08 np0005535469 nova_compute[254092]: 2025-11-25 17:34:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 12:34:08 np0005535469 nova_compute[254092]: 2025-11-25 17:34:08.525 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 12:34:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 12:34:09 np0005535469 nova_compute[254092]: 2025-11-25 17:34:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:34:09 np0005535469 nova_compute[254092]: 2025-11-25 17:34:09.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:34:09 np0005535469 nova_compute[254092]: 2025-11-25 17:34:09.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 12:34:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:34:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:34:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:34:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:34:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:34:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:34:10 np0005535469 nova_compute[254092]: 2025-11-25 17:34:10.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 12:34:11 np0005535469 nova_compute[254092]: 2025-11-25 17:34:11.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:34:12 np0005535469 nova_compute[254092]: 2025-11-25 17:34:12.800 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Nov 25 12:34:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:34:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:34:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:34:13.675 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:34:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:34:13.675 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:34:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:15 np0005535469 nova_compute[254092]: 2025-11-25 17:34:15.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 12:34:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Nov 25 12:34:17 np0005535469 nova_compute[254092]: 2025-11-25 17:34:17.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 25 12:34:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:20 np0005535469 nova_compute[254092]: 2025-11-25 17:34:20.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 25 12:34:22 np0005535469 nova_compute[254092]: 2025-11-25 17:34:22.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:25 np0005535469 nova_compute[254092]: 2025-11-25 17:34:25.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 12:34:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:27 np0005535469 nova_compute[254092]: 2025-11-25 17:34:27.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:34:27.872 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 12:34:27 np0005535469 nova_compute[254092]: 2025-11-25 17:34:27.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:27 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:34:27.874 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 12:34:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:30 np0005535469 nova_compute[254092]: 2025-11-25 17:34:30.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:30 np0005535469 nova_compute[254092]: 2025-11-25 17:34:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:34:30 np0005535469 podman[436751]: 2025-11-25 17:34:30.891887384 +0000 UTC m=+0.081486040 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 12:34:30 np0005535469 podman[436750]: 2025-11-25 17:34:30.900885759 +0000 UTC m=+0.097569677 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:34:30 np0005535469 podman[436752]: 2025-11-25 17:34:30.963927495 +0000 UTC m=+0.142344096 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:34:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:34:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1e95d0de-81f3-42b0-8efc-c863d48bc6fd does not exist
Nov 25 12:34:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c2fe32bc-ce43-4859-850d-7325e2008fac does not exist
Nov 25 12:34:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 831910be-5c10-4f6d-af71-df1191a23dde does not exist
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:34:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.4623181 +0000 UTC m=+0.051063201 container create b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:34:32 np0005535469 systemd[1]: Started libpod-conmon-b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7.scope.
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.44212602 +0000 UTC m=+0.030871121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:34:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.572588701 +0000 UTC m=+0.161333872 container init b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.586482829 +0000 UTC m=+0.175227940 container start b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.590670473 +0000 UTC m=+0.179415664 container attach b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:34:32 np0005535469 admiring_davinci[437075]: 167 167
Nov 25 12:34:32 np0005535469 systemd[1]: libpod-b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7.scope: Deactivated successfully.
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.594300742 +0000 UTC m=+0.183045863 container died b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:34:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:34:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:34:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:34:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d89cb80d1df7ff6bae566a4dc8fd3a0a2de032d54125b544289356d657e6c484-merged.mount: Deactivated successfully.
Nov 25 12:34:32 np0005535469 podman[437059]: 2025-11-25 17:34:32.64749376 +0000 UTC m=+0.236238841 container remove b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 12:34:32 np0005535469 systemd[1]: libpod-conmon-b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7.scope: Deactivated successfully.
Nov 25 12:34:32 np0005535469 podman[437099]: 2025-11-25 17:34:32.856845269 +0000 UTC m=+0.059811389 container create f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:34:32 np0005535469 nova_compute[254092]: 2025-11-25 17:34:32.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:34:32 np0005535469 podman[437099]: 2025-11-25 17:34:32.82603264 +0000 UTC m=+0.028998770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:34:32 np0005535469 systemd[1]: Started libpod-conmon-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope.
Nov 25 12:34:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:34:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:32 np0005535469 podman[437099]: 2025-11-25 17:34:32.985544842 +0000 UTC m=+0.188511042 container init f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:34:33 np0005535469 podman[437099]: 2025-11-25 17:34:33.001498946 +0000 UTC m=+0.204465096 container start f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:34:33 np0005535469 podman[437099]: 2025-11-25 17:34:33.005865665 +0000 UTC m=+0.208831785 container attach f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:34:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:34 np0005535469 boring_almeida[437116]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:34:34 np0005535469 boring_almeida[437116]: --> relative data size: 1.0
Nov 25 12:34:34 np0005535469 boring_almeida[437116]: --> All data devices are unavailable
Nov 25 12:34:34 np0005535469 systemd[1]: libpod-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope: Deactivated successfully.
Nov 25 12:34:34 np0005535469 systemd[1]: libpod-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope: Consumed 1.156s CPU time.
Nov 25 12:34:34 np0005535469 podman[437099]: 2025-11-25 17:34:34.214590846 +0000 UTC m=+1.417556946 container died f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:34:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6-merged.mount: Deactivated successfully.
Nov 25 12:34:34 np0005535469 podman[437099]: 2025-11-25 17:34:34.310751163 +0000 UTC m=+1.513717263 container remove f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:34:34 np0005535469 systemd[1]: libpod-conmon-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope: Deactivated successfully.
Nov 25 12:34:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.159246628 +0000 UTC m=+0.061882415 container create 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:34:35 np0005535469 systemd[1]: Started libpod-conmon-7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50.scope.
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.139169902 +0000 UTC m=+0.041805699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:34:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.276286144 +0000 UTC m=+0.178921951 container init 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.290112421 +0000 UTC m=+0.192748188 container start 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.3000437 +0000 UTC m=+0.202679587 container attach 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:34:35 np0005535469 flamboyant_thompson[437317]: 167 167
Nov 25 12:34:35 np0005535469 systemd[1]: libpod-7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50.scope: Deactivated successfully.
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.302542819 +0000 UTC m=+0.205178586 container died 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:34:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c62b5edf2f0834719a21b5da5b15356328648d8faea16d3efac7946075499ae7-merged.mount: Deactivated successfully.
Nov 25 12:34:35 np0005535469 podman[437301]: 2025-11-25 17:34:35.351157611 +0000 UTC m=+0.253793408 container remove 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:34:35 np0005535469 systemd[1]: libpod-conmon-7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50.scope: Deactivated successfully.
Nov 25 12:34:35 np0005535469 nova_compute[254092]: 2025-11-25 17:34:35.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 511 B/s wr, 7 op/s
Nov 25 12:34:35 np0005535469 podman[437341]: 2025-11-25 17:34:35.604131438 +0000 UTC m=+0.061561357 container create 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:34:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 25 12:34:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 25 12:34:35 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 25 12:34:35 np0005535469 systemd[1]: Started libpod-conmon-5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e.scope.
Nov 25 12:34:35 np0005535469 podman[437341]: 2025-11-25 17:34:35.58255682 +0000 UTC m=+0.039986689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:34:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:34:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:35 np0005535469 podman[437341]: 2025-11-25 17:34:35.722671244 +0000 UTC m=+0.180101143 container init 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:34:35 np0005535469 podman[437341]: 2025-11-25 17:34:35.731884235 +0000 UTC m=+0.189314144 container start 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 12:34:35 np0005535469 podman[437341]: 2025-11-25 17:34:35.735673228 +0000 UTC m=+0.193103117 container attach 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]: {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:    "0": [
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:        {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "devices": [
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "/dev/loop3"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            ],
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_name": "ceph_lv0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_size": "21470642176",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "name": "ceph_lv0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "tags": {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cluster_name": "ceph",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.crush_device_class": "",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.encrypted": "0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osd_id": "0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.type": "block",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.vdo": "0"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            },
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "type": "block",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "vg_name": "ceph_vg0"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:        }
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:    ],
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:    "1": [
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:        {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "devices": [
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "/dev/loop4"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            ],
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_name": "ceph_lv1",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_size": "21470642176",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "name": "ceph_lv1",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "tags": {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cluster_name": "ceph",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.crush_device_class": "",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.encrypted": "0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osd_id": "1",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.type": "block",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.vdo": "0"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            },
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "type": "block",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "vg_name": "ceph_vg1"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:        }
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:    ],
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:    "2": [
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:        {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "devices": [
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "/dev/loop5"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            ],
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_name": "ceph_lv2",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_size": "21470642176",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "name": "ceph_lv2",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "tags": {
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.cluster_name": "ceph",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.crush_device_class": "",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.encrypted": "0",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osd_id": "2",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.type": "block",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:                "ceph.vdo": "0"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            },
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "type": "block",
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:            "vg_name": "ceph_vg2"
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:        }
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]:    ]
Nov 25 12:34:36 np0005535469 naughty_sutherland[437357]: }
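The JSON block above (apparently the output of `ceph-volume lvm list --format json`, run by cephadm inside the short-lived container) maps each OSD id to its logical volumes. A minimal sketch of parsing it, assuming a trimmed-down copy of that structure with only the fields used here; `summarize` and `lvm_list` are illustrative names, not part of any Ceph API:

```python
import json

# Trimmed sample of the per-OSD structure logged above: osd id -> list of LVs.
# Sizes are byte strings, as in the real `ceph-volume lvm list --format json` output.
lvm_list = json.loads("""
{
  "0": [{"devices": ["/dev/loop3"],
         "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "lv_size": "21470642176",
         "tags": {"ceph.osd_id": "0",
                  "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15"}}],
  "1": [{"devices": ["/dev/loop4"],
         "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "lv_size": "21470642176",
         "tags": {"ceph.osd_id": "1",
                  "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc"}}]
}
""")

def summarize(listing):
    """Flatten the per-OSD listing into (osd_id, backing device, size in GiB)."""
    rows = []
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30  # 21470642176 B is just under 20 GiB
            rows.append((osd_id, lv["devices"][0], round(size_gib, 1)))
    return rows

for osd_id, dev, gib in summarize(lvm_list):
    print(f"osd.{osd_id}: {dev} ({gib} GiB)")
```

This matches the cluster state the monitor reports a few lines earlier (3 OSDs up/in, ~60 GiB total): three ~20 GiB LVs on loop devices, one per OSD.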
Nov 25 12:34:36 np0005535469 systemd[1]: libpod-5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e.scope: Deactivated successfully.
Nov 25 12:34:36 np0005535469 podman[437341]: 2025-11-25 17:34:36.540471494 +0000 UTC m=+0.997901383 container died 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:34:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368-merged.mount: Deactivated successfully.
Nov 25 12:34:36 np0005535469 podman[437341]: 2025-11-25 17:34:36.610919521 +0000 UTC m=+1.068349420 container remove 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:34:36 np0005535469 systemd[1]: libpod-conmon-5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e.scope: Deactivated successfully.
Nov 25 12:34:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.5370441 +0000 UTC m=+0.070207412 container create ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:34:37 np0005535469 systemd[1]: Started libpod-conmon-ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b.scope.
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.509745587 +0000 UTC m=+0.042908949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:34:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.660816369 +0000 UTC m=+0.193979741 container init ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.676520486 +0000 UTC m=+0.209683808 container start ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.681885602 +0000 UTC m=+0.215048974 container attach ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:34:37 np0005535469 crazy_northcutt[437536]: 167 167
Nov 25 12:34:37 np0005535469 systemd[1]: libpod-ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b.scope: Deactivated successfully.
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.687879135 +0000 UTC m=+0.221042467 container died ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:34:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-905d3921c6acd4ecb271d3bdc8a824213cbc25a9f50473b3f3c9d98582920c95-merged.mount: Deactivated successfully.
Nov 25 12:34:37 np0005535469 podman[437520]: 2025-11-25 17:34:37.749500752 +0000 UTC m=+0.282664074 container remove ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 12:34:37 np0005535469 systemd[1]: libpod-conmon-ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b.scope: Deactivated successfully.
Nov 25 12:34:37 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:34:37.876 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:34:37 np0005535469 nova_compute[254092]: 2025-11-25 17:34:37.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:38 np0005535469 podman[437561]: 2025-11-25 17:34:38.01134016 +0000 UTC m=+0.076351049 container create eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:34:38 np0005535469 podman[437561]: 2025-11-25 17:34:37.981262031 +0000 UTC m=+0.046272990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:34:38 np0005535469 systemd[1]: Started libpod-conmon-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope.
Nov 25 12:34:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:34:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:34:38 np0005535469 podman[437561]: 2025-11-25 17:34:38.140413443 +0000 UTC m=+0.205424312 container init eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:34:38 np0005535469 podman[437561]: 2025-11-25 17:34:38.16196666 +0000 UTC m=+0.226977529 container start eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:34:38 np0005535469 podman[437561]: 2025-11-25 17:34:38.166104942 +0000 UTC m=+0.231115811 container attach eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]: {
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "osd_id": 1,
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "type": "bluestore"
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:    },
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "osd_id": 2,
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "type": "bluestore"
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:    },
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "osd_id": 0,
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:        "type": "bluestore"
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]:    }
Nov 25 12:34:39 np0005535469 peaceful_liskov[437577]: }
Nov 25 12:34:39 np0005535469 systemd[1]: libpod-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope: Deactivated successfully.
Nov 25 12:34:39 np0005535469 systemd[1]: libpod-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope: Consumed 1.251s CPU time.
Nov 25 12:34:39 np0005535469 podman[437611]: 2025-11-25 17:34:39.490968144 +0000 UTC m=+0.052448009 container died eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:34:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 12:34:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965-merged.mount: Deactivated successfully.
Nov 25 12:34:39 np0005535469 podman[437611]: 2025-11-25 17:34:39.574521349 +0000 UTC m=+0.136001184 container remove eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:34:39 np0005535469 systemd[1]: libpod-conmon-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope: Deactivated successfully.
Nov 25 12:34:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:34:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:34:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:34:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:34:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d181ffa5-d3ef-4a0c-b27d-86b07695ca1a does not exist
Nov 25 12:34:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8eb25c9f-0574-478d-9184-476883d5abce does not exist
Nov 25 12:34:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:34:40
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control']
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:34:40 np0005535469 nova_compute[254092]: 2025-11-25 17:34:40.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:34:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:34:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:34:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 12:34:42 np0005535469 nova_compute[254092]: 2025-11-25 17:34:42.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 12:34:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 25 12:34:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 25 12:34:44 np0005535469 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 25 12:34:45 np0005535469 nova_compute[254092]: 2025-11-25 17:34:45.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 829 B/s wr, 16 op/s
Nov 25 12:34:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Nov 25 12:34:48 np0005535469 nova_compute[254092]: 2025-11-25 17:34:48.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Nov 25 12:34:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:50 np0005535469 nova_compute[254092]: 2025-11-25 17:34:50.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:34:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:34:53 np0005535469 nova_compute[254092]: 2025-11-25 17:34:53.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:34:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:34:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871584981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:34:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:34:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871584981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.577 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:34:55 np0005535469 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:34:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:34:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394049784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.064 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.251 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.253 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.253 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.254 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.343 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.344 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.360 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:34:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:34:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702891268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.958 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.969 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.985 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.988 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:34:56 np0005535469 nova_compute[254092]: 2025-11-25 17:34:56.989 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:34:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:57 np0005535469 nova_compute[254092]: 2025-11-25 17:34:57.976 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:57 np0005535469 nova_compute[254092]: 2025-11-25 17:34:57.977 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:57 np0005535469 nova_compute[254092]: 2025-11-25 17:34:57.977 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:34:58 np0005535469 nova_compute[254092]: 2025-11-25 17:34:58.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:34:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:34:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:00 np0005535469 nova_compute[254092]: 2025-11-25 17:35:00.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:01 np0005535469 nova_compute[254092]: 2025-11-25 17:35:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:01 np0005535469 nova_compute[254092]: 2025-11-25 17:35:01.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:35:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:01 np0005535469 podman[437721]: 2025-11-25 17:35:01.68875949 +0000 UTC m=+0.084531752 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 12:35:01 np0005535469 podman[437720]: 2025-11-25 17:35:01.726988641 +0000 UTC m=+0.130356100 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:35:01 np0005535469 podman[437722]: 2025-11-25 17:35:01.737218099 +0000 UTC m=+0.128705415 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 12:35:03 np0005535469 nova_compute[254092]: 2025-11-25 17:35:03.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:04 np0005535469 nova_compute[254092]: 2025-11-25 17:35:04.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:04 np0005535469 nova_compute[254092]: 2025-11-25 17:35:04.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:35:04 np0005535469 nova_compute[254092]: 2025-11-25 17:35:04.500 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:35:04 np0005535469 nova_compute[254092]: 2025-11-25 17:35:04.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:35:04 np0005535469 nova_compute[254092]: 2025-11-25 17:35:04.521 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:05 np0005535469 nova_compute[254092]: 2025-11-25 17:35:05.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:06 np0005535469 nova_compute[254092]: 2025-11-25 17:35:06.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:08 np0005535469 nova_compute[254092]: 2025-11-25 17:35:08.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:35:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:35:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:35:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:35:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:35:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:35:10 np0005535469 nova_compute[254092]: 2025-11-25 17:35:10.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:12 np0005535469 nova_compute[254092]: 2025-11-25 17:35:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:13 np0005535469 nova_compute[254092]: 2025-11-25 17:35:13.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:35:13.675 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:35:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:35:13.676 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:35:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:35:13.676 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:35:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:15 np0005535469 nova_compute[254092]: 2025-11-25 17:35:15.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:15 np0005535469 nova_compute[254092]: 2025-11-25 17:35:15.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:18 np0005535469 nova_compute[254092]: 2025-11-25 17:35:18.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:20 np0005535469 nova_compute[254092]: 2025-11-25 17:35:20.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:23 np0005535469 nova_compute[254092]: 2025-11-25 17:35:23.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:25 np0005535469 nova_compute[254092]: 2025-11-25 17:35:25.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:28 np0005535469 nova_compute[254092]: 2025-11-25 17:35:28.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:30 np0005535469 nova_compute[254092]: 2025-11-25 17:35:30.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:32 np0005535469 podman[437784]: 2025-11-25 17:35:32.646136357 +0000 UTC m=+0.064940898 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 12:35:32 np0005535469 podman[437785]: 2025-11-25 17:35:32.665764301 +0000 UTC m=+0.070075578 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 12:35:32 np0005535469 podman[437786]: 2025-11-25 17:35:32.727956664 +0000 UTC m=+0.136679991 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:35:33 np0005535469 nova_compute[254092]: 2025-11-25 17:35:33.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:35 np0005535469 nova_compute[254092]: 2025-11-25 17:35:35.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:38 np0005535469 nova_compute[254092]: 2025-11-25 17:35:38.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:35:40
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'backups']
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:35:40 np0005535469 nova_compute[254092]: 2025-11-25 17:35:40.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 86bbf3ab-00be-4bc6-957d-2fd5c3428cbb does not exist
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 93f8f155-a37d-4a5f-9aa8-86191c7c6407 does not exist
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 58a5d2a9-3cc0-467b-a973-7249307bce5f does not exist
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:35:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:35:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.539446975 +0000 UTC m=+0.046534177 container create 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:35:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:41 np0005535469 systemd[1]: Started libpod-conmon-6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373.scope.
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.520261242 +0000 UTC m=+0.027348524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:35:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.668628901 +0000 UTC m=+0.175716153 container init 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.677825692 +0000 UTC m=+0.184912924 container start 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.684495323 +0000 UTC m=+0.191582575 container attach 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:35:41 np0005535469 sharp_rhodes[438135]: 167 167
Nov 25 12:35:41 np0005535469 systemd[1]: libpod-6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373.scope: Deactivated successfully.
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.69172655 +0000 UTC m=+0.198813762 container died 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:35:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-18e34b47b7b1f9a7d94f5f6fff758edd9979f68fe861afc30918531e6b7c8877-merged.mount: Deactivated successfully.
Nov 25 12:35:41 np0005535469 podman[438119]: 2025-11-25 17:35:41.734760771 +0000 UTC m=+0.241847973 container remove 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:35:41 np0005535469 systemd[1]: libpod-conmon-6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373.scope: Deactivated successfully.
Nov 25 12:35:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:35:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:35:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:35:41 np0005535469 podman[438159]: 2025-11-25 17:35:41.951052398 +0000 UTC m=+0.066149581 container create a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:35:42 np0005535469 systemd[1]: Started libpod-conmon-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope.
Nov 25 12:35:42 np0005535469 podman[438159]: 2025-11-25 17:35:41.929571824 +0000 UTC m=+0.044669047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:35:42 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:35:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:42 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:42 np0005535469 podman[438159]: 2025-11-25 17:35:42.051210294 +0000 UTC m=+0.166307567 container init a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:35:42 np0005535469 podman[438159]: 2025-11-25 17:35:42.063918961 +0000 UTC m=+0.179016154 container start a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:35:42 np0005535469 podman[438159]: 2025-11-25 17:35:42.068234888 +0000 UTC m=+0.183332091 container attach a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:35:43 np0005535469 nova_compute[254092]: 2025-11-25 17:35:43.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:43 np0005535469 peaceful_knuth[438175]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:35:43 np0005535469 peaceful_knuth[438175]: --> relative data size: 1.0
Nov 25 12:35:43 np0005535469 peaceful_knuth[438175]: --> All data devices are unavailable
Nov 25 12:35:43 np0005535469 systemd[1]: libpod-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope: Deactivated successfully.
Nov 25 12:35:43 np0005535469 systemd[1]: libpod-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope: Consumed 1.068s CPU time.
Nov 25 12:35:43 np0005535469 podman[438159]: 2025-11-25 17:35:43.178245161 +0000 UTC m=+1.293342354 container died a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:35:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109-merged.mount: Deactivated successfully.
Nov 25 12:35:43 np0005535469 podman[438159]: 2025-11-25 17:35:43.282589362 +0000 UTC m=+1.397686625 container remove a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:35:43 np0005535469 systemd[1]: libpod-conmon-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope: Deactivated successfully.
Nov 25 12:35:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:43 np0005535469 podman[438358]: 2025-11-25 17:35:43.917438222 +0000 UTC m=+0.054207876 container create e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:35:43 np0005535469 systemd[1]: Started libpod-conmon-e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7.scope.
Nov 25 12:35:43 np0005535469 podman[438358]: 2025-11-25 17:35:43.88688148 +0000 UTC m=+0.023651214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:35:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:35:44 np0005535469 podman[438358]: 2025-11-25 17:35:44.010073113 +0000 UTC m=+0.146842837 container init e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:35:44 np0005535469 podman[438358]: 2025-11-25 17:35:44.017002362 +0000 UTC m=+0.153772016 container start e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:35:44 np0005535469 podman[438358]: 2025-11-25 17:35:44.021616657 +0000 UTC m=+0.158386381 container attach e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:35:44 np0005535469 agitated_shockley[438374]: 167 167
Nov 25 12:35:44 np0005535469 systemd[1]: libpod-e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7.scope: Deactivated successfully.
Nov 25 12:35:44 np0005535469 podman[438358]: 2025-11-25 17:35:44.025144544 +0000 UTC m=+0.161914198 container died e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:35:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-24fee3c7070f2c40bee1150a903a04321e31e3bf5c7fa9a49a0f626b328c7063-merged.mount: Deactivated successfully.
Nov 25 12:35:44 np0005535469 podman[438358]: 2025-11-25 17:35:44.064365062 +0000 UTC m=+0.201134726 container remove e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:35:44 np0005535469 systemd[1]: libpod-conmon-e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7.scope: Deactivated successfully.
Nov 25 12:35:44 np0005535469 podman[438398]: 2025-11-25 17:35:44.247431984 +0000 UTC m=+0.045048277 container create 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 12:35:44 np0005535469 systemd[1]: Started libpod-conmon-6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625.scope.
Nov 25 12:35:44 np0005535469 podman[438398]: 2025-11-25 17:35:44.225332453 +0000 UTC m=+0.022948656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:35:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:35:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:44 np0005535469 podman[438398]: 2025-11-25 17:35:44.357183431 +0000 UTC m=+0.154799674 container init 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:35:44 np0005535469 podman[438398]: 2025-11-25 17:35:44.367338568 +0000 UTC m=+0.164954751 container start 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:35:44 np0005535469 podman[438398]: 2025-11-25 17:35:44.373986419 +0000 UTC m=+0.171602652 container attach 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:35:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:45 np0005535469 distracted_moser[438414]: {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:    "0": [
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:        {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "devices": [
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "/dev/loop3"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            ],
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_name": "ceph_lv0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_size": "21470642176",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "name": "ceph_lv0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "tags": {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cluster_name": "ceph",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.crush_device_class": "",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.encrypted": "0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osd_id": "0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.type": "block",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.vdo": "0"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            },
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "type": "block",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "vg_name": "ceph_vg0"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:        }
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:    ],
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:    "1": [
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:        {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "devices": [
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "/dev/loop4"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            ],
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_name": "ceph_lv1",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_size": "21470642176",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "name": "ceph_lv1",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "tags": {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cluster_name": "ceph",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.crush_device_class": "",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.encrypted": "0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osd_id": "1",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.type": "block",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.vdo": "0"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            },
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "type": "block",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "vg_name": "ceph_vg1"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:        }
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:    ],
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:    "2": [
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:        {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "devices": [
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "/dev/loop5"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            ],
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_name": "ceph_lv2",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_size": "21470642176",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "name": "ceph_lv2",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "tags": {
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.cluster_name": "ceph",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.crush_device_class": "",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.encrypted": "0",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osd_id": "2",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.type": "block",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:                "ceph.vdo": "0"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            },
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "type": "block",
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:            "vg_name": "ceph_vg2"
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:        }
Nov 25 12:35:45 np0005535469 distracted_moser[438414]:    ]
Nov 25 12:35:45 np0005535469 distracted_moser[438414]: }
Nov 25 12:35:45 np0005535469 systemd[1]: libpod-6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625.scope: Deactivated successfully.
Nov 25 12:35:45 np0005535469 podman[438398]: 2025-11-25 17:35:45.278633663 +0000 UTC m=+1.076249886 container died 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:35:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2-merged.mount: Deactivated successfully.
Nov 25 12:35:45 np0005535469 podman[438398]: 2025-11-25 17:35:45.344014632 +0000 UTC m=+1.141630815 container remove 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:35:45 np0005535469 systemd[1]: libpod-conmon-6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625.scope: Deactivated successfully.
Nov 25 12:35:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:45 np0005535469 nova_compute[254092]: 2025-11-25 17:35:45.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.030178249 +0000 UTC m=+0.044671247 container create 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:35:46 np0005535469 systemd[1]: Started libpod-conmon-62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e.scope.
Nov 25 12:35:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.010793401 +0000 UTC m=+0.025286449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.113892028 +0000 UTC m=+0.128385026 container init 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.119659684 +0000 UTC m=+0.134152732 container start 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.123483239 +0000 UTC m=+0.137976257 container attach 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:35:46 np0005535469 condescending_swartz[438591]: 167 167
Nov 25 12:35:46 np0005535469 systemd[1]: libpod-62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e.scope: Deactivated successfully.
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.125780771 +0000 UTC m=+0.140273779 container died 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:35:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cd4a3a7d5b8ff599d3d8db73a9f401ff96a31600c62ead7eafdb81c418aa4087-merged.mount: Deactivated successfully.
Nov 25 12:35:46 np0005535469 podman[438575]: 2025-11-25 17:35:46.200802773 +0000 UTC m=+0.215295811 container remove 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:35:46 np0005535469 systemd[1]: libpod-conmon-62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e.scope: Deactivated successfully.
Nov 25 12:35:46 np0005535469 podman[438616]: 2025-11-25 17:35:46.369791964 +0000 UTC m=+0.047900886 container create 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 12:35:46 np0005535469 systemd[1]: Started libpod-conmon-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope.
Nov 25 12:35:46 np0005535469 podman[438616]: 2025-11-25 17:35:46.346633252 +0000 UTC m=+0.024742265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:35:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:35:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:35:46 np0005535469 podman[438616]: 2025-11-25 17:35:46.496331328 +0000 UTC m=+0.174440280 container init 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:35:46 np0005535469 podman[438616]: 2025-11-25 17:35:46.504417798 +0000 UTC m=+0.182526750 container start 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:35:46 np0005535469 podman[438616]: 2025-11-25 17:35:46.510079982 +0000 UTC m=+0.188188884 container attach 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:35:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]: {
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "osd_id": 1,
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "type": "bluestore"
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:    },
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "osd_id": 2,
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "type": "bluestore"
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:    },
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "osd_id": 0,
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:        "type": "bluestore"
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]:    }
Nov 25 12:35:47 np0005535469 jovial_swirles[438632]: }
Nov 25 12:35:47 np0005535469 systemd[1]: libpod-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope: Deactivated successfully.
Nov 25 12:35:47 np0005535469 podman[438616]: 2025-11-25 17:35:47.686542505 +0000 UTC m=+1.364651447 container died 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:35:47 np0005535469 systemd[1]: libpod-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope: Consumed 1.189s CPU time.
Nov 25 12:35:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e-merged.mount: Deactivated successfully.
Nov 25 12:35:47 np0005535469 podman[438616]: 2025-11-25 17:35:47.760249971 +0000 UTC m=+1.438358903 container remove 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:35:47 np0005535469 systemd[1]: libpod-conmon-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope: Deactivated successfully.
Nov 25 12:35:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:35:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:35:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:35:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:35:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 444caea5-b475-4157-92c3-6c975755088a does not exist
Nov 25 12:35:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 80fe0c97-4267-4d88-9648-8f7fecd1b5b4 does not exist
Nov 25 12:35:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:35:47 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:35:48 np0005535469 nova_compute[254092]: 2025-11-25 17:35:48.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:35:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:50 np0005535469 nova_compute[254092]: 2025-11-25 17:35:50.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:35:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:35:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:35:53 np0005535469 nova_compute[254092]: 2025-11-25 17:35:53.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:35:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:35:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1678690733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:35:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:35:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1678690733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:35:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:35:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939635113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:35:55 np0005535469 nova_compute[254092]: 2025-11-25 17:35:55.991 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.153 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.155 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.397 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.398 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.477 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.572 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.572 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.584 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.603 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:35:56 np0005535469 nova_compute[254092]: 2025-11-25 17:35:56.620 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:35:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:35:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918263461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:35:57 np0005535469 nova_compute[254092]: 2025-11-25 17:35:57.109 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:35:57 np0005535469 nova_compute[254092]: 2025-11-25 17:35:57.116 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:35:57 np0005535469 nova_compute[254092]: 2025-11-25 17:35:57.129 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:35:57 np0005535469 nova_compute[254092]: 2025-11-25 17:35:57.130 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:35:57 np0005535469 nova_compute[254092]: 2025-11-25 17:35:57.130 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:35:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:58 np0005535469 nova_compute[254092]: 2025-11-25 17:35:58.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:35:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:35:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:00 np0005535469 nova_compute[254092]: 2025-11-25 17:36:00.130 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:00 np0005535469 nova_compute[254092]: 2025-11-25 17:36:00.131 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:00 np0005535469 nova_compute[254092]: 2025-11-25 17:36:00.131 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:00 np0005535469 nova_compute[254092]: 2025-11-25 17:36:00.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:02 np0005535469 nova_compute[254092]: 2025-11-25 17:36:02.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:02 np0005535469 nova_compute[254092]: 2025-11-25 17:36:02.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:36:03 np0005535469 nova_compute[254092]: 2025-11-25 17:36:03.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:03 np0005535469 podman[438774]: 2025-11-25 17:36:03.697838191 +0000 UTC m=+0.092665623 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 12:36:03 np0005535469 podman[438773]: 2025-11-25 17:36:03.713163929 +0000 UTC m=+0.108475824 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:36:03 np0005535469 podman[438775]: 2025-11-25 17:36:03.782886656 +0000 UTC m=+0.178265113 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:36:04 np0005535469 nova_compute[254092]: 2025-11-25 17:36:04.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:04 np0005535469 nova_compute[254092]: 2025-11-25 17:36:04.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:36:04 np0005535469 nova_compute[254092]: 2025-11-25 17:36:04.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:36:04 np0005535469 nova_compute[254092]: 2025-11-25 17:36:04.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:36:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:05 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 12:36:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 12:36:05 np0005535469 nova_compute[254092]: 2025-11-25 17:36:05.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:06 np0005535469 nova_compute[254092]: 2025-11-25 17:36:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:06 np0005535469 nova_compute[254092]: 2025-11-25 17:36:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 25 12:36:08 np0005535469 nova_compute[254092]: 2025-11-25 17:36:08.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 25 12:36:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:36:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:36:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:36:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:36:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:36:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:36:10 np0005535469 nova_compute[254092]: 2025-11-25 17:36:10.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 12:36:12 np0005535469 nova_compute[254092]: 2025-11-25 17:36:12.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:13 np0005535469 nova_compute[254092]: 2025-11-25 17:36:13.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:13 np0005535469 nova_compute[254092]: 2025-11-25 17:36:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 12:36:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:36:13.676 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:36:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:36:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:36:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:36:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.626854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174626905, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1441, "num_deletes": 252, "total_data_size": 2204578, "memory_usage": 2243624, "flush_reason": "Manual Compaction"}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174646413, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 2171544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69129, "largest_seqno": 70569, "table_properties": {"data_size": 2164759, "index_size": 3919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14246, "raw_average_key_size": 20, "raw_value_size": 2151051, "raw_average_value_size": 3033, "num_data_blocks": 175, "num_entries": 709, "num_filter_entries": 709, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092030, "oldest_key_time": 1764092030, "file_creation_time": 1764092174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 19651 microseconds, and 7280 cpu microseconds.
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.646503) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 2171544 bytes OK
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.646533) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.647658) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.647675) EVENT_LOG_v1 {"time_micros": 1764092174647669, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.647700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2198223, prev total WAL file size 2198223, number of live WAL files 2.
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.648524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(2120KB)], [161(9042KB)]
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174648567, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 11430643, "oldest_snapshot_seqno": -1}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8854 keys, 9661007 bytes, temperature: kUnknown
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174709444, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9661007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9606298, "index_size": 31492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 232365, "raw_average_key_size": 26, "raw_value_size": 9452737, "raw_average_value_size": 1067, "num_data_blocks": 1213, "num_entries": 8854, "num_filter_entries": 8854, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.709940) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9661007 bytes
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.711307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 158.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.4) OK, records in: 9374, records dropped: 520 output_compression: NoCompression
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.711346) EVENT_LOG_v1 {"time_micros": 1764092174711326, "job": 100, "event": "compaction_finished", "compaction_time_micros": 61030, "compaction_time_cpu_micros": 24417, "output_level": 6, "num_output_files": 1, "total_output_size": 9661007, "num_input_records": 9374, "num_output_records": 8854, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174712503, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174716126, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.648459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:36:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:36:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 12:36:15 np0005535469 nova_compute[254092]: 2025-11-25 17:36:15.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Nov 25 12:36:18 np0005535469 nova_compute[254092]: 2025-11-25 17:36:18.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 12:36:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:20 np0005535469 nova_compute[254092]: 2025-11-25 17:36:20.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 12:36:22 np0005535469 systemd-logind[791]: New session 52 of user zuul.
Nov 25 12:36:22 np0005535469 systemd[1]: Started Session 52 of User zuul.
Nov 25 12:36:23 np0005535469 nova_compute[254092]: 2025-11-25 17:36:23.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:36:25.575 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:36:25 np0005535469 nova_compute[254092]: 2025-11-25 17:36:25.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:25 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:36:25.576 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:36:25 np0005535469 nova_compute[254092]: 2025-11-25 17:36:25.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:28 np0005535469 nova_compute[254092]: 2025-11-25 17:36:28.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:29 np0005535469 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 12:36:29 np0005535469 systemd-logind[791]: Session 52 logged out. Waiting for processes to exit.
Nov 25 12:36:29 np0005535469 systemd-logind[791]: Removed session 52.
Nov 25 12:36:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:30 np0005535469 nova_compute[254092]: 2025-11-25 17:36:30.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:31 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:36:31.579 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:36:33 np0005535469 nova_compute[254092]: 2025-11-25 17:36:33.387 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:34 np0005535469 podman[439095]: 2025-11-25 17:36:34.753847181 +0000 UTC m=+0.150395645 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 12:36:34 np0005535469 podman[439094]: 2025-11-25 17:36:34.7927306 +0000 UTC m=+0.189281664 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 12:36:34 np0005535469 podman[439096]: 2025-11-25 17:36:34.81368305 +0000 UTC m=+0.208808834 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:36:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:35 np0005535469 nova_compute[254092]: 2025-11-25 17:36:35.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:38 np0005535469 nova_compute[254092]: 2025-11-25 17:36:38.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:36:40
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'volumes']
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:36:40 np0005535469 nova_compute[254092]: 2025-11-25 17:36:40.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:36:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:36:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:43 np0005535469 nova_compute[254092]: 2025-11-25 17:36:43.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:45 np0005535469 nova_compute[254092]: 2025-11-25 17:36:45.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:48 np0005535469 nova_compute[254092]: 2025-11-25 17:36:48.516 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:36:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:36:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 52902d94-519a-4ea1-b8a3-2a6a0a1a8a81 does not exist
Nov 25 12:36:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 30966eff-ac50-4586-82b3-80c084e5b896 does not exist
Nov 25 12:36:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 39ddf466-7e27-4d64-bdde-1d4c27b9a175 does not exist
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:36:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:50.021702007 +0000 UTC m=+0.057318411 container create 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:36:50 np0005535469 systemd[1]: Started libpod-conmon-3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6.scope.
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:49.996514961 +0000 UTC m=+0.032131375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:36:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:50.132516044 +0000 UTC m=+0.168132498 container init 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:50.143010029 +0000 UTC m=+0.178626433 container start 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:50.146703161 +0000 UTC m=+0.182319615 container attach 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 12:36:50 np0005535469 xenodochial_fermat[439564]: 167 167
Nov 25 12:36:50 np0005535469 systemd[1]: libpod-3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6.scope: Deactivated successfully.
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:50.152733775 +0000 UTC m=+0.188350149 container died 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:36:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8e1b89a7546aaed64659f5c15a7bc41fb8ba71ae5f50df4f3f9c208004c7dd45-merged.mount: Deactivated successfully.
Nov 25 12:36:50 np0005535469 podman[439548]: 2025-11-25 17:36:50.205284945 +0000 UTC m=+0.240901319 container remove 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:36:50 np0005535469 systemd[1]: libpod-conmon-3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6.scope: Deactivated successfully.
Nov 25 12:36:50 np0005535469 podman[439590]: 2025-11-25 17:36:50.407298555 +0000 UTC m=+0.054750722 container create f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:36:50 np0005535469 systemd[1]: Started libpod-conmon-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope.
Nov 25 12:36:50 np0005535469 podman[439590]: 2025-11-25 17:36:50.374505392 +0000 UTC m=+0.021957539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:36:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:50 np0005535469 podman[439590]: 2025-11-25 17:36:50.509678292 +0000 UTC m=+0.157130439 container init f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:36:50 np0005535469 podman[439590]: 2025-11-25 17:36:50.523676653 +0000 UTC m=+0.171128800 container start f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:36:50 np0005535469 podman[439590]: 2025-11-25 17:36:50.526851789 +0000 UTC m=+0.174303936 container attach f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:36:50 np0005535469 nova_compute[254092]: 2025-11-25 17:36:50.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:51 np0005535469 elastic_grothendieck[439607]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:36:51 np0005535469 elastic_grothendieck[439607]: --> relative data size: 1.0
Nov 25 12:36:51 np0005535469 elastic_grothendieck[439607]: --> All data devices are unavailable
Nov 25 12:36:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:51 np0005535469 podman[439590]: 2025-11-25 17:36:51.600522969 +0000 UTC m=+1.247975096 container died f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:36:51 np0005535469 systemd[1]: libpod-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope: Deactivated successfully.
Nov 25 12:36:51 np0005535469 systemd[1]: libpod-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope: Consumed 1.014s CPU time.
Nov 25 12:36:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a-merged.mount: Deactivated successfully.
Nov 25 12:36:51 np0005535469 podman[439590]: 2025-11-25 17:36:51.675545411 +0000 UTC m=+1.322997568 container remove f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:36:51 np0005535469 systemd[1]: libpod-conmon-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope: Deactivated successfully.
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:36:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.540693933 +0000 UTC m=+0.060367494 container create d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:36:52 np0005535469 systemd[1]: Started libpod-conmon-d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990.scope.
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.511514899 +0000 UTC m=+0.031188550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:36:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.643842391 +0000 UTC m=+0.163515992 container init d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.653374591 +0000 UTC m=+0.173048192 container start d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.657330739 +0000 UTC m=+0.177004330 container attach d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:36:52 np0005535469 epic_diffie[439806]: 167 167
Nov 25 12:36:52 np0005535469 systemd[1]: libpod-d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990.scope: Deactivated successfully.
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.659552949 +0000 UTC m=+0.179226510 container died d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:36:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cd84b395dbfb62b8cd6367f7cb09d59329a1e438140da89d568e1603bb8bbd06-merged.mount: Deactivated successfully.
Nov 25 12:36:52 np0005535469 podman[439789]: 2025-11-25 17:36:52.706467207 +0000 UTC m=+0.226140768 container remove d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:36:52 np0005535469 systemd[1]: libpod-conmon-d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990.scope: Deactivated successfully.
Nov 25 12:36:52 np0005535469 podman[439828]: 2025-11-25 17:36:52.929542549 +0000 UTC m=+0.059090490 container create a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:36:52 np0005535469 systemd[1]: Started libpod-conmon-a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9.scope.
Nov 25 12:36:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:36:53 np0005535469 podman[439828]: 2025-11-25 17:36:52.91231868 +0000 UTC m=+0.041866641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:36:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:53 np0005535469 podman[439828]: 2025-11-25 17:36:53.017822263 +0000 UTC m=+0.147370214 container init a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:36:53 np0005535469 podman[439828]: 2025-11-25 17:36:53.030444666 +0000 UTC m=+0.159992607 container start a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:36:53 np0005535469 podman[439828]: 2025-11-25 17:36:53.034166257 +0000 UTC m=+0.163714248 container attach a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:36:53 np0005535469 nova_compute[254092]: 2025-11-25 17:36:53.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:53 np0005535469 elated_rubin[439844]: {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:    "0": [
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:        {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "devices": [
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "/dev/loop3"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            ],
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_name": "ceph_lv0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_size": "21470642176",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "name": "ceph_lv0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "tags": {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cluster_name": "ceph",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.crush_device_class": "",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.encrypted": "0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osd_id": "0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.type": "block",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.vdo": "0"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            },
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "type": "block",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "vg_name": "ceph_vg0"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:        }
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:    ],
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:    "1": [
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:        {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "devices": [
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "/dev/loop4"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            ],
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_name": "ceph_lv1",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_size": "21470642176",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "name": "ceph_lv1",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "tags": {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cluster_name": "ceph",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.crush_device_class": "",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.encrypted": "0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osd_id": "1",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.type": "block",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.vdo": "0"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            },
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "type": "block",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "vg_name": "ceph_vg1"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:        }
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:    ],
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:    "2": [
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:        {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "devices": [
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "/dev/loop5"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            ],
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_name": "ceph_lv2",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_size": "21470642176",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "name": "ceph_lv2",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "tags": {
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.cluster_name": "ceph",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.crush_device_class": "",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.encrypted": "0",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osd_id": "2",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.type": "block",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:                "ceph.vdo": "0"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            },
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "type": "block",
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:            "vg_name": "ceph_vg2"
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:        }
Nov 25 12:36:53 np0005535469 elated_rubin[439844]:    ]
Nov 25 12:36:53 np0005535469 elated_rubin[439844]: }
Nov 25 12:36:53 np0005535469 systemd[1]: libpod-a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9.scope: Deactivated successfully.
Nov 25 12:36:53 np0005535469 podman[439828]: 2025-11-25 17:36:53.868357817 +0000 UTC m=+0.997905788 container died a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 12:36:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36-merged.mount: Deactivated successfully.
Nov 25 12:36:53 np0005535469 podman[439828]: 2025-11-25 17:36:53.941228371 +0000 UTC m=+1.070776342 container remove a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:36:53 np0005535469 systemd[1]: libpod-conmon-a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9.scope: Deactivated successfully.
Nov 25 12:36:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:36:54 np0005535469 podman[440006]: 2025-11-25 17:36:54.847144434 +0000 UTC m=+0.061844114 container create 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:36:54 np0005535469 systemd[1]: Started libpod-conmon-365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4.scope.
Nov 25 12:36:54 np0005535469 podman[440006]: 2025-11-25 17:36:54.818727971 +0000 UTC m=+0.033427701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:36:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:36:54 np0005535469 podman[440006]: 2025-11-25 17:36:54.956289785 +0000 UTC m=+0.170989535 container init 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:36:54 np0005535469 podman[440006]: 2025-11-25 17:36:54.969561356 +0000 UTC m=+0.184261016 container start 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:36:54 np0005535469 podman[440006]: 2025-11-25 17:36:54.974129651 +0000 UTC m=+0.188829351 container attach 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:36:54 np0005535469 elastic_pasteur[440022]: 167 167
Nov 25 12:36:54 np0005535469 systemd[1]: libpod-365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4.scope: Deactivated successfully.
Nov 25 12:36:54 np0005535469 podman[440006]: 2025-11-25 17:36:54.979278571 +0000 UTC m=+0.193978261 container died 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:36:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9e4228cc62361bc74e4b1724bb2b5ce0a292480286a85d48516fcea08e2bc883-merged.mount: Deactivated successfully.
Nov 25 12:36:55 np0005535469 podman[440006]: 2025-11-25 17:36:55.033189118 +0000 UTC m=+0.247888808 container remove 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:36:55 np0005535469 systemd[1]: libpod-conmon-365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4.scope: Deactivated successfully.
Nov 25 12:36:55 np0005535469 podman[440048]: 2025-11-25 17:36:55.297541955 +0000 UTC m=+0.078627232 container create f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:36:55 np0005535469 systemd[1]: Started libpod-conmon-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope.
Nov 25 12:36:55 np0005535469 podman[440048]: 2025-11-25 17:36:55.267129948 +0000 UTC m=+0.048215285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:36:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:36:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:36:55 np0005535469 podman[440048]: 2025-11-25 17:36:55.391119123 +0000 UTC m=+0.172204390 container init f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:36:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:36:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4153111508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:36:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:36:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4153111508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:36:55 np0005535469 podman[440048]: 2025-11-25 17:36:55.408419844 +0000 UTC m=+0.189505131 container start f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:36:55 np0005535469 podman[440048]: 2025-11-25 17:36:55.412548206 +0000 UTC m=+0.193633493 container attach f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:36:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:36:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423538353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:36:55 np0005535469 nova_compute[254092]: 2025-11-25 17:36:55.974 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.124 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.125 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.125 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.125 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.194 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.194 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.213 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]: {
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "osd_id": 1,
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "type": "bluestore"
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:    },
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "osd_id": 2,
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "type": "bluestore"
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:    },
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "osd_id": 0,
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:        "type": "bluestore"
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]:    }
Nov 25 12:36:56 np0005535469 elastic_haslett[440065]: }
Nov 25 12:36:56 np0005535469 systemd[1]: libpod-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope: Deactivated successfully.
Nov 25 12:36:56 np0005535469 systemd[1]: libpod-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope: Consumed 1.144s CPU time.
Nov 25 12:36:56 np0005535469 podman[440140]: 2025-11-25 17:36:56.605998987 +0000 UTC m=+0.031390837 container died f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:36:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0-merged.mount: Deactivated successfully.
Nov 25 12:36:56 np0005535469 podman[440140]: 2025-11-25 17:36:56.655678668 +0000 UTC m=+0.081070518 container remove f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:36:56 np0005535469 systemd[1]: libpod-conmon-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope: Deactivated successfully.
Nov 25 12:36:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:36:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796075242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.690 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.698 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:36:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:36:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:36:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.720 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:36:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c616c376-5dbb-4aca-89d4-91ed880c745f does not exist
Nov 25 12:36:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d3f7b4c3-67dc-4aed-9ebe-4230a3fae304 does not exist
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.722 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:36:56 np0005535469 nova_compute[254092]: 2025-11-25 17:36:56.722 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:36:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:36:58 np0005535469 nova_compute[254092]: 2025-11-25 17:36:58.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:36:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:36:59 np0005535469 nova_compute[254092]: 2025-11-25 17:36:59.724 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:59 np0005535469 nova_compute[254092]: 2025-11-25 17:36:59.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:36:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:00 np0005535469 nova_compute[254092]: 2025-11-25 17:37:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:37:00 np0005535469 nova_compute[254092]: 2025-11-25 17:37:00.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:03 np0005535469 nova_compute[254092]: 2025-11-25 17:37:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:37:03 np0005535469 nova_compute[254092]: 2025-11-25 17:37:03.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:37:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:03 np0005535469 nova_compute[254092]: 2025-11-25 17:37:03.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:05 np0005535469 podman[440208]: 2025-11-25 17:37:05.665881089 +0000 UTC m=+0.071960840 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:37:05 np0005535469 nova_compute[254092]: 2025-11-25 17:37:05.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:05 np0005535469 podman[440207]: 2025-11-25 17:37:05.687656931 +0000 UTC m=+0.102862200 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 12:37:05 np0005535469 podman[440209]: 2025-11-25 17:37:05.690685074 +0000 UTC m=+0.103429936 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 12:37:06 np0005535469 nova_compute[254092]: 2025-11-25 17:37:06.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:37:06 np0005535469 nova_compute[254092]: 2025-11-25 17:37:06.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:37:06 np0005535469 nova_compute[254092]: 2025-11-25 17:37:06.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:37:06 np0005535469 nova_compute[254092]: 2025-11-25 17:37:06.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:37:06 np0005535469 nova_compute[254092]: 2025-11-25 17:37:06.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:37:06 np0005535469 nova_compute[254092]: 2025-11-25 17:37:06.519 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:37:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:08 np0005535469 nova_compute[254092]: 2025-11-25 17:37:08.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.762949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229763019, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 705, "num_deletes": 255, "total_data_size": 867604, "memory_usage": 881704, "flush_reason": "Manual Compaction"}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229770779, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 848800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70570, "largest_seqno": 71274, "table_properties": {"data_size": 845114, "index_size": 1529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8141, "raw_average_key_size": 18, "raw_value_size": 837709, "raw_average_value_size": 1939, "num_data_blocks": 68, "num_entries": 432, "num_filter_entries": 432, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092175, "oldest_key_time": 1764092175, "file_creation_time": 1764092229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 7842 microseconds, and 2947 cpu microseconds.
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.770808) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 848800 bytes OK
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.770822) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772187) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772200) EVENT_LOG_v1 {"time_micros": 1764092229772195, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 863926, prev total WAL file size 890414, number of live WAL files 2.
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772809) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303137' seq:72057594037927935, type:22 .. '6C6F676D0033323638' seq:0, type:0; will stop at (end)
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(828KB)], [164(9434KB)]
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229772850, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10509807, "oldest_snapshot_seqno": -1}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8765 keys, 10396532 bytes, temperature: kUnknown
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229840275, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10396532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10341020, "index_size": 32487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 231420, "raw_average_key_size": 26, "raw_value_size": 10187550, "raw_average_value_size": 1162, "num_data_blocks": 1256, "num_entries": 8765, "num_filter_entries": 8765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.840851) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10396532 bytes
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.843687) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 153.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(24.6) write-amplify(12.2) OK, records in: 9286, records dropped: 521 output_compression: NoCompression
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.843728) EVENT_LOG_v1 {"time_micros": 1764092229843710, "job": 102, "event": "compaction_finished", "compaction_time_micros": 67573, "compaction_time_cpu_micros": 36972, "output_level": 6, "num_output_files": 1, "total_output_size": 10396532, "num_input_records": 9286, "num_output_records": 8765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229844259, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229847111, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:37:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:37:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:37:10 np0005535469 nova_compute[254092]: 2025-11-25 17:37:10.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:13 np0005535469 nova_compute[254092]: 2025-11-25 17:37:13.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:37:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:37:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:37:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:37:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:37:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:37:14 np0005535469 nova_compute[254092]: 2025-11-25 17:37:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:37:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:15 np0005535469 nova_compute[254092]: 2025-11-25 17:37:15.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:18 np0005535469 nova_compute[254092]: 2025-11-25 17:37:18.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:20 np0005535469 nova_compute[254092]: 2025-11-25 17:37:20.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:23 np0005535469 nova_compute[254092]: 2025-11-25 17:37:23.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:25 np0005535469 nova_compute[254092]: 2025-11-25 17:37:25.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:28 np0005535469 nova_compute[254092]: 2025-11-25 17:37:28.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:30 np0005535469 nova_compute[254092]: 2025-11-25 17:37:30.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:33 np0005535469 nova_compute[254092]: 2025-11-25 17:37:33.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:35 np0005535469 nova_compute[254092]: 2025-11-25 17:37:35.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:36 np0005535469 nova_compute[254092]: 2025-11-25 17:37:36.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:37:36.672 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 12:37:36 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:37:36.674 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 12:37:36 np0005535469 podman[440271]: 2025-11-25 17:37:36.676066647 +0000 UTC m=+0.074221061 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:37:36 np0005535469 podman[440272]: 2025-11-25 17:37:36.694033707 +0000 UTC m=+0.087810942 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 12:37:36 np0005535469 podman[440273]: 2025-11-25 17:37:36.730937022 +0000 UTC m=+0.121848279 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 12:37:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:38 np0005535469 nova_compute[254092]: 2025-11-25 17:37:38.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:37:40
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'images']
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:37:40 np0005535469 systemd-logind[791]: New session 53 of user zuul.
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:37:40 np0005535469 systemd[1]: Started Session 53 of User zuul.
Nov 25 12:37:40 np0005535469 nova_compute[254092]: 2025-11-25 17:37:40.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:37:40 np0005535469 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 12:37:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:42 np0005535469 systemd[1]: Reloading.
Nov 25 12:37:42 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 12:37:42 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 12:37:43 np0005535469 systemd[1]: Reloading.
Nov 25 12:37:43 np0005535469 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 12:37:43 np0005535469 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 12:37:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:43 np0005535469 nova_compute[254092]: 2025-11-25 17:37:43.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:43 np0005535469 systemd[1]: Starting Podman API Socket...
Nov 25 12:37:43 np0005535469 systemd[1]: Listening on Podman API Socket.
Nov 25 12:37:44 np0005535469 dbus-broker-launch[775]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 25 12:37:44 np0005535469 systemd[1]: podman.socket: Deactivated successfully.
Nov 25 12:37:44 np0005535469 systemd[1]: Closed Podman API Socket.
Nov 25 12:37:44 np0005535469 systemd[1]: Stopping Podman API Socket...
Nov 25 12:37:44 np0005535469 systemd[1]: Starting Podman API Socket...
Nov 25 12:37:44 np0005535469 systemd[1]: Listening on Podman API Socket.
Nov 25 12:37:44 np0005535469 systemd-logind[791]: New session 54 of user zuul.
Nov 25 12:37:44 np0005535469 systemd[1]: Started Session 54 of User zuul.
Nov 25 12:37:44 np0005535469 systemd[1]: Starting Podman API Service...
Nov 25 12:37:44 np0005535469 systemd[1]: Started Podman API Service.
Nov 25 12:37:44 np0005535469 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 25 12:37:44 np0005535469 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Setting parallel job count to 25"
Nov 25 12:37:44 np0005535469 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Using sqlite as database backend"
Nov 25 12:37:44 np0005535469 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 25 12:37:44 np0005535469 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 25 12:37:44 np0005535469 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 25 12:37:44 np0005535469 podman[440595]: @ - - [25/Nov/2025:17:37:44 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 12:37:44 np0005535469 podman[440595]: @ - - [25/Nov/2025:17:37:44 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24899 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 12:37:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:45 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:37:45.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 12:37:45 np0005535469 nova_compute[254092]: 2025-11-25 17:37:45.697 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:48 np0005535469 nova_compute[254092]: 2025-11-25 17:37:48.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:50 np0005535469 nova_compute[254092]: 2025-11-25 17:37:50.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:37:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:37:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:53 np0005535469 nova_compute[254092]: 2025-11-25 17:37:53.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:37:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:37:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3528567440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:37:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:37:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3528567440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:37:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:55 np0005535469 nova_compute[254092]: 2025-11-25 17:37:55.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:37:57 np0005535469 nova_compute[254092]: 2025-11-25 17:37:57.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:37:57 np0005535469 nova_compute[254092]: 2025-11-25 17:37:57.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:37:57 np0005535469 nova_compute[254092]: 2025-11-25 17:37:57.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:37:57 np0005535469 nova_compute[254092]: 2025-11-25 17:37:57.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:37:57 np0005535469 nova_compute[254092]: 2025-11-25 17:37:57.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 12:37:57 np0005535469 nova_compute[254092]: 2025-11-25 17:37:57.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:37:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:37:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e22c35af-c143-4887-89ea-b10563f13b84 does not exist
Nov 25 12:37:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fd86b1f6-8764-4f48-8191-c0dc37a75412 does not exist
Nov 25 12:37:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a80a8bd1-72ab-4977-97c4-f68de8672e2e does not exist
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:37:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7614586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.158 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.160 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3604MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.229 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.229 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.254 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.475987851 +0000 UTC m=+0.041214503 container create 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:37:58 np0005535469 systemd[1]: Started libpod-conmon-290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd.scope.
Nov 25 12:37:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.459596925 +0000 UTC m=+0.024823587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.566162596 +0000 UTC m=+0.131389238 container init 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.578330478 +0000 UTC m=+0.143557130 container start 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.58134331 +0000 UTC m=+0.146569982 container attach 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:37:58 np0005535469 zealous_euler[440938]: 167 167
Nov 25 12:37:58 np0005535469 systemd[1]: libpod-290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd.scope: Deactivated successfully.
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.585423991 +0000 UTC m=+0.150650673 container died 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:37:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d810108bbb9960d985fbace81c7b86b75b749265bfe3bd88225c5ae645fa3070-merged.mount: Deactivated successfully.
Nov 25 12:37:58 np0005535469 podman[440922]: 2025-11-25 17:37:58.630983571 +0000 UTC m=+0.196210223 container remove 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:37:58 np0005535469 systemd[1]: libpod-conmon-290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd.scope: Deactivated successfully.
Nov 25 12:37:58 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:37:58 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683460061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.709 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.722 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.723 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:37:58 np0005535469 nova_compute[254092]: 2025-11-25 17:37:58.724 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:37:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:37:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:37:58 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:37:58 np0005535469 podman[440964]: 2025-11-25 17:37:58.801120833 +0000 UTC m=+0.041356368 container create 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:37:58 np0005535469 systemd[1]: Started libpod-conmon-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope.
Nov 25 12:37:58 np0005535469 podman[440964]: 2025-11-25 17:37:58.785273561 +0000 UTC m=+0.025509126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:37:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:37:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:37:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:37:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:37:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:37:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:37:58 np0005535469 podman[440964]: 2025-11-25 17:37:58.909148174 +0000 UTC m=+0.149383829 container init 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:37:58 np0005535469 podman[440964]: 2025-11-25 17:37:58.918724824 +0000 UTC m=+0.158960359 container start 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 12:37:58 np0005535469 podman[440964]: 2025-11-25 17:37:58.922023324 +0000 UTC m=+0.162258899 container attach 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:37:59 np0005535469 podman[440595]: time="2025-11-25T17:37:59Z" level=info msg="Received shutdown.Stop(), terminating!" PID=440595
Nov 25 12:37:59 np0005535469 systemd[1]: podman.service: Deactivated successfully.
Nov 25 12:37:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:37:59 np0005535469 nova_compute[254092]: 2025-11-25 17:37:59.724 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:37:59 np0005535469 nova_compute[254092]: 2025-11-25 17:37:59.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:37:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:00 np0005535469 sad_brahmagupta[440981]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:38:00 np0005535469 sad_brahmagupta[440981]: --> relative data size: 1.0
Nov 25 12:38:00 np0005535469 sad_brahmagupta[440981]: --> All data devices are unavailable
Nov 25 12:38:00 np0005535469 systemd[1]: libpod-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope: Deactivated successfully.
Nov 25 12:38:00 np0005535469 systemd[1]: libpod-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope: Consumed 1.088s CPU time.
Nov 25 12:38:00 np0005535469 podman[440964]: 2025-11-25 17:38:00.076857992 +0000 UTC m=+1.317093527 container died 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 12:38:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54-merged.mount: Deactivated successfully.
Nov 25 12:38:00 np0005535469 podman[440964]: 2025-11-25 17:38:00.190586899 +0000 UTC m=+1.430822444 container remove 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:38:00 np0005535469 systemd[1]: libpod-conmon-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope: Deactivated successfully.
Nov 25 12:38:00 np0005535469 nova_compute[254092]: 2025-11-25 17:38:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:38:00 np0005535469 nova_compute[254092]: 2025-11-25 17:38:00.702 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.854496213 +0000 UTC m=+0.039469806 container create 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:38:00 np0005535469 systemd[1]: Started libpod-conmon-6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a.scope.
Nov 25 12:38:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.835722211 +0000 UTC m=+0.020695814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.946806696 +0000 UTC m=+0.131780309 container init 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.952988304 +0000 UTC m=+0.137961917 container start 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:38:00 np0005535469 goofy_bell[441179]: 167 167
Nov 25 12:38:00 np0005535469 systemd[1]: libpod-6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a.scope: Deactivated successfully.
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.957039095 +0000 UTC m=+0.142012698 container attach 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.957587269 +0000 UTC m=+0.142560852 container died 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:38:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c14b868327b4a186f835ec36ac2ab1539405de6a97bf53860867e4f306856f14-merged.mount: Deactivated successfully.
Nov 25 12:38:00 np0005535469 podman[441162]: 2025-11-25 17:38:00.991613286 +0000 UTC m=+0.176586859 container remove 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:38:01 np0005535469 systemd[1]: libpod-conmon-6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a.scope: Deactivated successfully.
Nov 25 12:38:01 np0005535469 podman[441203]: 2025-11-25 17:38:01.159251989 +0000 UTC m=+0.041460569 container create dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:38:01 np0005535469 systemd[1]: Started libpod-conmon-dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b.scope.
Nov 25 12:38:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:38:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:01 np0005535469 podman[441203]: 2025-11-25 17:38:01.145105984 +0000 UTC m=+0.027314584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:38:01 np0005535469 podman[441203]: 2025-11-25 17:38:01.244577712 +0000 UTC m=+0.126786332 container init dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:38:01 np0005535469 podman[441203]: 2025-11-25 17:38:01.251970474 +0000 UTC m=+0.134179064 container start dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:38:01 np0005535469 podman[441203]: 2025-11-25 17:38:01.25478769 +0000 UTC m=+0.136996290 container attach dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:38:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]: {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:    "0": [
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:        {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "devices": [
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "/dev/loop3"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            ],
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_name": "ceph_lv0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_size": "21470642176",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "name": "ceph_lv0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "tags": {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cluster_name": "ceph",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.crush_device_class": "",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.encrypted": "0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osd_id": "0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.type": "block",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.vdo": "0"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            },
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "type": "block",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "vg_name": "ceph_vg0"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:        }
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:    ],
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:    "1": [
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:        {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "devices": [
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "/dev/loop4"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            ],
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_name": "ceph_lv1",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_size": "21470642176",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "name": "ceph_lv1",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "tags": {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cluster_name": "ceph",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.crush_device_class": "",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.encrypted": "0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osd_id": "1",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.type": "block",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.vdo": "0"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            },
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "type": "block",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "vg_name": "ceph_vg1"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:        }
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:    ],
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:    "2": [
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:        {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "devices": [
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "/dev/loop5"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            ],
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_name": "ceph_lv2",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_size": "21470642176",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "name": "ceph_lv2",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "tags": {
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.cluster_name": "ceph",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.crush_device_class": "",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.encrypted": "0",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osd_id": "2",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.type": "block",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:                "ceph.vdo": "0"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            },
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "type": "block",
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:            "vg_name": "ceph_vg2"
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:        }
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]:    ]
Nov 25 12:38:01 np0005535469 hardcore_ganguly[441219]: }
Nov 25 12:38:02 np0005535469 systemd[1]: libpod-dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b.scope: Deactivated successfully.
Nov 25 12:38:02 np0005535469 podman[441203]: 2025-11-25 17:38:02.013213858 +0000 UTC m=+0.895422448 container died dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:38:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be-merged.mount: Deactivated successfully.
Nov 25 12:38:02 np0005535469 podman[441203]: 2025-11-25 17:38:02.069622543 +0000 UTC m=+0.951831133 container remove dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:38:02 np0005535469 systemd[1]: libpod-conmon-dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b.scope: Deactivated successfully.
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.751150036 +0000 UTC m=+0.059200802 container create 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:38:02 np0005535469 systemd[1]: Started libpod-conmon-66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3.scope.
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.721345695 +0000 UTC m=+0.029396261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:38:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.864475022 +0000 UTC m=+0.172525618 container init 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.873011574 +0000 UTC m=+0.181062090 container start 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.877033673 +0000 UTC m=+0.185084229 container attach 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:38:02 np0005535469 peaceful_jackson[441400]: 167 167
Nov 25 12:38:02 np0005535469 systemd[1]: libpod-66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3.scope: Deactivated successfully.
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.880746104 +0000 UTC m=+0.188796650 container died 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:38:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-12841acdc9f6ff0b914ee087c30d649b9671f2625d433742338cbbc5883c52fa-merged.mount: Deactivated successfully.
Nov 25 12:38:02 np0005535469 podman[441383]: 2025-11-25 17:38:02.932128904 +0000 UTC m=+0.240179450 container remove 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:38:02 np0005535469 systemd[1]: libpod-conmon-66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3.scope: Deactivated successfully.
Nov 25 12:38:03 np0005535469 podman[441423]: 2025-11-25 17:38:03.148763351 +0000 UTC m=+0.050988869 container create a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:38:03 np0005535469 systemd[1]: Started libpod-conmon-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope.
Nov 25 12:38:03 np0005535469 podman[441423]: 2025-11-25 17:38:03.123839842 +0000 UTC m=+0.026065370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:38:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:38:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:38:03 np0005535469 podman[441423]: 2025-11-25 17:38:03.254929921 +0000 UTC m=+0.157155439 container init a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 12:38:03 np0005535469 podman[441423]: 2025-11-25 17:38:03.268279375 +0000 UTC m=+0.170504883 container start a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:38:03 np0005535469 podman[441423]: 2025-11-25 17:38:03.273511907 +0000 UTC m=+0.175737495 container attach a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:38:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:03 np0005535469 nova_compute[254092]: 2025-11-25 17:38:03.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:04 np0005535469 determined_gates[441440]: {
Nov 25 12:38:04 np0005535469 determined_gates[441440]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "osd_id": 1,
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "type": "bluestore"
Nov 25 12:38:04 np0005535469 determined_gates[441440]:    },
Nov 25 12:38:04 np0005535469 determined_gates[441440]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "osd_id": 2,
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "type": "bluestore"
Nov 25 12:38:04 np0005535469 determined_gates[441440]:    },
Nov 25 12:38:04 np0005535469 determined_gates[441440]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "osd_id": 0,
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:38:04 np0005535469 determined_gates[441440]:        "type": "bluestore"
Nov 25 12:38:04 np0005535469 determined_gates[441440]:    }
Nov 25 12:38:04 np0005535469 determined_gates[441440]: }
Nov 25 12:38:04 np0005535469 systemd[1]: libpod-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope: Deactivated successfully.
Nov 25 12:38:04 np0005535469 podman[441423]: 2025-11-25 17:38:04.463686607 +0000 UTC m=+1.365912095 container died a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:38:04 np0005535469 systemd[1]: libpod-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope: Consumed 1.211s CPU time.
Nov 25 12:38:04 np0005535469 nova_compute[254092]: 2025-11-25 17:38:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:04 np0005535469 nova_compute[254092]: 2025-11-25 17:38:04.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:38:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce-merged.mount: Deactivated successfully.
Nov 25 12:38:04 np0005535469 podman[441423]: 2025-11-25 17:38:04.553559975 +0000 UTC m=+1.455785493 container remove a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:38:04 np0005535469 systemd[1]: libpod-conmon-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope: Deactivated successfully.
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:38:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 28754b26-1130-455e-b794-eaa71f2a8d34 does not exist
Nov 25 12:38:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f86d45ee-fd2f-4309-a6a3-2e4e947edd03 does not exist
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:38:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:05 np0005535469 nova_compute[254092]: 2025-11-25 17:38:05.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:06 np0005535469 nova_compute[254092]: 2025-11-25 17:38:06.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:06 np0005535469 nova_compute[254092]: 2025-11-25 17:38:06.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:06 np0005535469 podman[441585]: 2025-11-25 17:38:06.898154851 +0000 UTC m=+0.090802312 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 12:38:06 np0005535469 podman[441584]: 2025-11-25 17:38:06.911108484 +0000 UTC m=+0.103549060 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:38:06 np0005535469 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 12:38:06 np0005535469 systemd[1]: session-53.scope: Consumed 1.709s CPU time.
Nov 25 12:38:06 np0005535469 systemd-logind[791]: Session 53 logged out. Waiting for processes to exit.
Nov 25 12:38:06 np0005535469 systemd-logind[791]: Removed session 53.
Nov 25 12:38:06 np0005535469 podman[441586]: 2025-11-25 17:38:06.951757791 +0000 UTC m=+0.136759164 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 12:38:07 np0005535469 nova_compute[254092]: 2025-11-25 17:38:07.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:07 np0005535469 nova_compute[254092]: 2025-11-25 17:38:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:38:07 np0005535469 nova_compute[254092]: 2025-11-25 17:38:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:38:07 np0005535469 nova_compute[254092]: 2025-11-25 17:38:07.530 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:38:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:07 np0005535469 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 12:38:07 np0005535469 systemd-logind[791]: Session 54 logged out. Waiting for processes to exit.
Nov 25 12:38:07 np0005535469 systemd-logind[791]: Removed session 54.
Nov 25 12:38:08 np0005535469 nova_compute[254092]: 2025-11-25 17:38:08.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:38:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:38:10 np0005535469 nova_compute[254092]: 2025-11-25 17:38:10.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:38:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:38:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:38:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:38:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:38:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:38:13 np0005535469 nova_compute[254092]: 2025-11-25 17:38:13.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:14 np0005535469 nova_compute[254092]: 2025-11-25 17:38:14.524 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:15 np0005535469 nova_compute[254092]: 2025-11-25 17:38:15.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:16 np0005535469 nova_compute[254092]: 2025-11-25 17:38:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:18 np0005535469 nova_compute[254092]: 2025-11-25 17:38:18.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:20 np0005535469 nova_compute[254092]: 2025-11-25 17:38:20.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:23 np0005535469 nova_compute[254092]: 2025-11-25 17:38:23.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:25 np0005535469 nova_compute[254092]: 2025-11-25 17:38:25.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:28 np0005535469 nova_compute[254092]: 2025-11-25 17:38:28.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:30 np0005535469 nova_compute[254092]: 2025-11-25 17:38:30.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:33 np0005535469 nova_compute[254092]: 2025-11-25 17:38:33.800 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:35 np0005535469 nova_compute[254092]: 2025-11-25 17:38:35.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:37 np0005535469 podman[441653]: 2025-11-25 17:38:37.733539675 +0000 UTC m=+0.089080786 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 12:38:37 np0005535469 podman[441652]: 2025-11-25 17:38:37.745915012 +0000 UTC m=+0.108989958 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:38:37 np0005535469 podman[441654]: 2025-11-25 17:38:37.787021271 +0000 UTC m=+0.136802955 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:38:38 np0005535469 nova_compute[254092]: 2025-11-25 17:38:38.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:38:40
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:38:40 np0005535469 nova_compute[254092]: 2025-11-25 17:38:40.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:38:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:38:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:43 np0005535469 nova_compute[254092]: 2025-11-25 17:38:43.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:45 np0005535469 nova_compute[254092]: 2025-11-25 17:38:45.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:48 np0005535469 nova_compute[254092]: 2025-11-25 17:38:48.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:50 np0005535469 nova_compute[254092]: 2025-11-25 17:38:50.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:38:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:38:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:53 np0005535469 nova_compute[254092]: 2025-11-25 17:38:53.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:38:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438611634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:38:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:38:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438611634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:38:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:55 np0005535469 nova_compute[254092]: 2025-11-25 17:38:55.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:58 np0005535469 nova_compute[254092]: 2025-11-25 17:38:58.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.540 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.540 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.541 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.541 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:38:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:38:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:38:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:38:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2122273370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:38:59 np0005535469 nova_compute[254092]: 2025-11-25 17:38:59.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.236 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3650MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.238 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.238 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.315 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.315 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.335 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:39:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087428060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.889 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.896 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.913 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.915 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:39:00 np0005535469 nova_compute[254092]: 2025-11-25 17:39:00.916 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:39:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:02 np0005535469 nova_compute[254092]: 2025-11-25 17:39:02.917 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:03 np0005535469 nova_compute[254092]: 2025-11-25 17:39:03.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4e87997d-1d25-4e7c-902e-f005f91b2dc0 does not exist
Nov 25 12:39:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b3f2b56c-5c45-4458-8bfa-ad609700c756 does not exist
Nov 25 12:39:05 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 07fcd31c-98ec-49be-8e6a-1eb20552b708 does not exist
Nov 25 12:39:05 np0005535469 nova_compute[254092]: 2025-11-25 17:39:05.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:39:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.418707572 +0000 UTC m=+0.042608400 container create 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:39:06 np0005535469 systemd[1]: Started libpod-conmon-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope.
Nov 25 12:39:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:39:06 np0005535469 nova_compute[254092]: 2025-11-25 17:39:06.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.494011162 +0000 UTC m=+0.117912010 container init 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.400351943 +0000 UTC m=+0.024252811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:39:06 np0005535469 nova_compute[254092]: 2025-11-25 17:39:06.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:06 np0005535469 nova_compute[254092]: 2025-11-25 17:39:06.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.501287301 +0000 UTC m=+0.125188129 container start 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.504494828 +0000 UTC m=+0.128395656 container attach 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:39:06 np0005535469 vibrant_shannon[442046]: 167 167
Nov 25 12:39:06 np0005535469 systemd[1]: libpod-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope: Deactivated successfully.
Nov 25 12:39:06 np0005535469 conmon[442046]: conmon 90177fe2be0b7f1a8cf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope/container/memory.events
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.508793165 +0000 UTC m=+0.132694023 container died 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 25 12:39:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-02db7752988c1d92774d568ef3285969c14a6fafe4ab80b2641b2c11f8b1b785-merged.mount: Deactivated successfully.
Nov 25 12:39:06 np0005535469 podman[442030]: 2025-11-25 17:39:06.559158886 +0000 UTC m=+0.183059714 container remove 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:39:06 np0005535469 systemd[1]: libpod-conmon-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope: Deactivated successfully.
Nov 25 12:39:06 np0005535469 podman[442069]: 2025-11-25 17:39:06.724160788 +0000 UTC m=+0.041258514 container create 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:39:06 np0005535469 systemd[1]: Started libpod-conmon-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope.
Nov 25 12:39:06 np0005535469 podman[442069]: 2025-11-25 17:39:06.707195686 +0000 UTC m=+0.024293432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:39:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:39:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:06 np0005535469 podman[442069]: 2025-11-25 17:39:06.823678787 +0000 UTC m=+0.140776543 container init 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:39:06 np0005535469 podman[442069]: 2025-11-25 17:39:06.831527791 +0000 UTC m=+0.148625517 container start 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:39:06 np0005535469 podman[442069]: 2025-11-25 17:39:06.834697027 +0000 UTC m=+0.151794753 container attach 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 12:39:07 np0005535469 nova_compute[254092]: 2025-11-25 17:39:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:07 np0005535469 nova_compute[254092]: 2025-11-25 17:39:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:39:07 np0005535469 nova_compute[254092]: 2025-11-25 17:39:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:39:07 np0005535469 nova_compute[254092]: 2025-11-25 17:39:07.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:39:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:08 np0005535469 ecstatic_mcclintock[442086]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:39:08 np0005535469 ecstatic_mcclintock[442086]: --> relative data size: 1.0
Nov 25 12:39:08 np0005535469 ecstatic_mcclintock[442086]: --> All data devices are unavailable
Nov 25 12:39:08 np0005535469 systemd[1]: libpod-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope: Deactivated successfully.
Nov 25 12:39:08 np0005535469 systemd[1]: libpod-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope: Consumed 1.167s CPU time.
Nov 25 12:39:08 np0005535469 podman[442069]: 2025-11-25 17:39:08.053463327 +0000 UTC m=+1.370561083 container died 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:39:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b-merged.mount: Deactivated successfully.
Nov 25 12:39:08 np0005535469 podman[442069]: 2025-11-25 17:39:08.148415282 +0000 UTC m=+1.465513018 container remove 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:39:08 np0005535469 systemd[1]: libpod-conmon-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope: Deactivated successfully.
Nov 25 12:39:08 np0005535469 podman[442130]: 2025-11-25 17:39:08.194984919 +0000 UTC m=+0.081643584 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 12:39:08 np0005535469 podman[442116]: 2025-11-25 17:39:08.221595074 +0000 UTC m=+0.114731205 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:39:08 np0005535469 podman[442131]: 2025-11-25 17:39:08.228430309 +0000 UTC m=+0.122042812 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 12:39:08 np0005535469 nova_compute[254092]: 2025-11-25 17:39:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.769795578 +0000 UTC m=+0.054101314 container create f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:39:08 np0005535469 systemd[1]: Started libpod-conmon-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope.
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.738844425 +0000 UTC m=+0.023150171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:39:08 np0005535469 nova_compute[254092]: 2025-11-25 17:39:08.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.891510501 +0000 UTC m=+0.175816287 container init f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.904017791 +0000 UTC m=+0.188323527 container start f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.91017527 +0000 UTC m=+0.194481006 container attach f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:39:08 np0005535469 systemd[1]: libpod-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope: Deactivated successfully.
Nov 25 12:39:08 np0005535469 amazing_brahmagupta[442348]: 167 167
Nov 25 12:39:08 np0005535469 conmon[442348]: conmon f859571eb8f84b7b8999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope/container/memory.events
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.915522635 +0000 UTC m=+0.199828371 container died f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:39:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-af7e5ed73d35694b3061cb98a0d57b205a0e0b3b3025b2865c12838145e6dd3d-merged.mount: Deactivated successfully.
Nov 25 12:39:08 np0005535469 podman[442331]: 2025-11-25 17:39:08.982277922 +0000 UTC m=+0.266583628 container remove f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:39:09 np0005535469 systemd[1]: libpod-conmon-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope: Deactivated successfully.
Nov 25 12:39:09 np0005535469 podman[442371]: 2025-11-25 17:39:09.20260677 +0000 UTC m=+0.048087790 container create 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:39:09 np0005535469 systemd[1]: Started libpod-conmon-0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63.scope.
Nov 25 12:39:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:39:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:09 np0005535469 podman[442371]: 2025-11-25 17:39:09.182906704 +0000 UTC m=+0.028387764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:39:09 np0005535469 podman[442371]: 2025-11-25 17:39:09.290701028 +0000 UTC m=+0.136182088 container init 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:39:09 np0005535469 podman[442371]: 2025-11-25 17:39:09.298686156 +0000 UTC m=+0.144167166 container start 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:39:09 np0005535469 podman[442371]: 2025-11-25 17:39:09.306473618 +0000 UTC m=+0.151954678 container attach 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 12:39:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]: {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:    "0": [
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:        {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "devices": [
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "/dev/loop3"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            ],
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_name": "ceph_lv0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_size": "21470642176",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "name": "ceph_lv0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "tags": {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cluster_name": "ceph",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.crush_device_class": "",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.encrypted": "0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osd_id": "0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.type": "block",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.vdo": "0"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            },
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "type": "block",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "vg_name": "ceph_vg0"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:        }
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:    ],
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:    "1": [
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:        {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "devices": [
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "/dev/loop4"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            ],
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_name": "ceph_lv1",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_size": "21470642176",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "name": "ceph_lv1",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "tags": {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cluster_name": "ceph",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.crush_device_class": "",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.encrypted": "0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osd_id": "1",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.type": "block",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.vdo": "0"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            },
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "type": "block",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "vg_name": "ceph_vg1"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:        }
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:    ],
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:    "2": [
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:        {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "devices": [
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "/dev/loop5"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            ],
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_name": "ceph_lv2",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_size": "21470642176",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "name": "ceph_lv2",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "tags": {
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.cluster_name": "ceph",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.crush_device_class": "",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.encrypted": "0",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osd_id": "2",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.type": "block",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:                "ceph.vdo": "0"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            },
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "type": "block",
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:            "vg_name": "ceph_vg2"
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:        }
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]:    ]
Nov 25 12:39:10 np0005535469 magical_goldberg[442387]: }
Nov 25 12:39:10 np0005535469 systemd[1]: libpod-0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63.scope: Deactivated successfully.
Nov 25 12:39:10 np0005535469 podman[442371]: 2025-11-25 17:39:10.080457318 +0000 UTC m=+0.925938338 container died 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:39:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:39:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b-merged.mount: Deactivated successfully.
Nov 25 12:39:10 np0005535469 podman[442371]: 2025-11-25 17:39:10.36149878 +0000 UTC m=+1.206979800 container remove 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:39:10 np0005535469 systemd[1]: libpod-conmon-0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63.scope: Deactivated successfully.
Nov 25 12:39:10 np0005535469 nova_compute[254092]: 2025-11-25 17:39:10.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:11 np0005535469 podman[442550]: 2025-11-25 17:39:11.13253489 +0000 UTC m=+0.089545369 container create 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:39:11 np0005535469 podman[442550]: 2025-11-25 17:39:11.064284142 +0000 UTC m=+0.021294641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:39:11 np0005535469 systemd[1]: Started libpod-conmon-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope.
Nov 25 12:39:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:39:11 np0005535469 podman[442550]: 2025-11-25 17:39:11.350791842 +0000 UTC m=+0.307802321 container init 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:39:11 np0005535469 podman[442550]: 2025-11-25 17:39:11.361226136 +0000 UTC m=+0.318236655 container start 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:39:11 np0005535469 dreamy_chatterjee[442566]: 167 167
Nov 25 12:39:11 np0005535469 systemd[1]: libpod-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope: Deactivated successfully.
Nov 25 12:39:11 np0005535469 conmon[442566]: conmon 8d160eb83e242ede2ec3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope/container/memory.events
Nov 25 12:39:11 np0005535469 podman[442550]: 2025-11-25 17:39:11.438009066 +0000 UTC m=+0.395019585 container attach 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:39:11 np0005535469 podman[442550]: 2025-11-25 17:39:11.438613162 +0000 UTC m=+0.395623681 container died 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 12:39:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6e592d8ea9847b47b87b04a061b70d718b501a3d3b2eec9b14812d47c8bb700d-merged.mount: Deactivated successfully.
Nov 25 12:39:12 np0005535469 podman[442550]: 2025-11-25 17:39:12.066889327 +0000 UTC m=+1.023899836 container remove 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:39:12 np0005535469 systemd[1]: libpod-conmon-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope: Deactivated successfully.
Nov 25 12:39:12 np0005535469 podman[442589]: 2025-11-25 17:39:12.37667218 +0000 UTC m=+0.114625381 container create 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:39:12 np0005535469 podman[442589]: 2025-11-25 17:39:12.305552295 +0000 UTC m=+0.043505516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:39:12 np0005535469 systemd[1]: Started libpod-conmon-7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d.scope.
Nov 25 12:39:12 np0005535469 nova_compute[254092]: 2025-11-25 17:39:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:12 np0005535469 nova_compute[254092]: 2025-11-25 17:39:12.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:39:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:39:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:39:12 np0005535469 podman[442589]: 2025-11-25 17:39:12.550618236 +0000 UTC m=+0.288571467 container init 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:39:12 np0005535469 podman[442589]: 2025-11-25 17:39:12.563144337 +0000 UTC m=+0.301097558 container start 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 12:39:12 np0005535469 podman[442589]: 2025-11-25 17:39:12.625202656 +0000 UTC m=+0.363156147 container attach 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]: {
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "osd_id": 1,
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "type": "bluestore"
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:    },
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "osd_id": 2,
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "type": "bluestore"
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:    },
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "osd_id": 0,
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:        "type": "bluestore"
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]:    }
Nov 25 12:39:13 np0005535469 charming_chandrasekhar[442606]: }
Nov 25 12:39:13 np0005535469 systemd[1]: libpod-7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d.scope: Deactivated successfully.
Nov 25 12:39:13 np0005535469 podman[442589]: 2025-11-25 17:39:13.567267153 +0000 UTC m=+1.305220354 container died 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:39:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:39:13.679 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:39:13.680 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:39:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:39:13.680 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:39:13 np0005535469 nova_compute[254092]: 2025-11-25 17:39:13.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77-merged.mount: Deactivated successfully.
Nov 25 12:39:14 np0005535469 podman[442589]: 2025-11-25 17:39:14.280387067 +0000 UTC m=+2.018340268 container remove 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:39:14 np0005535469 systemd[1]: libpod-conmon-7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d.scope: Deactivated successfully.
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:39:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5c4caf6e-2f8b-4365-b92c-880a73e06c40 does not exist
Nov 25 12:39:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0a5758d5-bdd4-4429-a74b-906eade08a53 does not exist
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:39:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:39:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:15 np0005535469 nova_compute[254092]: 2025-11-25 17:39:15.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:16 np0005535469 nova_compute[254092]: 2025-11-25 17:39:16.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:16 np0005535469 nova_compute[254092]: 2025-11-25 17:39:16.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:39:16 np0005535469 nova_compute[254092]: 2025-11-25 17:39:16.542 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:39:17 np0005535469 nova_compute[254092]: 2025-11-25 17:39:17.519 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:18 np0005535469 nova_compute[254092]: 2025-11-25 17:39:18.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:20 np0005535469 nova_compute[254092]: 2025-11-25 17:39:20.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:23 np0005535469 nova_compute[254092]: 2025-11-25 17:39:23.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:25 np0005535469 nova_compute[254092]: 2025-11-25 17:39:25.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:28 np0005535469 nova_compute[254092]: 2025-11-25 17:39:28.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:30 np0005535469 nova_compute[254092]: 2025-11-25 17:39:30.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:31 np0005535469 nova_compute[254092]: 2025-11-25 17:39:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:33 np0005535469 nova_compute[254092]: 2025-11-25 17:39:33.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:35 np0005535469 nova_compute[254092]: 2025-11-25 17:39:35.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:38 np0005535469 podman[442702]: 2025-11-25 17:39:38.690993521 +0000 UTC m=+0.094911304 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 12:39:38 np0005535469 podman[442701]: 2025-11-25 17:39:38.703471291 +0000 UTC m=+0.108057803 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:39:38 np0005535469 podman[442703]: 2025-11-25 17:39:38.744613851 +0000 UTC m=+0.145004518 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 12:39:38 np0005535469 nova_compute[254092]: 2025-11-25 17:39:38.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:39:40
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:39:40 np0005535469 nova_compute[254092]: 2025-11-25 17:39:40.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:39:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:39:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:43 np0005535469 nova_compute[254092]: 2025-11-25 17:39:43.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:45 np0005535469 nova_compute[254092]: 2025-11-25 17:39:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:48 np0005535469 nova_compute[254092]: 2025-11-25 17:39:48.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:50 np0005535469 nova_compute[254092]: 2025-11-25 17:39:50.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:39:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:39:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:53 np0005535469 nova_compute[254092]: 2025-11-25 17:39:53.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:39:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3355613458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:39:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:39:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3355613458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:39:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:55 np0005535469 nova_compute[254092]: 2025-11-25 17:39:55.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:58 np0005535469 nova_compute[254092]: 2025-11-25 17:39:58.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:39:59 np0005535469 nova_compute[254092]: 2025-11-25 17:39:59.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:59 np0005535469 nova_compute[254092]: 2025-11-25 17:39:59.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:39:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:39:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:00 np0005535469 nova_compute[254092]: 2025-11-25 17:40:00.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:40:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:40:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769921081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:40:01 np0005535469 nova_compute[254092]: 2025-11-25 17:40:01.975 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.165 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.167 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3636MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.167 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.168 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.253 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.253 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.296 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:40:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:40:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543408170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.747 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.752 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.763 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.765 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:40:02 np0005535469 nova_compute[254092]: 2025-11-25 17:40:02.766 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:40:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:03 np0005535469 nova_compute[254092]: 2025-11-25 17:40:03.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:04 np0005535469 nova_compute[254092]: 2025-11-25 17:40:04.767 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.939234) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092404939275, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1593, "num_deletes": 251, "total_data_size": 2580828, "memory_usage": 2624384, "flush_reason": "Manual Compaction"}
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092404959140, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 2545220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71275, "largest_seqno": 72867, "table_properties": {"data_size": 2537733, "index_size": 4493, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15097, "raw_average_key_size": 19, "raw_value_size": 2522906, "raw_average_value_size": 3332, "num_data_blocks": 200, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092229, "oldest_key_time": 1764092229, "file_creation_time": 1764092404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 19952 microseconds, and 10254 cpu microseconds.
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.959185) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 2545220 bytes OK
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.959206) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.960933) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.960946) EVENT_LOG_v1 {"time_micros": 1764092404960942, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.960962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2573938, prev total WAL file size 2573938, number of live WAL files 2.
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.961711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(2485KB)], [167(10152KB)]
Nov 25 12:40:04 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092404961773, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 12941752, "oldest_snapshot_seqno": -1}
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 9008 keys, 11214481 bytes, temperature: kUnknown
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092405027176, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 11214481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11156630, "index_size": 34250, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 237090, "raw_average_key_size": 26, "raw_value_size": 10998145, "raw_average_value_size": 1220, "num_data_blocks": 1324, "num_entries": 9008, "num_filter_entries": 9008, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.027425) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11214481 bytes
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.028679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 171.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.9 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(9.5) write-amplify(4.4) OK, records in: 9522, records dropped: 514 output_compression: NoCompression
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.028699) EVENT_LOG_v1 {"time_micros": 1764092405028689, "job": 104, "event": "compaction_finished", "compaction_time_micros": 65470, "compaction_time_cpu_micros": 29115, "output_level": 6, "num_output_files": 1, "total_output_size": 11214481, "num_input_records": 9522, "num_output_records": 9008, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092405029331, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092405031510, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.961591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:40:05 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:40:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:05 np0005535469 nova_compute[254092]: 2025-11-25 17:40:05.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:06 np0005535469 nova_compute[254092]: 2025-11-25 17:40:06.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:07 np0005535469 nova_compute[254092]: 2025-11-25 17:40:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:07 np0005535469 nova_compute[254092]: 2025-11-25 17:40:07.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:40:07 np0005535469 nova_compute[254092]: 2025-11-25 17:40:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:40:07 np0005535469 nova_compute[254092]: 2025-11-25 17:40:07.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:40:07 np0005535469 nova_compute[254092]: 2025-11-25 17:40:07.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:07 np0005535469 nova_compute[254092]: 2025-11-25 17:40:07.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:40:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:08 np0005535469 nova_compute[254092]: 2025-11-25 17:40:08.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:09 np0005535469 nova_compute[254092]: 2025-11-25 17:40:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:09 np0005535469 podman[442809]: 2025-11-25 17:40:09.687615041 +0000 UTC m=+0.093310741 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:40:09 np0005535469 podman[442810]: 2025-11-25 17:40:09.688784513 +0000 UTC m=+0.088765958 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 12:40:09 np0005535469 podman[442811]: 2025-11-25 17:40:09.737715405 +0000 UTC m=+0.140351242 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 12:40:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:40:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:40:10 np0005535469 nova_compute[254092]: 2025-11-25 17:40:10.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:40:13.681 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:40:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:40:13.682 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:40:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:40:13.682 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:40:14 np0005535469 nova_compute[254092]: 2025-11-25 17:40:14.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:15 np0005535469 podman[443049]: 2025-11-25 17:40:15.424240153 +0000 UTC m=+0.062926695 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:40:15 np0005535469 podman[443049]: 2025-11-25 17:40:15.539241503 +0000 UTC m=+0.177928025 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:40:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:15 np0005535469 nova_compute[254092]: 2025-11-25 17:40:15.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:40:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:40:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:16 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 93c65c62-ac1c-4266-b62d-cfe844726382 does not exist
Nov 25 12:40:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 89099479-2f0f-4907-aaf4-8bd6366e7a96 does not exist
Nov 25 12:40:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 93bf0c59-554c-4c46-b91b-e5e35fa8f057 does not exist
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:40:17 np0005535469 nova_compute[254092]: 2025-11-25 17:40:17.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:40:17 np0005535469 podman[443475]: 2025-11-25 17:40:17.849283261 +0000 UTC m=+0.052479019 container create acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:40:17 np0005535469 systemd[1]: Started libpod-conmon-acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e.scope.
Nov 25 12:40:17 np0005535469 podman[443475]: 2025-11-25 17:40:17.821103054 +0000 UTC m=+0.024298852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:40:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:40:18 np0005535469 podman[443475]: 2025-11-25 17:40:18.000273841 +0000 UTC m=+0.203469609 container init acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:40:18 np0005535469 podman[443475]: 2025-11-25 17:40:18.008089434 +0000 UTC m=+0.211285192 container start acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:40:18 np0005535469 podman[443475]: 2025-11-25 17:40:18.011275781 +0000 UTC m=+0.214471539 container attach acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:40:18 np0005535469 optimistic_black[443491]: 167 167
Nov 25 12:40:18 np0005535469 systemd[1]: libpod-acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e.scope: Deactivated successfully.
Nov 25 12:40:18 np0005535469 podman[443475]: 2025-11-25 17:40:18.016711649 +0000 UTC m=+0.219907437 container died acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:40:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f5e04389922ea911b2381835f99eef2d5a9d0e11033204723007a20d921a9696-merged.mount: Deactivated successfully.
Nov 25 12:40:18 np0005535469 podman[443475]: 2025-11-25 17:40:18.0619253 +0000 UTC m=+0.265121058 container remove acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:40:18 np0005535469 systemd[1]: libpod-conmon-acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e.scope: Deactivated successfully.
Nov 25 12:40:18 np0005535469 podman[443514]: 2025-11-25 17:40:18.276113061 +0000 UTC m=+0.063518710 container create 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 12:40:18 np0005535469 systemd[1]: Started libpod-conmon-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope.
Nov 25 12:40:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:40:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:18 np0005535469 podman[443514]: 2025-11-25 17:40:18.256694862 +0000 UTC m=+0.044100541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:40:18 np0005535469 podman[443514]: 2025-11-25 17:40:18.355479381 +0000 UTC m=+0.142885030 container init 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:40:18 np0005535469 podman[443514]: 2025-11-25 17:40:18.369139443 +0000 UTC m=+0.156545092 container start 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:40:18 np0005535469 podman[443514]: 2025-11-25 17:40:18.372089934 +0000 UTC m=+0.159495603 container attach 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:40:19 np0005535469 nova_compute[254092]: 2025-11-25 17:40:19.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:19 np0005535469 recursing_noether[443530]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:40:19 np0005535469 recursing_noether[443530]: --> relative data size: 1.0
Nov 25 12:40:19 np0005535469 recursing_noether[443530]: --> All data devices are unavailable
Nov 25 12:40:19 np0005535469 systemd[1]: libpod-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope: Deactivated successfully.
Nov 25 12:40:19 np0005535469 systemd[1]: libpod-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope: Consumed 1.015s CPU time.
Nov 25 12:40:19 np0005535469 nova_compute[254092]: 2025-11-25 17:40:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:40:19 np0005535469 podman[443559]: 2025-11-25 17:40:19.516763656 +0000 UTC m=+0.049399206 container died 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 12:40:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038-merged.mount: Deactivated successfully.
Nov 25 12:40:19 np0005535469 podman[443559]: 2025-11-25 17:40:19.62122895 +0000 UTC m=+0.153864440 container remove 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:40:19 np0005535469 systemd[1]: libpod-conmon-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope: Deactivated successfully.
Nov 25 12:40:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.335118395 +0000 UTC m=+0.044787151 container create 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:40:20 np0005535469 systemd[1]: Started libpod-conmon-7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416.scope.
Nov 25 12:40:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.409385806 +0000 UTC m=+0.119054612 container init 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.318484292 +0000 UTC m=+0.028153068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.421815185 +0000 UTC m=+0.131483951 container start 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.425208827 +0000 UTC m=+0.134877593 container attach 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:40:20 np0005535469 intelligent_ride[443731]: 167 167
Nov 25 12:40:20 np0005535469 systemd[1]: libpod-7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416.scope: Deactivated successfully.
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.428215089 +0000 UTC m=+0.137883875 container died 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:40:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c75429d6f1f23969e3973ab35fbc6093cfad6d20f3381b04f2a5e2aad257ee27-merged.mount: Deactivated successfully.
Nov 25 12:40:20 np0005535469 podman[443714]: 2025-11-25 17:40:20.484933333 +0000 UTC m=+0.194602119 container remove 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:40:20 np0005535469 systemd[1]: libpod-conmon-7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416.scope: Deactivated successfully.
Nov 25 12:40:20 np0005535469 podman[443755]: 2025-11-25 17:40:20.741346264 +0000 UTC m=+0.073640066 container create e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 12:40:20 np0005535469 systemd[1]: Started libpod-conmon-e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7.scope.
Nov 25 12:40:20 np0005535469 podman[443755]: 2025-11-25 17:40:20.715437198 +0000 UTC m=+0.047730980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:40:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:40:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:20 np0005535469 podman[443755]: 2025-11-25 17:40:20.857575137 +0000 UTC m=+0.189868939 container init e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:40:20 np0005535469 podman[443755]: 2025-11-25 17:40:20.870822288 +0000 UTC m=+0.203116090 container start e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:40:20 np0005535469 podman[443755]: 2025-11-25 17:40:20.875601479 +0000 UTC m=+0.207895281 container attach e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:40:20 np0005535469 nova_compute[254092]: 2025-11-25 17:40:20.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]: {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:    "0": [
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:        {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "devices": [
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "/dev/loop3"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            ],
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_name": "ceph_lv0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_size": "21470642176",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "name": "ceph_lv0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "tags": {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cluster_name": "ceph",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.crush_device_class": "",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.encrypted": "0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osd_id": "0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.type": "block",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.vdo": "0"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            },
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "type": "block",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "vg_name": "ceph_vg0"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:        }
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:    ],
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:    "1": [
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:        {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "devices": [
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "/dev/loop4"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            ],
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_name": "ceph_lv1",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_size": "21470642176",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "name": "ceph_lv1",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "tags": {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cluster_name": "ceph",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.crush_device_class": "",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.encrypted": "0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osd_id": "1",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.type": "block",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.vdo": "0"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            },
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "type": "block",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "vg_name": "ceph_vg1"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:        }
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:    ],
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:    "2": [
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:        {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "devices": [
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "/dev/loop5"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            ],
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_name": "ceph_lv2",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_size": "21470642176",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "name": "ceph_lv2",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "tags": {
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.cluster_name": "ceph",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.crush_device_class": "",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.encrypted": "0",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osd_id": "2",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.type": "block",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:                "ceph.vdo": "0"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            },
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "type": "block",
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:            "vg_name": "ceph_vg2"
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:        }
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]:    ]
Nov 25 12:40:21 np0005535469 pensive_boyd[443771]: }
Nov 25 12:40:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:21 np0005535469 systemd[1]: libpod-e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7.scope: Deactivated successfully.
Nov 25 12:40:21 np0005535469 podman[443755]: 2025-11-25 17:40:21.68598562 +0000 UTC m=+1.018279422 container died e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:40:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382-merged.mount: Deactivated successfully.
Nov 25 12:40:21 np0005535469 podman[443755]: 2025-11-25 17:40:21.759153942 +0000 UTC m=+1.091447704 container remove e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:40:21 np0005535469 systemd[1]: libpod-conmon-e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7.scope: Deactivated successfully.
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.489546036 +0000 UTC m=+0.047684059 container create d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:40:22 np0005535469 systemd[1]: Started libpod-conmon-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope.
Nov 25 12:40:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.463692962 +0000 UTC m=+0.021831035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.568070693 +0000 UTC m=+0.126208706 container init d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.575869906 +0000 UTC m=+0.134007909 container start d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.57932021 +0000 UTC m=+0.137458203 container attach d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 12:40:22 np0005535469 fervent_bhaskara[443952]: 167 167
Nov 25 12:40:22 np0005535469 systemd[1]: libpod-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope: Deactivated successfully.
Nov 25 12:40:22 np0005535469 conmon[443952]: conmon d7e1480a529a643d0a45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope/container/memory.events
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.587608685 +0000 UTC m=+0.145746678 container died d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:40:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4e15deaa9fe9fa526399079d2caca684e7895b7f3ccc0f6ca02041984fb97c27-merged.mount: Deactivated successfully.
Nov 25 12:40:22 np0005535469 podman[443935]: 2025-11-25 17:40:22.634226425 +0000 UTC m=+0.192364418 container remove d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:40:22 np0005535469 systemd[1]: libpod-conmon-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope: Deactivated successfully.
Nov 25 12:40:22 np0005535469 podman[443976]: 2025-11-25 17:40:22.797712235 +0000 UTC m=+0.046976040 container create 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:40:22 np0005535469 systemd[1]: Started libpod-conmon-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope.
Nov 25 12:40:22 np0005535469 podman[443976]: 2025-11-25 17:40:22.778529473 +0000 UTC m=+0.027793298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:40:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:40:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:22 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:40:22 np0005535469 podman[443976]: 2025-11-25 17:40:22.921106924 +0000 UTC m=+0.170370749 container init 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:40:22 np0005535469 podman[443976]: 2025-11-25 17:40:22.940496162 +0000 UTC m=+0.189759967 container start 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:40:22 np0005535469 podman[443976]: 2025-11-25 17:40:22.944230194 +0000 UTC m=+0.193494029 container attach 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:40:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]: {
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "osd_id": 1,
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "type": "bluestore"
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:    },
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "osd_id": 2,
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "type": "bluestore"
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:    },
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "osd_id": 0,
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:        "type": "bluestore"
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]:    }
Nov 25 12:40:24 np0005535469 serene_ganguly[443992]: }
Nov 25 12:40:24 np0005535469 systemd[1]: libpod-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope: Deactivated successfully.
Nov 25 12:40:24 np0005535469 podman[443976]: 2025-11-25 17:40:24.073599859 +0000 UTC m=+1.322863664 container died 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:40:24 np0005535469 systemd[1]: libpod-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope: Consumed 1.143s CPU time.
Nov 25 12:40:24 np0005535469 nova_compute[254092]: 2025-11-25 17:40:24.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202-merged.mount: Deactivated successfully.
Nov 25 12:40:24 np0005535469 podman[443976]: 2025-11-25 17:40:24.154721627 +0000 UTC m=+1.403985442 container remove 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:40:24 np0005535469 systemd[1]: libpod-conmon-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope: Deactivated successfully.
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cc7092b0-6279-429b-9cd0-4462426d49b9 does not exist
Nov 25 12:40:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 66aa4a81-168e-49fe-86e2-d5d60175d329 does not exist
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:40:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:25 np0005535469 nova_compute[254092]: 2025-11-25 17:40:25.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:29 np0005535469 nova_compute[254092]: 2025-11-25 17:40:29.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:30 np0005535469 nova_compute[254092]: 2025-11-25 17:40:30.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:34 np0005535469 nova_compute[254092]: 2025-11-25 17:40:34.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:35 np0005535469 nova_compute[254092]: 2025-11-25 17:40:35.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:39 np0005535469 nova_compute[254092]: 2025-11-25 17:40:39.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:40:40
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:40:40 np0005535469 podman[444089]: 2025-11-25 17:40:40.696943809 +0000 UTC m=+0.097409952 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:40:40 np0005535469 podman[444088]: 2025-11-25 17:40:40.702284564 +0000 UTC m=+0.104946467 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:40:40 np0005535469 podman[444090]: 2025-11-25 17:40:40.746751055 +0000 UTC m=+0.138172442 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:40:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:40:41 np0005535469 nova_compute[254092]: 2025-11-25 17:40:41.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:44 np0005535469 nova_compute[254092]: 2025-11-25 17:40:44.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:46 np0005535469 nova_compute[254092]: 2025-11-25 17:40:46.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:49 np0005535469 nova_compute[254092]: 2025-11-25 17:40:49.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:51 np0005535469 nova_compute[254092]: 2025-11-25 17:40:51.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:40:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:40:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:54 np0005535469 nova_compute[254092]: 2025-11-25 17:40:54.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:40:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:40:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508926618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:40:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:40:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508926618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:40:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:56 np0005535469 nova_compute[254092]: 2025-11-25 17:40:56.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:59 np0005535469 nova_compute[254092]: 2025-11-25 17:40:59.232 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:40:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:40:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:00 np0005535469 nova_compute[254092]: 2025-11-25 17:41:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:01 np0005535469 nova_compute[254092]: 2025-11-25 17:41:01.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:01 np0005535469 nova_compute[254092]: 2025-11-25 17:41:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:41:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:41:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258336635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:41:02 np0005535469 nova_compute[254092]: 2025-11-25 17:41:02.967 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.129 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.130 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.131 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.131 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.396 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.502 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.502 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.518 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.555 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:41:03 np0005535469 nova_compute[254092]: 2025-11-25 17:41:03.570 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:41:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:41:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491037390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:41:04 np0005535469 nova_compute[254092]: 2025-11-25 17:41:04.014 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:41:04 np0005535469 nova_compute[254092]: 2025-11-25 17:41:04.020 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:41:04 np0005535469 nova_compute[254092]: 2025-11-25 17:41:04.034 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:41:04 np0005535469 nova_compute[254092]: 2025-11-25 17:41:04.035 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:41:04 np0005535469 nova_compute[254092]: 2025-11-25 17:41:04.035 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:41:04 np0005535469 nova_compute[254092]: 2025-11-25 17:41:04.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:06 np0005535469 nova_compute[254092]: 2025-11-25 17:41:06.036 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:06 np0005535469 nova_compute[254092]: 2025-11-25 17:41:06.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:07 np0005535469 nova_compute[254092]: 2025-11-25 17:41:07.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:07 np0005535469 nova_compute[254092]: 2025-11-25 17:41:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:07 np0005535469 nova_compute[254092]: 2025-11-25 17:41:07.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:41:07 np0005535469 nova_compute[254092]: 2025-11-25 17:41:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:41:07 np0005535469 nova_compute[254092]: 2025-11-25 17:41:07.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:41:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:09 np0005535469 nova_compute[254092]: 2025-11-25 17:41:09.287 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:09 np0005535469 nova_compute[254092]: 2025-11-25 17:41:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:09 np0005535469 nova_compute[254092]: 2025-11-25 17:41:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:09 np0005535469 nova_compute[254092]: 2025-11-25 17:41:09.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:41:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:41:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:41:11 np0005535469 nova_compute[254092]: 2025-11-25 17:41:11.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:11 np0005535469 podman[444192]: 2025-11-25 17:41:11.655573296 +0000 UTC m=+0.058548985 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:41:11 np0005535469 podman[444193]: 2025-11-25 17:41:11.677280586 +0000 UTC m=+0.079189696 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 12:41:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:11 np0005535469 podman[444194]: 2025-11-25 17:41:11.76848602 +0000 UTC m=+0.166618208 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 25 12:41:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:41:13.683 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:41:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:41:13.684 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:41:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:41:13.684 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:41:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:14 np0005535469 nova_compute[254092]: 2025-11-25 17:41:14.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:16 np0005535469 nova_compute[254092]: 2025-11-25 17:41:16.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:19 np0005535469 nova_compute[254092]: 2025-11-25 17:41:19.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:21 np0005535469 nova_compute[254092]: 2025-11-25 17:41:21.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:21 np0005535469 nova_compute[254092]: 2025-11-25 17:41:21.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:41:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:24 np0005535469 nova_compute[254092]: 2025-11-25 17:41:24.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:41:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 739cb68b-3ced-4df6-95b1-18b0d8d27e4c does not exist
Nov 25 12:41:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a2cc8955-3feb-421e-89df-68b782729b4e does not exist
Nov 25 12:41:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e7786b4c-6df7-4682-98a9-169083e46e57 does not exist
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:41:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:41:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:41:25 np0005535469 podman[444524]: 2025-11-25 17:41:25.904879584 +0000 UTC m=+0.041310786 container create c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:41:25 np0005535469 systemd[1]: Started libpod-conmon-c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef.scope.
Nov 25 12:41:25 np0005535469 podman[444524]: 2025-11-25 17:41:25.888147208 +0000 UTC m=+0.024578390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:41:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:41:26 np0005535469 podman[444524]: 2025-11-25 17:41:26.019038262 +0000 UTC m=+0.155469504 container init c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:41:26 np0005535469 podman[444524]: 2025-11-25 17:41:26.033623239 +0000 UTC m=+0.170054401 container start c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:41:26 np0005535469 podman[444524]: 2025-11-25 17:41:26.037158695 +0000 UTC m=+0.173589887 container attach c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:41:26 np0005535469 cranky_napier[444540]: 167 167
Nov 25 12:41:26 np0005535469 systemd[1]: libpod-c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef.scope: Deactivated successfully.
Nov 25 12:41:26 np0005535469 podman[444524]: 2025-11-25 17:41:26.042601173 +0000 UTC m=+0.179032335 container died c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:41:26 np0005535469 nova_compute[254092]: 2025-11-25 17:41:26.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4a09bb60cc1efe9f08c3004f4cf75a6deb8c3af6b006926e1a64569115353976-merged.mount: Deactivated successfully.
Nov 25 12:41:26 np0005535469 podman[444524]: 2025-11-25 17:41:26.089847169 +0000 UTC m=+0.226278331 container remove c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:41:26 np0005535469 systemd[1]: libpod-conmon-c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef.scope: Deactivated successfully.
Nov 25 12:41:26 np0005535469 podman[444564]: 2025-11-25 17:41:26.276927352 +0000 UTC m=+0.048871642 container create f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:41:26 np0005535469 systemd[1]: Started libpod-conmon-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope.
Nov 25 12:41:26 np0005535469 podman[444564]: 2025-11-25 17:41:26.253099513 +0000 UTC m=+0.025043823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:41:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:41:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:26 np0005535469 podman[444564]: 2025-11-25 17:41:26.372004871 +0000 UTC m=+0.143949201 container init f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:41:26 np0005535469 podman[444564]: 2025-11-25 17:41:26.389490757 +0000 UTC m=+0.161435037 container start f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:41:26 np0005535469 podman[444564]: 2025-11-25 17:41:26.393693592 +0000 UTC m=+0.165637872 container attach f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:41:27 np0005535469 hungry_margulis[444581]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:41:27 np0005535469 hungry_margulis[444581]: --> relative data size: 1.0
Nov 25 12:41:27 np0005535469 hungry_margulis[444581]: --> All data devices are unavailable
Nov 25 12:41:27 np0005535469 systemd[1]: libpod-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope: Deactivated successfully.
Nov 25 12:41:27 np0005535469 systemd[1]: libpod-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope: Consumed 1.051s CPU time.
Nov 25 12:41:27 np0005535469 podman[444610]: 2025-11-25 17:41:27.521443932 +0000 UTC m=+0.021384503 container died f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:41:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8-merged.mount: Deactivated successfully.
Nov 25 12:41:27 np0005535469 podman[444610]: 2025-11-25 17:41:27.573511 +0000 UTC m=+0.073451540 container remove f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:41:27 np0005535469 systemd[1]: libpod-conmon-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope: Deactivated successfully.
Nov 25 12:41:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:29 np0005535469 nova_compute[254092]: 2025-11-25 17:41:29.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:29 np0005535469 podman[444766]: 2025-11-25 17:41:29.985397361 +0000 UTC m=+0.062798511 container create 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:41:30 np0005535469 systemd[1]: Started libpod-conmon-822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622.scope.
Nov 25 12:41:30 np0005535469 podman[444766]: 2025-11-25 17:41:29.951375185 +0000 UTC m=+0.028776385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:41:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:41:30 np0005535469 podman[444766]: 2025-11-25 17:41:30.075319049 +0000 UTC m=+0.152720159 container init 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:41:30 np0005535469 podman[444766]: 2025-11-25 17:41:30.089749141 +0000 UTC m=+0.167150261 container start 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:41:30 np0005535469 podman[444766]: 2025-11-25 17:41:30.094304396 +0000 UTC m=+0.171705526 container attach 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:41:30 np0005535469 youthful_lehmann[444782]: 167 167
Nov 25 12:41:30 np0005535469 systemd[1]: libpod-822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622.scope: Deactivated successfully.
Nov 25 12:41:30 np0005535469 podman[444766]: 2025-11-25 17:41:30.100749121 +0000 UTC m=+0.178150281 container died 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:41:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0348457f91cc1f73452aef8a90366c99b2f793d12f6ff57aa1d150c1031475cb-merged.mount: Deactivated successfully.
Nov 25 12:41:30 np0005535469 podman[444766]: 2025-11-25 17:41:30.153122317 +0000 UTC m=+0.230523467 container remove 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:41:30 np0005535469 systemd[1]: libpod-conmon-822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622.scope: Deactivated successfully.
Nov 25 12:41:30 np0005535469 podman[444805]: 2025-11-25 17:41:30.404696895 +0000 UTC m=+0.077928362 container create 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:41:30 np0005535469 podman[444805]: 2025-11-25 17:41:30.363244627 +0000 UTC m=+0.036476144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:41:30 np0005535469 systemd[1]: Started libpod-conmon-01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f.scope.
Nov 25 12:41:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:41:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:30 np0005535469 podman[444805]: 2025-11-25 17:41:30.510347861 +0000 UTC m=+0.183579378 container init 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:41:30 np0005535469 podman[444805]: 2025-11-25 17:41:30.51982951 +0000 UTC m=+0.193060977 container start 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:41:30 np0005535469 podman[444805]: 2025-11-25 17:41:30.524336622 +0000 UTC m=+0.197568089 container attach 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:41:31 np0005535469 nova_compute[254092]: 2025-11-25 17:41:31.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]: {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:    "0": [
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:        {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "devices": [
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "/dev/loop3"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            ],
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_name": "ceph_lv0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_size": "21470642176",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "name": "ceph_lv0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "tags": {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cluster_name": "ceph",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.crush_device_class": "",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.encrypted": "0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osd_id": "0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.type": "block",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.vdo": "0"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            },
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "type": "block",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "vg_name": "ceph_vg0"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:        }
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:    ],
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:    "1": [
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:        {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "devices": [
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "/dev/loop4"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            ],
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_name": "ceph_lv1",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_size": "21470642176",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "name": "ceph_lv1",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "tags": {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cluster_name": "ceph",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.crush_device_class": "",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.encrypted": "0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osd_id": "1",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.type": "block",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.vdo": "0"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            },
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "type": "block",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "vg_name": "ceph_vg1"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:        }
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:    ],
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:    "2": [
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:        {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "devices": [
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "/dev/loop5"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            ],
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_name": "ceph_lv2",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_size": "21470642176",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "name": "ceph_lv2",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "tags": {
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.cluster_name": "ceph",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.crush_device_class": "",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.encrypted": "0",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osd_id": "2",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.type": "block",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:                "ceph.vdo": "0"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            },
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "type": "block",
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:            "vg_name": "ceph_vg2"
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:        }
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]:    ]
Nov 25 12:41:31 np0005535469 naughty_khayyam[444821]: }
Nov 25 12:41:31 np0005535469 systemd[1]: libpod-01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f.scope: Deactivated successfully.
Nov 25 12:41:31 np0005535469 podman[444805]: 2025-11-25 17:41:31.448221134 +0000 UTC m=+1.121452561 container died 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:41:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574-merged.mount: Deactivated successfully.
Nov 25 12:41:31 np0005535469 podman[444805]: 2025-11-25 17:41:31.518059495 +0000 UTC m=+1.191290922 container remove 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 12:41:31 np0005535469 systemd[1]: libpod-conmon-01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f.scope: Deactivated successfully.
Nov 25 12:41:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.361410184 +0000 UTC m=+0.063141460 container create fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:41:32 np0005535469 systemd[1]: Started libpod-conmon-fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36.scope.
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.339495797 +0000 UTC m=+0.041227063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:41:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.465164989 +0000 UTC m=+0.166896265 container init fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.476140798 +0000 UTC m=+0.177872074 container start fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.480329391 +0000 UTC m=+0.182060677 container attach fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:41:32 np0005535469 jolly_cerf[445000]: 167 167
Nov 25 12:41:32 np0005535469 systemd[1]: libpod-fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36.scope: Deactivated successfully.
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.486146229 +0000 UTC m=+0.187877495 container died fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:41:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-da1cb7399f8e9329939b1ea30cf8e6bcd6fbab99a0d2849f8cbd9f3f7c3e81c5-merged.mount: Deactivated successfully.
Nov 25 12:41:32 np0005535469 podman[444984]: 2025-11-25 17:41:32.531345401 +0000 UTC m=+0.233076677 container remove fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:41:32 np0005535469 systemd[1]: libpod-conmon-fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36.scope: Deactivated successfully.
Nov 25 12:41:32 np0005535469 podman[445024]: 2025-11-25 17:41:32.785713855 +0000 UTC m=+0.069030770 container create ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 12:41:32 np0005535469 systemd[1]: Started libpod-conmon-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope.
Nov 25 12:41:32 np0005535469 podman[445024]: 2025-11-25 17:41:32.759192553 +0000 UTC m=+0.042509448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:41:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:41:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:32 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:41:32 np0005535469 podman[445024]: 2025-11-25 17:41:32.892122172 +0000 UTC m=+0.175439067 container init ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:41:32 np0005535469 podman[445024]: 2025-11-25 17:41:32.903948084 +0000 UTC m=+0.187264989 container start ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:41:32 np0005535469 podman[445024]: 2025-11-25 17:41:32.908251201 +0000 UTC m=+0.191568096 container attach ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:41:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:34 np0005535469 gracious_nash[445042]: {
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "osd_id": 1,
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "type": "bluestore"
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:    },
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "osd_id": 2,
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "type": "bluestore"
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:    },
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "osd_id": 0,
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:        "type": "bluestore"
Nov 25 12:41:34 np0005535469 gracious_nash[445042]:    }
Nov 25 12:41:34 np0005535469 gracious_nash[445042]: }
Nov 25 12:41:34 np0005535469 systemd[1]: libpod-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope: Deactivated successfully.
Nov 25 12:41:34 np0005535469 systemd[1]: libpod-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope: Consumed 1.227s CPU time.
Nov 25 12:41:34 np0005535469 conmon[445042]: conmon ddfe38a8b38910a93e44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope/container/memory.events
Nov 25 12:41:34 np0005535469 podman[445024]: 2025-11-25 17:41:34.126743003 +0000 UTC m=+1.410059928 container died ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:41:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3-merged.mount: Deactivated successfully.
Nov 25 12:41:34 np0005535469 podman[445024]: 2025-11-25 17:41:34.211194882 +0000 UTC m=+1.494511787 container remove ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:41:34 np0005535469 systemd[1]: libpod-conmon-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope: Deactivated successfully.
Nov 25 12:41:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:41:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:41:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:41:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:41:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev af0f5bfd-7dad-492d-969f-f9557dc4c52e does not exist
Nov 25 12:41:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 245025a6-7ad5-449c-b68e-2461cc619c86 does not exist
Nov 25 12:41:34 np0005535469 nova_compute[254092]: 2025-11-25 17:41:34.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:41:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:41:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:36 np0005535469 nova_compute[254092]: 2025-11-25 17:41:36.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:41:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1355 writes, 6111 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     37.3      2.37              0.31        52    0.046       0      0       0.0       0.0
  L6      1/0   10.69 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9    125.5    106.4      4.07              1.31        51    0.080    352K    27K       0.0       0.0
 Sum      1/0   10.69 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     79.3     81.0      6.44              1.62       103    0.063    352K    27K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.8    140.4    142.1      0.41              0.22        10    0.041     46K   2562       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    125.5    106.4      4.07              1.31        51    0.080    352K    27K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     38.0      2.32              0.31        51    0.046       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6600.1 total, 600.0 interval
Flush(GB): cumulative 0.086, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.51 GB write, 0.08 MB/s write, 0.50 GB read, 0.08 MB/s read, 6.4 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 59.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000827 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3858,57.26 MB,18.8366%) FilterBlock(104,978.61 KB,0.314366%) IndexBlock(104,1.54 MB,0.506737%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 12:41:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:39 np0005535469 nova_compute[254092]: 2025-11-25 17:41:39.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:41:40
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images']
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:41:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:41:41 np0005535469 nova_compute[254092]: 2025-11-25 17:41:41.053 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:42 np0005535469 podman[445140]: 2025-11-25 17:41:42.656123643 +0000 UTC m=+0.068812425 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:41:42 np0005535469 podman[445139]: 2025-11-25 17:41:42.666547396 +0000 UTC m=+0.079268679 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:41:42 np0005535469 podman[445141]: 2025-11-25 17:41:42.694378594 +0000 UTC m=+0.107100337 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:41:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:44 np0005535469 nova_compute[254092]: 2025-11-25 17:41:44.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:46 np0005535469 nova_compute[254092]: 2025-11-25 17:41:46.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:49 np0005535469 nova_compute[254092]: 2025-11-25 17:41:49.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:51 np0005535469 nova_compute[254092]: 2025-11-25 17:41:51.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:41:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:41:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:54 np0005535469 nova_compute[254092]: 2025-11-25 17:41:54.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:41:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:41:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972227040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:41:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:41:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972227040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:41:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:56 np0005535469 nova_compute[254092]: 2025-11-25 17:41:56.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:59 np0005535469 nova_compute[254092]: 2025-11-25 17:41:59.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:41:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:41:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:00 np0005535469 nova_compute[254092]: 2025-11-25 17:42:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:01 np0005535469 nova_compute[254092]: 2025-11-25 17:42:01.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:01 np0005535469 nova_compute[254092]: 2025-11-25 17:42:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:02 np0005535469 nova_compute[254092]: 2025-11-25 17:42:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:02 np0005535469 nova_compute[254092]: 2025-11-25 17:42:02.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:42:02 np0005535469 nova_compute[254092]: 2025-11-25 17:42:02.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:42:02 np0005535469 nova_compute[254092]: 2025-11-25 17:42:02.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:42:02 np0005535469 nova_compute[254092]: 2025-11-25 17:42:02.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:42:02 np0005535469 nova_compute[254092]: 2025-11-25 17:42:02.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:42:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:42:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716502427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.007 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.237 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.239 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3637MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.239 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.239 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.317 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.532 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:42:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:42:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427291694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.970 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.977 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.995 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.998 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:42:03 np0005535469 nova_compute[254092]: 2025-11-25 17:42:03.998 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:42:04 np0005535469 nova_compute[254092]: 2025-11-25 17:42:04.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:06 np0005535469 nova_compute[254092]: 2025-11-25 17:42:06.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:08 np0005535469 nova_compute[254092]: 2025-11-25 17:42:08.000 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:08 np0005535469 nova_compute[254092]: 2025-11-25 17:42:08.001 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:08 np0005535469 nova_compute[254092]: 2025-11-25 17:42:08.001 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:42:08 np0005535469 nova_compute[254092]: 2025-11-25 17:42:08.001 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:42:08 np0005535469 nova_compute[254092]: 2025-11-25 17:42:08.021 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:42:08 np0005535469 nova_compute[254092]: 2025-11-25 17:42:08.021 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:09 np0005535469 nova_compute[254092]: 2025-11-25 17:42:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:09 np0005535469 nova_compute[254092]: 2025-11-25 17:42:09.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:09 np0005535469 nova_compute[254092]: 2025-11-25 17:42:09.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:42:09 np0005535469 nova_compute[254092]: 2025-11-25 17:42:09.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.933457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529933515, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1252, "num_deletes": 250, "total_data_size": 1931572, "memory_usage": 1954920, "flush_reason": "Manual Compaction"}
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529941227, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1141350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72868, "largest_seqno": 74119, "table_properties": {"data_size": 1136815, "index_size": 1994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11845, "raw_average_key_size": 20, "raw_value_size": 1126971, "raw_average_value_size": 1963, "num_data_blocks": 91, "num_entries": 574, "num_filter_entries": 574, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092405, "oldest_key_time": 1764092405, "file_creation_time": 1764092529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 7803 microseconds, and 3546 cpu microseconds.
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.941266) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1141350 bytes OK
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.941287) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.942461) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.942473) EVENT_LOG_v1 {"time_micros": 1764092529942469, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.942492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1925918, prev total WAL file size 1925918, number of live WAL files 2.
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.943203) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303131' seq:72057594037927935, type:22 .. '6D6772737461740033323632' seq:0, type:0; will stop at (end)
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1114KB)], [170(10MB)]
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529943236, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12355831, "oldest_snapshot_seqno": -1}
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 9127 keys, 9805417 bytes, temperature: kUnknown
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529995384, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 9805417, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9749810, "index_size": 31686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 239682, "raw_average_key_size": 26, "raw_value_size": 9592262, "raw_average_value_size": 1050, "num_data_blocks": 1221, "num_entries": 9127, "num_filter_entries": 9127, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.995923) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9805417 bytes
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.997181) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.0 rd, 187.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.6) OK, records in: 9582, records dropped: 455 output_compression: NoCompression
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.997214) EVENT_LOG_v1 {"time_micros": 1764092529997197, "job": 106, "event": "compaction_finished", "compaction_time_micros": 52350, "compaction_time_cpu_micros": 23547, "output_level": 6, "num_output_files": 1, "total_output_size": 9805417, "num_input_records": 9582, "num_output_records": 9127, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:42:09 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529997788, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092530001785, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.943088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:42:10 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:42:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:42:11 np0005535469 nova_compute[254092]: 2025-11-25 17:42:11.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:42:13.684 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:42:13.685 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:42:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:42:13.685 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:42:13 np0005535469 podman[445246]: 2025-11-25 17:42:13.715865541 +0000 UTC m=+0.103432617 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:42:13 np0005535469 podman[445245]: 2025-11-25 17:42:13.727717973 +0000 UTC m=+0.121469858 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 25 12:42:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:13 np0005535469 podman[445247]: 2025-11-25 17:42:13.790922824 +0000 UTC m=+0.171736796 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:42:14 np0005535469 nova_compute[254092]: 2025-11-25 17:42:14.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:16 np0005535469 nova_compute[254092]: 2025-11-25 17:42:16.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:19 np0005535469 nova_compute[254092]: 2025-11-25 17:42:19.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:21 np0005535469 nova_compute[254092]: 2025-11-25 17:42:21.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:21 np0005535469 nova_compute[254092]: 2025-11-25 17:42:21.493 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:21 np0005535469 nova_compute[254092]: 2025-11-25 17:42:21.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:42:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:24 np0005535469 nova_compute[254092]: 2025-11-25 17:42:24.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:26 np0005535469 nova_compute[254092]: 2025-11-25 17:42:26.070 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:29 np0005535469 nova_compute[254092]: 2025-11-25 17:42:29.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:31 np0005535469 nova_compute[254092]: 2025-11-25 17:42:31.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:34 np0005535469 nova_compute[254092]: 2025-11-25 17:42:34.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:42:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 89c7d221-3a4f-4446-82d1-d4bb78e92d0b does not exist
Nov 25 12:42:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fc7bc80e-d1a8-4fbf-8c84-50a93599a769 does not exist
Nov 25 12:42:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4cb79e3f-4e14-4f71-9028-d4f0c77c0497 does not exist
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:42:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:42:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.050430885 +0000 UTC m=+0.061516695 container create c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:42:36 np0005535469 nova_compute[254092]: 2025-11-25 17:42:36.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:36 np0005535469 systemd[1]: Started libpod-conmon-c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f.scope.
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.028875748 +0000 UTC m=+0.039961588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:42:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.16596421 +0000 UTC m=+0.177050100 container init c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.177454223 +0000 UTC m=+0.188540033 container start c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.180606899 +0000 UTC m=+0.191692739 container attach c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:42:36 np0005535469 intelligent_villani[445591]: 167 167
Nov 25 12:42:36 np0005535469 systemd[1]: libpod-c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f.scope: Deactivated successfully.
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.186769156 +0000 UTC m=+0.197854966 container died c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:42:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2a028c1e582b8fa4003ebeda27832eccaf9849e51608d90de9670b3615be09e1-merged.mount: Deactivated successfully.
Nov 25 12:42:36 np0005535469 podman[445575]: 2025-11-25 17:42:36.235709829 +0000 UTC m=+0.246795669 container remove c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:42:36 np0005535469 systemd[1]: libpod-conmon-c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f.scope: Deactivated successfully.
Nov 25 12:42:36 np0005535469 podman[445616]: 2025-11-25 17:42:36.482187809 +0000 UTC m=+0.065925246 container create a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:42:36 np0005535469 systemd[1]: Started libpod-conmon-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope.
Nov 25 12:42:36 np0005535469 podman[445616]: 2025-11-25 17:42:36.45837146 +0000 UTC m=+0.042108987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:42:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:42:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:36 np0005535469 podman[445616]: 2025-11-25 17:42:36.580246499 +0000 UTC m=+0.163984016 container init a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:42:36 np0005535469 podman[445616]: 2025-11-25 17:42:36.592166393 +0000 UTC m=+0.175903870 container start a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 12:42:36 np0005535469 podman[445616]: 2025-11-25 17:42:36.596324307 +0000 UTC m=+0.180061774 container attach a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:42:37 np0005535469 hungry_tharp[445633]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:42:37 np0005535469 hungry_tharp[445633]: --> relative data size: 1.0
Nov 25 12:42:37 np0005535469 hungry_tharp[445633]: --> All data devices are unavailable
Nov 25 12:42:37 np0005535469 systemd[1]: libpod-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope: Deactivated successfully.
Nov 25 12:42:37 np0005535469 podman[445616]: 2025-11-25 17:42:37.737096073 +0000 UTC m=+1.320833590 container died a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:42:37 np0005535469 systemd[1]: libpod-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope: Consumed 1.097s CPU time.
Nov 25 12:42:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af-merged.mount: Deactivated successfully.
Nov 25 12:42:37 np0005535469 podman[445616]: 2025-11-25 17:42:37.972281335 +0000 UTC m=+1.556018782 container remove a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:42:37 np0005535469 systemd[1]: libpod-conmon-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope: Deactivated successfully.
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.74053695 +0000 UTC m=+0.057026584 container create 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:42:38 np0005535469 systemd[1]: Started libpod-conmon-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope.
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.714721727 +0000 UTC m=+0.031211471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:42:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.836972584 +0000 UTC m=+0.153462208 container init 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.851333316 +0000 UTC m=+0.167822960 container start 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.854675167 +0000 UTC m=+0.171164811 container attach 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:42:38 np0005535469 mystifying_poincare[445834]: 167 167
Nov 25 12:42:38 np0005535469 systemd[1]: libpod-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope: Deactivated successfully.
Nov 25 12:42:38 np0005535469 conmon[445834]: conmon 1450897e3a9a98215ea2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope/container/memory.events
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.861500472 +0000 UTC m=+0.177990116 container died 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:42:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-578b42a2abf39c16195dfa0f0ac0a58372c8286582727137e13975b8bb3f8cbb-merged.mount: Deactivated successfully.
Nov 25 12:42:38 np0005535469 podman[445817]: 2025-11-25 17:42:38.913707724 +0000 UTC m=+0.230197378 container remove 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:42:38 np0005535469 systemd[1]: libpod-conmon-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope: Deactivated successfully.
Nov 25 12:42:39 np0005535469 podman[445857]: 2025-11-25 17:42:39.098229378 +0000 UTC m=+0.056260403 container create 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:42:39 np0005535469 systemd[1]: Started libpod-conmon-77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11.scope.
Nov 25 12:42:39 np0005535469 podman[445857]: 2025-11-25 17:42:39.069587817 +0000 UTC m=+0.027618932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:42:39 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:42:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:39 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:39 np0005535469 podman[445857]: 2025-11-25 17:42:39.230211291 +0000 UTC m=+0.188242316 container init 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:42:39 np0005535469 podman[445857]: 2025-11-25 17:42:39.237435297 +0000 UTC m=+0.195466302 container start 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 12:42:39 np0005535469 podman[445857]: 2025-11-25 17:42:39.240705426 +0000 UTC m=+0.198736441 container attach 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:42:39 np0005535469 nova_compute[254092]: 2025-11-25 17:42:39.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]: {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:    "0": [
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:        {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "devices": [
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "/dev/loop3"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            ],
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_name": "ceph_lv0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_size": "21470642176",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "name": "ceph_lv0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "tags": {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cluster_name": "ceph",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.crush_device_class": "",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.encrypted": "0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osd_id": "0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.type": "block",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.vdo": "0"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            },
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "type": "block",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "vg_name": "ceph_vg0"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:        }
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:    ],
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:    "1": [
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:        {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "devices": [
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "/dev/loop4"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            ],
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_name": "ceph_lv1",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_size": "21470642176",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "name": "ceph_lv1",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "tags": {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cluster_name": "ceph",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.crush_device_class": "",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.encrypted": "0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osd_id": "1",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.type": "block",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.vdo": "0"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            },
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "type": "block",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "vg_name": "ceph_vg1"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:        }
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:    ],
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:    "2": [
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:        {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "devices": [
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "/dev/loop5"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            ],
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_name": "ceph_lv2",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_size": "21470642176",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "name": "ceph_lv2",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "tags": {
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.cluster_name": "ceph",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.crush_device_class": "",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.encrypted": "0",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osd_id": "2",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.type": "block",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:                "ceph.vdo": "0"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            },
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "type": "block",
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:            "vg_name": "ceph_vg2"
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:        }
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]:    ]
Nov 25 12:42:40 np0005535469 confident_chandrasekhar[445873]: }
Nov 25 12:42:40 np0005535469 podman[445857]: 2025-11-25 17:42:40.046371739 +0000 UTC m=+1.004402754 container died 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:42:40 np0005535469 systemd[1]: libpod-77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11.scope: Deactivated successfully.
Nov 25 12:42:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422-merged.mount: Deactivated successfully.
Nov 25 12:42:40 np0005535469 podman[445857]: 2025-11-25 17:42:40.11951134 +0000 UTC m=+1.077542355 container remove 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:42:40 np0005535469 systemd[1]: libpod-conmon-77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11.scope: Deactivated successfully.
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:42:40
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.753658924 +0000 UTC m=+0.041665355 container create c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:42:40 np0005535469 systemd[1]: Started libpod-conmon-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope.
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.737909125 +0000 UTC m=+0.025915576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:42:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:42:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.85050242 +0000 UTC m=+0.138508881 container init c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.860580955 +0000 UTC m=+0.148587386 container start c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.864015538 +0000 UTC m=+0.152021979 container attach c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:42:40 np0005535469 magical_galois[446052]: 167 167
Nov 25 12:42:40 np0005535469 systemd[1]: libpod-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope: Deactivated successfully.
Nov 25 12:42:40 np0005535469 conmon[446052]: conmon c6647a7cb2fff15f9bd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope/container/memory.events
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.868010667 +0000 UTC m=+0.156017088 container died c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:42:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0e53bb27308a75dbf2d2c54ef4431854f28e0b6ac2e63d1cac41e8ab56cda547-merged.mount: Deactivated successfully.
Nov 25 12:42:40 np0005535469 podman[446036]: 2025-11-25 17:42:40.905506668 +0000 UTC m=+0.193513099 container remove c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:42:40 np0005535469 systemd[1]: libpod-conmon-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope: Deactivated successfully.
Nov 25 12:42:41 np0005535469 nova_compute[254092]: 2025-11-25 17:42:41.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:41 np0005535469 podman[446075]: 2025-11-25 17:42:41.092470988 +0000 UTC m=+0.053138228 container create 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:42:41 np0005535469 systemd[1]: Started libpod-conmon-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope.
Nov 25 12:42:41 np0005535469 podman[446075]: 2025-11-25 17:42:41.067334834 +0000 UTC m=+0.028002154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:42:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:42:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:42:41 np0005535469 podman[446075]: 2025-11-25 17:42:41.191040911 +0000 UTC m=+0.151708181 container init 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:42:41 np0005535469 podman[446075]: 2025-11-25 17:42:41.208612349 +0000 UTC m=+0.169279609 container start 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:42:41 np0005535469 podman[446075]: 2025-11-25 17:42:41.212583897 +0000 UTC m=+0.173251127 container attach 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:42:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]: {
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "osd_id": 1,
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "type": "bluestore"
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:    },
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "osd_id": 2,
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "type": "bluestore"
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:    },
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "osd_id": 0,
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:        "type": "bluestore"
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]:    }
Nov 25 12:42:42 np0005535469 condescending_mclean[446091]: }
Nov 25 12:42:42 np0005535469 systemd[1]: libpod-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope: Deactivated successfully.
Nov 25 12:42:42 np0005535469 systemd[1]: libpod-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope: Consumed 1.149s CPU time.
Nov 25 12:42:42 np0005535469 podman[446075]: 2025-11-25 17:42:42.345775087 +0000 UTC m=+1.306442317 container died 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:42:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5-merged.mount: Deactivated successfully.
Nov 25 12:42:42 np0005535469 podman[446075]: 2025-11-25 17:42:42.411812535 +0000 UTC m=+1.372479765 container remove 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:42:42 np0005535469 systemd[1]: libpod-conmon-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope: Deactivated successfully.
Nov 25 12:42:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:42:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:42:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:42:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:42:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7efaf783-1abb-4fad-a49a-8ebdd9d2e0cd does not exist
Nov 25 12:42:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 22605c26-2e55-42a6-8b7e-d6600952a2d7 does not exist
Nov 25 12:42:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:42:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:42:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:44 np0005535469 podman[446190]: 2025-11-25 17:42:44.65047362 +0000 UTC m=+0.059713738 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 12:42:44 np0005535469 podman[446189]: 2025-11-25 17:42:44.664062239 +0000 UTC m=+0.082439735 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:42:44 np0005535469 podman[446191]: 2025-11-25 17:42:44.685429011 +0000 UTC m=+0.093723032 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:42:44 np0005535469 nova_compute[254092]: 2025-11-25 17:42:44.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:46 np0005535469 nova_compute[254092]: 2025-11-25 17:42:46.079 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:49 np0005535469 nova_compute[254092]: 2025-11-25 17:42:49.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:51 np0005535469 nova_compute[254092]: 2025-11-25 17:42:51.080 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:42:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:42:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:54 np0005535469 nova_compute[254092]: 2025-11-25 17:42:54.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:42:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:42:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653637048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:42:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:42:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653637048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:42:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:56 np0005535469 nova_compute[254092]: 2025-11-25 17:42:56.082 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:59 np0005535469 nova_compute[254092]: 2025-11-25 17:42:59.753 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:42:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:42:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:01 np0005535469 nova_compute[254092]: 2025-11-25 17:43:01.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:01 np0005535469 nova_compute[254092]: 2025-11-25 17:43:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:02 np0005535469 nova_compute[254092]: 2025-11-25 17:43:02.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.539 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:43:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852068403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:43:04 np0005535469 nova_compute[254092]: 2025-11-25 17:43:04.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.174 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.175 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3622MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.268 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.268 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.308 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:43:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:43:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948473468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.743 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.751 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:43:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.767 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.770 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:43:05 np0005535469 nova_compute[254092]: 2025-11-25 17:43:05.771 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:43:06 np0005535469 nova_compute[254092]: 2025-11-25 17:43:06.087 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:08 np0005535469 nova_compute[254092]: 2025-11-25 17:43:08.772 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:08 np0005535469 nova_compute[254092]: 2025-11-25 17:43:08.773 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:08 np0005535469 nova_compute[254092]: 2025-11-25 17:43:08.773 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:43:08 np0005535469 nova_compute[254092]: 2025-11-25 17:43:08.774 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:43:08 np0005535469 nova_compute[254092]: 2025-11-25 17:43:08.789 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:43:08 np0005535469 nova_compute[254092]: 2025-11-25 17:43:08.789 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:09 np0005535469 nova_compute[254092]: 2025-11-25 17:43:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:09 np0005535469 nova_compute[254092]: 2025-11-25 17:43:09.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:09 np0005535469 nova_compute[254092]: 2025-11-25 17:43:09.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:43:09 np0005535469 nova_compute[254092]: 2025-11-25 17:43:09.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:43:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:43:11 np0005535469 nova_compute[254092]: 2025-11-25 17:43:11.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:43:13.685 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:43:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:43:13.686 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:43:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:43:13.686 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:43:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:14 np0005535469 nova_compute[254092]: 2025-11-25 17:43:14.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:15 np0005535469 podman[446299]: 2025-11-25 17:43:15.6787647 +0000 UTC m=+0.085317414 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Nov 25 12:43:15 np0005535469 podman[446300]: 2025-11-25 17:43:15.703494414 +0000 UTC m=+0.101375952 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:43:15 np0005535469 podman[446301]: 2025-11-25 17:43:15.717399511 +0000 UTC m=+0.111391772 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:43:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:16 np0005535469 nova_compute[254092]: 2025-11-25 17:43:16.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:19 np0005535469 nova_compute[254092]: 2025-11-25 17:43:19.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:21 np0005535469 nova_compute[254092]: 2025-11-25 17:43:21.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:21 np0005535469 nova_compute[254092]: 2025-11-25 17:43:21.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:43:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:24 np0005535469 nova_compute[254092]: 2025-11-25 17:43:24.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:26 np0005535469 nova_compute[254092]: 2025-11-25 17:43:26.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:29 np0005535469 nova_compute[254092]: 2025-11-25 17:43:29.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:31 np0005535469 nova_compute[254092]: 2025-11-25 17:43:31.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:34 np0005535469 nova_compute[254092]: 2025-11-25 17:43:34.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:36 np0005535469 nova_compute[254092]: 2025-11-25 17:43:36.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:39 np0005535469 nova_compute[254092]: 2025-11-25 17:43:39.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:43:40
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:43:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:43:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.2 total, 600.0 interval#012Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 307 writes, 622 keys, 307 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 307 writes, 149 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:43:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:43:41 np0005535469 nova_compute[254092]: 2025-11-25 17:43:41.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:43:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 208f4db1-f3c0-4d93-b824-b74749aac47e does not exist
Nov 25 12:43:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e9e459e3-d8b9-4d78-8be2-224a4b503e2c does not exist
Nov 25 12:43:43 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f48eb4e-0b74-4505-9455-3ab589a294e8 does not exist
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:43:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:43:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.445568085 +0000 UTC m=+0.061250188 container create 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:43:44 np0005535469 systemd[1]: Started libpod-conmon-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope.
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.422424045 +0000 UTC m=+0.038106168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:43:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.538947888 +0000 UTC m=+0.154630071 container init 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.553051801 +0000 UTC m=+0.168733934 container start 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.556986428 +0000 UTC m=+0.172668621 container attach 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:43:44 np0005535469 optimistic_bose[446649]: 167 167
Nov 25 12:43:44 np0005535469 systemd[1]: libpod-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope: Deactivated successfully.
Nov 25 12:43:44 np0005535469 conmon[446649]: conmon 4ee08558c99e00fa8201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope/container/memory.events
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.563993809 +0000 UTC m=+0.179675942 container died 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:43:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-780f73d29e875c9b6423ff32980167577dcdce7ea8d3d85d17073131581fb0d5-merged.mount: Deactivated successfully.
Nov 25 12:43:44 np0005535469 podman[446632]: 2025-11-25 17:43:44.619586673 +0000 UTC m=+0.235268766 container remove 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 12:43:44 np0005535469 systemd[1]: libpod-conmon-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope: Deactivated successfully.
Nov 25 12:43:44 np0005535469 podman[446672]: 2025-11-25 17:43:44.864689405 +0000 UTC m=+0.076822422 container create 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 12:43:44 np0005535469 nova_compute[254092]: 2025-11-25 17:43:44.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:44 np0005535469 systemd[1]: Started libpod-conmon-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope.
Nov 25 12:43:44 np0005535469 podman[446672]: 2025-11-25 17:43:44.835382918 +0000 UTC m=+0.047515945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:43:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:43:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:44 np0005535469 podman[446672]: 2025-11-25 17:43:44.974231237 +0000 UTC m=+0.186364334 container init 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:43:44 np0005535469 podman[446672]: 2025-11-25 17:43:44.983363675 +0000 UTC m=+0.195496732 container start 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:43:44 np0005535469 podman[446672]: 2025-11-25 17:43:44.987978731 +0000 UTC m=+0.200111838 container attach 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:43:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:46 np0005535469 crazy_taussig[446689]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:43:46 np0005535469 crazy_taussig[446689]: --> relative data size: 1.0
Nov 25 12:43:46 np0005535469 crazy_taussig[446689]: --> All data devices are unavailable
Nov 25 12:43:46 np0005535469 systemd[1]: libpod-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope: Deactivated successfully.
Nov 25 12:43:46 np0005535469 systemd[1]: libpod-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope: Consumed 1.039s CPU time.
Nov 25 12:43:46 np0005535469 podman[446672]: 2025-11-25 17:43:46.075009404 +0000 UTC m=+1.287142421 container died 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:43:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb-merged.mount: Deactivated successfully.
Nov 25 12:43:46 np0005535469 nova_compute[254092]: 2025-11-25 17:43:46.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:46 np0005535469 podman[446672]: 2025-11-25 17:43:46.22250866 +0000 UTC m=+1.434641677 container remove 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:43:46 np0005535469 podman[446719]: 2025-11-25 17:43:46.227949838 +0000 UTC m=+0.112889615 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 12:43:46 np0005535469 systemd[1]: libpod-conmon-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope: Deactivated successfully.
Nov 25 12:43:46 np0005535469 podman[446729]: 2025-11-25 17:43:46.272435559 +0000 UTC m=+0.156943433 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 12:43:46 np0005535469 podman[446730]: 2025-11-25 17:43:46.308771438 +0000 UTC m=+0.182552711 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:47.001463136 +0000 UTC m=+0.062127642 container create aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:43:47 np0005535469 systemd[1]: Started libpod-conmon-aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125.scope.
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:46.981200314 +0000 UTC m=+0.041864850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:43:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:47.109920269 +0000 UTC m=+0.170584815 container init aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:47.121057512 +0000 UTC m=+0.181722018 container start aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:47.125232235 +0000 UTC m=+0.185896791 container attach aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:43:47 np0005535469 agitated_brown[446954]: 167 167
Nov 25 12:43:47 np0005535469 systemd[1]: libpod-aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125.scope: Deactivated successfully.
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:47.129839161 +0000 UTC m=+0.190503687 container died aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:43:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-71dca24b45f1536aa7786ac7204965d1b767e1fc0761e7f17d93ebbfbfcd6258-merged.mount: Deactivated successfully.
Nov 25 12:43:47 np0005535469 podman[446938]: 2025-11-25 17:43:47.18124114 +0000 UTC m=+0.241905666 container remove aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:43:47 np0005535469 systemd[1]: libpod-conmon-aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125.scope: Deactivated successfully.
Nov 25 12:43:47 np0005535469 podman[446979]: 2025-11-25 17:43:47.441476544 +0000 UTC m=+0.074872099 container create 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:43:47 np0005535469 systemd[1]: Started libpod-conmon-00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9.scope.
Nov 25 12:43:47 np0005535469 podman[446979]: 2025-11-25 17:43:47.405978418 +0000 UTC m=+0.039374083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:43:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:43:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:47 np0005535469 podman[446979]: 2025-11-25 17:43:47.549600138 +0000 UTC m=+0.182995793 container init 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:43:47 np0005535469 podman[446979]: 2025-11-25 17:43:47.563888697 +0000 UTC m=+0.197284282 container start 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 12:43:47 np0005535469 podman[446979]: 2025-11-25 17:43:47.568337828 +0000 UTC m=+0.201733413 container attach 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:43:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]: {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:    "0": [
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:        {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "devices": [
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "/dev/loop3"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            ],
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_name": "ceph_lv0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_size": "21470642176",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "name": "ceph_lv0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "tags": {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cluster_name": "ceph",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.crush_device_class": "",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.encrypted": "0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osd_id": "0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.type": "block",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.vdo": "0"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            },
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "type": "block",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "vg_name": "ceph_vg0"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:        }
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:    ],
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:    "1": [
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:        {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "devices": [
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "/dev/loop4"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            ],
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_name": "ceph_lv1",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_size": "21470642176",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "name": "ceph_lv1",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "tags": {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cluster_name": "ceph",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.crush_device_class": "",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.encrypted": "0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osd_id": "1",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.type": "block",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.vdo": "0"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            },
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "type": "block",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "vg_name": "ceph_vg1"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:        }
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:    ],
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:    "2": [
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:        {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "devices": [
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "/dev/loop5"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            ],
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_name": "ceph_lv2",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_size": "21470642176",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "name": "ceph_lv2",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "tags": {
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.cluster_name": "ceph",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.crush_device_class": "",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.encrypted": "0",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osd_id": "2",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.type": "block",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:                "ceph.vdo": "0"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            },
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "type": "block",
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:            "vg_name": "ceph_vg2"
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:        }
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]:    ]
Nov 25 12:43:48 np0005535469 trusting_chebyshev[446995]: }
Nov 25 12:43:48 np0005535469 systemd[1]: libpod-00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9.scope: Deactivated successfully.
Nov 25 12:43:48 np0005535469 podman[446979]: 2025-11-25 17:43:48.380322573 +0000 UTC m=+1.013718128 container died 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:43:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb-merged.mount: Deactivated successfully.
Nov 25 12:43:48 np0005535469 podman[446979]: 2025-11-25 17:43:48.441829768 +0000 UTC m=+1.075225353 container remove 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:43:48 np0005535469 systemd[1]: libpod-conmon-00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9.scope: Deactivated successfully.
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.228048432 +0000 UTC m=+0.050924458 container create 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:43:49 np0005535469 systemd[1]: Started libpod-conmon-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope.
Nov 25 12:43:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.207877772 +0000 UTC m=+0.030753848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.315840161 +0000 UTC m=+0.138716237 container init 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.322332808 +0000 UTC m=+0.145208834 container start 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:43:49 np0005535469 keen_edison[447173]: 167 167
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.326416129 +0000 UTC m=+0.149292205 container attach 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 12:43:49 np0005535469 systemd[1]: libpod-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope: Deactivated successfully.
Nov 25 12:43:49 np0005535469 conmon[447173]: conmon 767c3a9c33420972014e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope/container/memory.events
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.328097775 +0000 UTC m=+0.150973831 container died 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:43:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-48e950f39815ae0280b782457ac2b597297036cea13f5f10ef8f113056a6c3b7-merged.mount: Deactivated successfully.
Nov 25 12:43:49 np0005535469 podman[447157]: 2025-11-25 17:43:49.369184514 +0000 UTC m=+0.192060580 container remove 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:43:49 np0005535469 systemd[1]: libpod-conmon-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope: Deactivated successfully.
Nov 25 12:43:49 np0005535469 podman[447197]: 2025-11-25 17:43:49.583527979 +0000 UTC m=+0.062386540 container create bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:43:49 np0005535469 systemd[1]: Started libpod-conmon-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope.
Nov 25 12:43:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:43:49 np0005535469 podman[447197]: 2025-11-25 17:43:49.562587589 +0000 UTC m=+0.041446150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:43:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:49 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:43:49 np0005535469 podman[447197]: 2025-11-25 17:43:49.676945952 +0000 UTC m=+0.155804493 container init bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:43:49 np0005535469 podman[447197]: 2025-11-25 17:43:49.688051265 +0000 UTC m=+0.166909796 container start bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:43:49 np0005535469 podman[447197]: 2025-11-25 17:43:49.692535436 +0000 UTC m=+0.171393997 container attach bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:43:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:49 np0005535469 nova_compute[254092]: 2025-11-25 17:43:49.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:43:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6601.2 total, 600.0 interval#012Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 16K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 300 writes, 695 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s#012Interval WAL: 300 writes, 143 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:43:50 np0005535469 eager_bose[447213]: {
Nov 25 12:43:50 np0005535469 eager_bose[447213]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "osd_id": 1,
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "type": "bluestore"
Nov 25 12:43:50 np0005535469 eager_bose[447213]:    },
Nov 25 12:43:50 np0005535469 eager_bose[447213]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "osd_id": 2,
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "type": "bluestore"
Nov 25 12:43:50 np0005535469 eager_bose[447213]:    },
Nov 25 12:43:50 np0005535469 eager_bose[447213]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "osd_id": 0,
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:43:50 np0005535469 eager_bose[447213]:        "type": "bluestore"
Nov 25 12:43:50 np0005535469 eager_bose[447213]:    }
Nov 25 12:43:50 np0005535469 eager_bose[447213]: }
Nov 25 12:43:50 np0005535469 systemd[1]: libpod-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope: Deactivated successfully.
Nov 25 12:43:50 np0005535469 podman[447197]: 2025-11-25 17:43:50.777171064 +0000 UTC m=+1.256029595 container died bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:43:50 np0005535469 systemd[1]: libpod-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope: Consumed 1.100s CPU time.
Nov 25 12:43:50 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b-merged.mount: Deactivated successfully.
Nov 25 12:43:50 np0005535469 podman[447197]: 2025-11-25 17:43:50.84094976 +0000 UTC m=+1.319808311 container remove bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:43:50 np0005535469 systemd[1]: libpod-conmon-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope: Deactivated successfully.
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:43:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9d8d7507-6072-4d7f-b255-a7cf8d1d3a3b does not exist
Nov 25 12:43:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 723ecaec-4780-4d5d-be81-344ffec3b8c9 does not exist
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.909000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630909052, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1042, "num_deletes": 251, "total_data_size": 1512987, "memory_usage": 1541584, "flush_reason": "Manual Compaction"}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630917080, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1487572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74120, "largest_seqno": 75161, "table_properties": {"data_size": 1482474, "index_size": 2621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10867, "raw_average_key_size": 19, "raw_value_size": 1472281, "raw_average_value_size": 2662, "num_data_blocks": 118, "num_entries": 553, "num_filter_entries": 553, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092530, "oldest_key_time": 1764092530, "file_creation_time": 1764092630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 8119 microseconds, and 3664 cpu microseconds.
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.917129) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1487572 bytes OK
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.917145) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918193) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918203) EVENT_LOG_v1 {"time_micros": 1764092630918200, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1508106, prev total WAL file size 1508106, number of live WAL files 2.
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918787) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1452KB)], [173(9575KB)]
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630918854, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 11292989, "oldest_snapshot_seqno": -1}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9166 keys, 9577325 bytes, temperature: kUnknown
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630978467, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9577325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9521681, "index_size": 31595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241122, "raw_average_key_size": 26, "raw_value_size": 9363653, "raw_average_value_size": 1021, "num_data_blocks": 1211, "num_entries": 9166, "num_filter_entries": 9166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.978953) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9577325 bytes
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.980344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.0 rd, 160.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(14.0) write-amplify(6.4) OK, records in: 9680, records dropped: 514 output_compression: NoCompression
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.980377) EVENT_LOG_v1 {"time_micros": 1764092630980360, "job": 108, "event": "compaction_finished", "compaction_time_micros": 59745, "compaction_time_cpu_micros": 40873, "output_level": 6, "num_output_files": 1, "total_output_size": 9577325, "num_input_records": 9680, "num_output_records": 9166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630981216, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630985195, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:43:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:43:51 np0005535469 nova_compute[254092]: 2025-11-25 17:43:51.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:43:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:43:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:54 np0005535469 nova_compute[254092]: 2025-11-25 17:43:54.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:43:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:43:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138638566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:43:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:43:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138638566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:43:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:56 np0005535469 nova_compute[254092]: 2025-11-25 17:43:56.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:43:59 np0005535469 nova_compute[254092]: 2025-11-25 17:43:59.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:43:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:01 np0005535469 nova_compute[254092]: 2025-11-25 17:44:01.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:01 np0005535469 nova_compute[254092]: 2025-11-25 17:44:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:02 np0005535469 nova_compute[254092]: 2025-11-25 17:44:02.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:04 np0005535469 nova_compute[254092]: 2025-11-25 17:44:04.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:44:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:44:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617392977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:44:05 np0005535469 nova_compute[254092]: 2025-11-25 17:44:05.987 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.158 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.159 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.253 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.287 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.287 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.309 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:44:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:44:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451724122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.757 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.763 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.794 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.796 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:44:06 np0005535469 nova_compute[254092]: 2025-11-25 17:44:06.796 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:44:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:44:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6601.5 total, 600.0 interval#012Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 230 writes, 420 keys, 230 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s#012Interval WAL: 230 writes, 108 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:44:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.791 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.791 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.792 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.792 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:44:09 np0005535469 nova_compute[254092]: 2025-11-25 17:44:09.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:44:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:44:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:44:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:44:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:44:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:44:11 np0005535469 nova_compute[254092]: 2025-11-25 17:44:11.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:11 np0005535469 nova_compute[254092]: 2025-11-25 17:44:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:44:13.687 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:44:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:44:13.688 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:44:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:44:13.688 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:44:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:14 np0005535469 nova_compute[254092]: 2025-11-25 17:44:14.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:16 np0005535469 nova_compute[254092]: 2025-11-25 17:44:16.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:16 np0005535469 podman[447350]: 2025-11-25 17:44:16.663209187 +0000 UTC m=+0.072176526 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 25 12:44:16 np0005535469 podman[447351]: 2025-11-25 17:44:16.678938025 +0000 UTC m=+0.088088749 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 12:44:16 np0005535469 podman[447352]: 2025-11-25 17:44:16.69236052 +0000 UTC m=+0.101067792 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:44:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:19 np0005535469 nova_compute[254092]: 2025-11-25 17:44:19.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:20 np0005535469 nova_compute[254092]: 2025-11-25 17:44:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:20 np0005535469 nova_compute[254092]: 2025-11-25 17:44:20.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:44:21 np0005535469 nova_compute[254092]: 2025-11-25 17:44:21.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:22 np0005535469 nova_compute[254092]: 2025-11-25 17:44:22.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:22 np0005535469 nova_compute[254092]: 2025-11-25 17:44:22.525 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:22 np0005535469 nova_compute[254092]: 2025-11-25 17:44:22.525 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:44:22 np0005535469 nova_compute[254092]: 2025-11-25 17:44:22.535 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:44:23 np0005535469 nova_compute[254092]: 2025-11-25 17:44:23.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:24 np0005535469 nova_compute[254092]: 2025-11-25 17:44:24.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 12:44:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:26 np0005535469 nova_compute[254092]: 2025-11-25 17:44:26.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:29 np0005535469 nova_compute[254092]: 2025-11-25 17:44:29.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:31 np0005535469 nova_compute[254092]: 2025-11-25 17:44:31.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:34 np0005535469 nova_compute[254092]: 2025-11-25 17:44:34.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:36 np0005535469 nova_compute[254092]: 2025-11-25 17:44:36.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:39 np0005535469 nova_compute[254092]: 2025-11-25 17:44:39.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:44:40
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:44:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:44:41 np0005535469 nova_compute[254092]: 2025-11-25 17:44:41.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:43 np0005535469 nova_compute[254092]: 2025-11-25 17:44:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:44:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:44 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:44 np0005535469 nova_compute[254092]: 2025-11-25 17:44:44.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:46 np0005535469 nova_compute[254092]: 2025-11-25 17:44:46.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:47 np0005535469 podman[447412]: 2025-11-25 17:44:47.679321628 +0000 UTC m=+0.089540459 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:44:47 np0005535469 podman[447411]: 2025-11-25 17:44:47.688160579 +0000 UTC m=+0.105964266 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:44:47 np0005535469 podman[447413]: 2025-11-25 17:44:47.703419884 +0000 UTC m=+0.107827186 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:44:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:49 np0005535469 nova_compute[254092]: 2025-11-25 17:44:49.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:51 np0005535469 nova_compute[254092]: 2025-11-25 17:44:51.385 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:44:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:44:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:44:51 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:44:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9e656d33-ef8a-41d1-929f-5ce50c3a5e8b does not exist
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e81ff122-c5d9-46c6-b58e-dc17b050aaac does not exist
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f5fe6f9-ff37-4191-901c-564b49a77c7e does not exist
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:44:52 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:44:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:44:52 np0005535469 podman[447748]: 2025-11-25 17:44:52.81725428 +0000 UTC m=+0.111322461 container create eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:44:52 np0005535469 podman[447748]: 2025-11-25 17:44:52.735197807 +0000 UTC m=+0.029266018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:44:52 np0005535469 systemd[1]: Started libpod-conmon-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope.
Nov 25 12:44:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:44:52 np0005535469 podman[447748]: 2025-11-25 17:44:52.981260055 +0000 UTC m=+0.275328316 container init eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:44:52 np0005535469 podman[447748]: 2025-11-25 17:44:52.992055139 +0000 UTC m=+0.286123350 container start eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:44:52 np0005535469 elastic_lewin[447764]: 167 167
Nov 25 12:44:53 np0005535469 systemd[1]: libpod-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope: Deactivated successfully.
Nov 25 12:44:53 np0005535469 conmon[447764]: conmon eb9c4fcd1b23441e60e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope/container/memory.events
Nov 25 12:44:53 np0005535469 podman[447748]: 2025-11-25 17:44:53.123850737 +0000 UTC m=+0.417919008 container attach eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:44:53 np0005535469 podman[447748]: 2025-11-25 17:44:53.125849991 +0000 UTC m=+0.419918202 container died eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:44:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8a8672e82e180c0e7261c0ebee47cb1c345c24f877bdb30880b4841c71bb6631-merged.mount: Deactivated successfully.
Nov 25 12:44:53 np0005535469 podman[447748]: 2025-11-25 17:44:53.554625995 +0000 UTC m=+0.848694196 container remove eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:44:53 np0005535469 systemd[1]: libpod-conmon-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope: Deactivated successfully.
Nov 25 12:44:53 np0005535469 podman[447788]: 2025-11-25 17:44:53.800359084 +0000 UTC m=+0.086063134 container create f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:44:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:53 np0005535469 podman[447788]: 2025-11-25 17:44:53.738066819 +0000 UTC m=+0.023770849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:44:53 np0005535469 systemd[1]: Started libpod-conmon-f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7.scope.
Nov 25 12:44:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:44:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:54 np0005535469 podman[447788]: 2025-11-25 17:44:54.02424117 +0000 UTC m=+0.309945220 container init f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:44:54 np0005535469 podman[447788]: 2025-11-25 17:44:54.031570049 +0000 UTC m=+0.317274099 container start f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:44:54 np0005535469 podman[447788]: 2025-11-25 17:44:54.134202973 +0000 UTC m=+0.419907063 container attach f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:44:54 np0005535469 nice_nobel[447803]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:44:54 np0005535469 nice_nobel[447803]: --> relative data size: 1.0
Nov 25 12:44:54 np0005535469 nice_nobel[447803]: --> All data devices are unavailable
Nov 25 12:44:55 np0005535469 nova_compute[254092]: 2025-11-25 17:44:54.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:44:55 np0005535469 systemd[1]: libpod-f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7.scope: Deactivated successfully.
Nov 25 12:44:55 np0005535469 podman[447788]: 2025-11-25 17:44:55.023800631 +0000 UTC m=+1.309504651 container died f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:44:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd-merged.mount: Deactivated successfully.
Nov 25 12:44:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:44:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/898474702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:44:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:44:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/898474702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:44:55 np0005535469 podman[447788]: 2025-11-25 17:44:55.482279973 +0000 UTC m=+1.767983983 container remove f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:44:55 np0005535469 systemd[1]: libpod-conmon-f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7.scope: Deactivated successfully.
Nov 25 12:44:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.087566981 +0000 UTC m=+0.040486363 container create 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:44:56 np0005535469 systemd[1]: Started libpod-conmon-1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5.scope.
Nov 25 12:44:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.069004906 +0000 UTC m=+0.021924328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.170405236 +0000 UTC m=+0.123324638 container init 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.176624155 +0000 UTC m=+0.129543577 container start 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.180160861 +0000 UTC m=+0.133080273 container attach 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 12:44:56 np0005535469 happy_bartik[447999]: 167 167
Nov 25 12:44:56 np0005535469 systemd[1]: libpod-1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5.scope: Deactivated successfully.
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.182557717 +0000 UTC m=+0.135477099 container died 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:44:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f3d11aa6a4423eaf0e26514e8344cf6c74e301e204f3bb8176da1e22710f3e30-merged.mount: Deactivated successfully.
Nov 25 12:44:56 np0005535469 podman[447982]: 2025-11-25 17:44:56.218002412 +0000 UTC m=+0.170921784 container remove 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:44:56 np0005535469 systemd[1]: libpod-conmon-1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5.scope: Deactivated successfully.
Nov 25 12:44:56 np0005535469 podman[448022]: 2025-11-25 17:44:56.40748217 +0000 UTC m=+0.049615311 container create 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:44:56 np0005535469 nova_compute[254092]: 2025-11-25 17:44:56.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:44:56 np0005535469 systemd[1]: Started libpod-conmon-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope.
Nov 25 12:44:56 np0005535469 podman[448022]: 2025-11-25 17:44:56.384749571 +0000 UTC m=+0.026882762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:44:56 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:44:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:56 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:56 np0005535469 podman[448022]: 2025-11-25 17:44:56.492808113 +0000 UTC m=+0.134941254 container init 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:44:56 np0005535469 podman[448022]: 2025-11-25 17:44:56.501975863 +0000 UTC m=+0.144108994 container start 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:44:56 np0005535469 podman[448022]: 2025-11-25 17:44:56.50520943 +0000 UTC m=+0.147342571 container attach 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]: {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:    "0": [
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:        {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "devices": [
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "/dev/loop3"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            ],
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_name": "ceph_lv0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_size": "21470642176",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "name": "ceph_lv0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "tags": {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cluster_name": "ceph",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.crush_device_class": "",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.encrypted": "0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osd_id": "0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.type": "block",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.vdo": "0"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            },
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "type": "block",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "vg_name": "ceph_vg0"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:        }
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:    ],
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:    "1": [
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:        {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "devices": [
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "/dev/loop4"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            ],
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_name": "ceph_lv1",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_size": "21470642176",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "name": "ceph_lv1",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "tags": {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cluster_name": "ceph",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.crush_device_class": "",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.encrypted": "0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osd_id": "1",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.type": "block",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.vdo": "0"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            },
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "type": "block",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "vg_name": "ceph_vg1"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:        }
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:    ],
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:    "2": [
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:        {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "devices": [
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "/dev/loop5"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            ],
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_name": "ceph_lv2",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_size": "21470642176",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "name": "ceph_lv2",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "tags": {
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.cluster_name": "ceph",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.crush_device_class": "",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.encrypted": "0",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osd_id": "2",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.type": "block",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:                "ceph.vdo": "0"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            },
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "type": "block",
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:            "vg_name": "ceph_vg2"
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:        }
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]:    ]
Nov 25 12:44:57 np0005535469 awesome_elgamal[448038]: }
Nov 25 12:44:57 np0005535469 systemd[1]: libpod-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope: Deactivated successfully.
Nov 25 12:44:57 np0005535469 conmon[448038]: conmon 392a00429266d61ee25c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope/container/memory.events
Nov 25 12:44:57 np0005535469 podman[448022]: 2025-11-25 17:44:57.311068639 +0000 UTC m=+0.953201770 container died 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:44:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c-merged.mount: Deactivated successfully.
Nov 25 12:44:57 np0005535469 podman[448022]: 2025-11-25 17:44:57.357058801 +0000 UTC m=+0.999191932 container remove 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:44:57 np0005535469 systemd[1]: libpod-conmon-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope: Deactivated successfully.
Nov 25 12:44:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:44:57 np0005535469 podman[448197]: 2025-11-25 17:44:57.928291382 +0000 UTC m=+0.044127721 container create 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:44:57 np0005535469 systemd[1]: Started libpod-conmon-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope.
Nov 25 12:44:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:44:58 np0005535469 podman[448197]: 2025-11-25 17:44:57.909248995 +0000 UTC m=+0.025085194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:44:58 np0005535469 podman[448197]: 2025-11-25 17:44:58.008334212 +0000 UTC m=+0.124170421 container init 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:44:58 np0005535469 podman[448197]: 2025-11-25 17:44:58.016963017 +0000 UTC m=+0.132799196 container start 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:44:58 np0005535469 podman[448197]: 2025-11-25 17:44:58.021108889 +0000 UTC m=+0.136945068 container attach 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:44:58 np0005535469 brave_panini[448213]: 167 167
Nov 25 12:44:58 np0005535469 systemd[1]: libpod-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope: Deactivated successfully.
Nov 25 12:44:58 np0005535469 conmon[448213]: conmon 2593a422c7066f242174 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope/container/memory.events
Nov 25 12:44:58 np0005535469 podman[448197]: 2025-11-25 17:44:58.024620756 +0000 UTC m=+0.140456935 container died 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:44:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-edfe1615ba6439a1fc6e7f2cd208bf68645044be5ef1975e01cb872be7ae97bc-merged.mount: Deactivated successfully.
Nov 25 12:44:58 np0005535469 podman[448197]: 2025-11-25 17:44:58.059066643 +0000 UTC m=+0.174902822 container remove 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:44:58 np0005535469 systemd[1]: libpod-conmon-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope: Deactivated successfully.
Nov 25 12:44:58 np0005535469 podman[448237]: 2025-11-25 17:44:58.209880119 +0000 UTC m=+0.037929144 container create d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:44:58 np0005535469 systemd[1]: Started libpod-conmon-d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8.scope.
Nov 25 12:44:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:44:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:44:58 np0005535469 podman[448237]: 2025-11-25 17:44:58.19339377 +0000 UTC m=+0.021442815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:44:58 np0005535469 podman[448237]: 2025-11-25 17:44:58.294430951 +0000 UTC m=+0.122480006 container init d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:44:58 np0005535469 podman[448237]: 2025-11-25 17:44:58.300065204 +0000 UTC m=+0.128114229 container start d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:44:58 np0005535469 podman[448237]: 2025-11-25 17:44:58.303811576 +0000 UTC m=+0.131860621 container attach d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]: {
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "osd_id": 1,
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "type": "bluestore"
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:    },
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "osd_id": 2,
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "type": "bluestore"
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:    },
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "osd_id": 0,
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:        "type": "bluestore"
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]:    }
Nov 25 12:44:59 np0005535469 pedantic_feynman[448253]: }
Nov 25 12:44:59 np0005535469 systemd[1]: libpod-d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8.scope: Deactivated successfully.
Nov 25 12:44:59 np0005535469 podman[448237]: 2025-11-25 17:44:59.230858933 +0000 UTC m=+1.058908028 container died d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:44:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa-merged.mount: Deactivated successfully.
Nov 25 12:44:59 np0005535469 podman[448237]: 2025-11-25 17:44:59.29350841 +0000 UTC m=+1.121557475 container remove d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:44:59 np0005535469 systemd[1]: libpod-conmon-d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8.scope: Deactivated successfully.
Nov 25 12:44:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:44:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:44:59 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:44:59 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:44:59 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2aa09f5b-4d81-41c4-a3c0-2d34b19e2dfb does not exist
Nov 25 12:44:59 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 6212ca05-2313-4833-a25c-0061e28f7dc3 does not exist
Nov 25 12:44:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:00 np0005535469 nova_compute[254092]: 2025-11-25 17:45:00.001 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.020560) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700020589, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 257, "total_data_size": 1017571, "memory_usage": 1033296, "flush_reason": "Manual Compaction"}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700028541, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1008201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75162, "largest_seqno": 75953, "table_properties": {"data_size": 1004124, "index_size": 1792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8854, "raw_average_key_size": 19, "raw_value_size": 995971, "raw_average_value_size": 2137, "num_data_blocks": 80, "num_entries": 466, "num_filter_entries": 466, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092630, "oldest_key_time": 1764092630, "file_creation_time": 1764092700, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 8023 microseconds, and 3787 cpu microseconds.
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.028581) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1008201 bytes OK
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.028598) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030043) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030058) EVENT_LOG_v1 {"time_micros": 1764092700030054, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1013570, prev total WAL file size 1013570, number of live WAL files 2.
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030747) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323637' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(984KB)], [176(9352KB)]
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700030775, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10585526, "oldest_snapshot_seqno": -1}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9107 keys, 10471828 bytes, temperature: kUnknown
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700100746, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10471828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10415099, "index_size": 32872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240822, "raw_average_key_size": 26, "raw_value_size": 10256460, "raw_average_value_size": 1126, "num_data_blocks": 1266, "num_entries": 9107, "num_filter_entries": 9107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092700, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.101062) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10471828 bytes
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.103019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 149.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.1 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(20.9) write-amplify(10.4) OK, records in: 9632, records dropped: 525 output_compression: NoCompression
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.103041) EVENT_LOG_v1 {"time_micros": 1764092700103031, "job": 110, "event": "compaction_finished", "compaction_time_micros": 70089, "compaction_time_cpu_micros": 31274, "output_level": 6, "num_output_files": 1, "total_output_size": 10471828, "num_input_records": 9632, "num_output_records": 9107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700103410, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700105629, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:45:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:45:01 np0005535469 nova_compute[254092]: 2025-11-25 17:45:01.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:03 np0005535469 nova_compute[254092]: 2025-11-25 17:45:03.519 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:04 np0005535469 nova_compute[254092]: 2025-11-25 17:45:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:05 np0005535469 nova_compute[254092]: 2025-11-25 17:45:05.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:06 np0005535469 nova_compute[254092]: 2025-11-25 17:45:06.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:45:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:45:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219470700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:45:07 np0005535469 nova_compute[254092]: 2025-11-25 17:45:07.992 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.131 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.132 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.133 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.205 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.206 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.234 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:45:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:45:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580759221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.686 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.692 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.708 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.709 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:45:08 np0005535469 nova_compute[254092]: 2025-11-25 17:45:08.710 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:45:09 np0005535469 nova_compute[254092]: 2025-11-25 17:45:09.710 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:09 np0005535469 nova_compute[254092]: 2025-11-25 17:45:09.711 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:09 np0005535469 nova_compute[254092]: 2025-11-25 17:45:09.711 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:45:09 np0005535469 nova_compute[254092]: 2025-11-25 17:45:09.711 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:45:09 np0005535469 nova_compute[254092]: 2025-11-25 17:45:09.733 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:45:09 np0005535469 nova_compute[254092]: 2025-11-25 17:45:09.733 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:10 np0005535469 nova_compute[254092]: 2025-11-25 17:45:10.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:45:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:45:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:45:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:45:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:45:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:45:10 np0005535469 nova_compute[254092]: 2025-11-25 17:45:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:10 np0005535469 nova_compute[254092]: 2025-11-25 17:45:10.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:45:11 np0005535469 nova_compute[254092]: 2025-11-25 17:45:11.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:11 np0005535469 nova_compute[254092]: 2025-11-25 17:45:11.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:12 np0005535469 nova_compute[254092]: 2025-11-25 17:45:12.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:45:13.688 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:45:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:45:13.689 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:45:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:45:13.689 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:45:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:15 np0005535469 nova_compute[254092]: 2025-11-25 17:45:15.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:16 np0005535469 nova_compute[254092]: 2025-11-25 17:45:16.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:18 np0005535469 podman[448399]: 2025-11-25 17:45:18.653812134 +0000 UTC m=+0.061531096 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 12:45:18 np0005535469 podman[448398]: 2025-11-25 17:45:18.657163355 +0000 UTC m=+0.068703751 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 12:45:18 np0005535469 podman[448400]: 2025-11-25 17:45:18.724508309 +0000 UTC m=+0.129078255 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 12:45:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:20 np0005535469 nova_compute[254092]: 2025-11-25 17:45:20.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:21 np0005535469 nova_compute[254092]: 2025-11-25 17:45:21.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:24 np0005535469 nova_compute[254092]: 2025-11-25 17:45:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:25 np0005535469 nova_compute[254092]: 2025-11-25 17:45:25.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:26 np0005535469 nova_compute[254092]: 2025-11-25 17:45:26.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:30 np0005535469 nova_compute[254092]: 2025-11-25 17:45:30.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:31 np0005535469 nova_compute[254092]: 2025-11-25 17:45:31.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:35 np0005535469 nova_compute[254092]: 2025-11-25 17:45:35.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:36 np0005535469 nova_compute[254092]: 2025-11-25 17:45:36.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:40 np0005535469 nova_compute[254092]: 2025-11-25 17:45:40.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:45:40
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:45:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:45:41 np0005535469 nova_compute[254092]: 2025-11-25 17:45:41.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:42 np0005535469 nova_compute[254092]: 2025-11-25 17:45:42.481 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:45:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:45 np0005535469 nova_compute[254092]: 2025-11-25 17:45:45.110 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:46 np0005535469 nova_compute[254092]: 2025-11-25 17:45:46.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:49 np0005535469 podman[448465]: 2025-11-25 17:45:49.662149814 +0000 UTC m=+0.068476056 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:45:49 np0005535469 podman[448464]: 2025-11-25 17:45:49.677475221 +0000 UTC m=+0.080314678 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 12:45:49 np0005535469 podman[448466]: 2025-11-25 17:45:49.693168048 +0000 UTC m=+0.089969670 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 12:45:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:50 np0005535469 nova_compute[254092]: 2025-11-25 17:45:50.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:51 np0005535469 nova_compute[254092]: 2025-11-25 17:45:51.487 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:45:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:45:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:45:55 np0005535469 nova_compute[254092]: 2025-11-25 17:45:55.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:45:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288394970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:45:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:45:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288394970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:45:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:56 np0005535469 nova_compute[254092]: 2025-11-25 17:45:56.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:45:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:45:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:00 np0005535469 nova_compute[254092]: 2025-11-25 17:46:00.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:46:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f1e2b986-f3a7-4f63-be78-035002d465cd does not exist
Nov 25 12:46:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 541fcc6d-1422-4491-9444-844ddaa6517b does not exist
Nov 25 12:46:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b6d82f61-5418-4bc5-a9e0-3f88d539f797 does not exist
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:46:00 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:46:01 np0005535469 podman[448799]: 2025-11-25 17:46:01.194737812 +0000 UTC m=+0.062295707 container create a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:46:01 np0005535469 systemd[1]: Started libpod-conmon-a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a.scope.
Nov 25 12:46:01 np0005535469 podman[448799]: 2025-11-25 17:46:01.165699252 +0000 UTC m=+0.033257177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:46:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:46:01 np0005535469 podman[448799]: 2025-11-25 17:46:01.295899106 +0000 UTC m=+0.163457061 container init a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:46:01 np0005535469 podman[448799]: 2025-11-25 17:46:01.305060886 +0000 UTC m=+0.172618751 container start a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:46:01 np0005535469 podman[448799]: 2025-11-25 17:46:01.31001827 +0000 UTC m=+0.177576185 container attach a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:46:01 np0005535469 sad_cohen[448816]: 167 167
Nov 25 12:46:01 np0005535469 systemd[1]: libpod-a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a.scope: Deactivated successfully.
Nov 25 12:46:01 np0005535469 podman[448821]: 2025-11-25 17:46:01.365932672 +0000 UTC m=+0.035500417 container died a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:46:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-080eaaa242137c6ff50ea1aa64b081653f61341fd35f14dd9088a56d53efe11a-merged.mount: Deactivated successfully.
Nov 25 12:46:01 np0005535469 podman[448821]: 2025-11-25 17:46:01.417196008 +0000 UTC m=+0.086763703 container remove a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:46:01 np0005535469 systemd[1]: libpod-conmon-a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a.scope: Deactivated successfully.
Nov 25 12:46:01 np0005535469 nova_compute[254092]: 2025-11-25 17:46:01.491 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:01 np0005535469 podman[448843]: 2025-11-25 17:46:01.662128526 +0000 UTC m=+0.066022948 container create 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:46:01 np0005535469 systemd[1]: Started libpod-conmon-8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b.scope.
Nov 25 12:46:01 np0005535469 podman[448843]: 2025-11-25 17:46:01.639692835 +0000 UTC m=+0.043587237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:46:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:46:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:01 np0005535469 podman[448843]: 2025-11-25 17:46:01.765178172 +0000 UTC m=+0.169072574 container init 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:46:01 np0005535469 podman[448843]: 2025-11-25 17:46:01.772787619 +0000 UTC m=+0.176682001 container start 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 12:46:01 np0005535469 podman[448843]: 2025-11-25 17:46:01.77576217 +0000 UTC m=+0.179656572 container attach 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:46:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:02 np0005535469 romantic_lalande[448860]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:46:02 np0005535469 romantic_lalande[448860]: --> relative data size: 1.0
Nov 25 12:46:02 np0005535469 romantic_lalande[448860]: --> All data devices are unavailable
Nov 25 12:46:02 np0005535469 systemd[1]: libpod-8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b.scope: Deactivated successfully.
Nov 25 12:46:02 np0005535469 podman[448889]: 2025-11-25 17:46:02.842602473 +0000 UTC m=+0.026220465 container died 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:46:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272-merged.mount: Deactivated successfully.
Nov 25 12:46:02 np0005535469 podman[448889]: 2025-11-25 17:46:02.889681464 +0000 UTC m=+0.073299456 container remove 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:46:02 np0005535469 systemd[1]: libpod-conmon-8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b.scope: Deactivated successfully.
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.498563681 +0000 UTC m=+0.034057028 container create 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:46:03 np0005535469 systemd[1]: Started libpod-conmon-27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860.scope.
Nov 25 12:46:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.57276296 +0000 UTC m=+0.108256337 container init 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.578660781 +0000 UTC m=+0.114154128 container start 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.484516568 +0000 UTC m=+0.020009935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.582202768 +0000 UTC m=+0.117696145 container attach 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:46:03 np0005535469 practical_shockley[449062]: 167 167
Nov 25 12:46:03 np0005535469 systemd[1]: libpod-27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860.scope: Deactivated successfully.
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.587767039 +0000 UTC m=+0.123260386 container died 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:46:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ec403ad57cbdb1156dd9238050310db55016b60c1d0d5d24e3e888a21c39a708-merged.mount: Deactivated successfully.
Nov 25 12:46:03 np0005535469 podman[449046]: 2025-11-25 17:46:03.620363666 +0000 UTC m=+0.155857013 container remove 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:46:03 np0005535469 systemd[1]: libpod-conmon-27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860.scope: Deactivated successfully.
Nov 25 12:46:03 np0005535469 podman[449087]: 2025-11-25 17:46:03.775293534 +0000 UTC m=+0.044505272 container create cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:46:03 np0005535469 systemd[1]: Started libpod-conmon-cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9.scope.
Nov 25 12:46:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:46:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:03 np0005535469 podman[449087]: 2025-11-25 17:46:03.757567451 +0000 UTC m=+0.026779219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:46:03 np0005535469 podman[449087]: 2025-11-25 17:46:03.866116037 +0000 UTC m=+0.135327805 container init cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 12:46:03 np0005535469 podman[449087]: 2025-11-25 17:46:03.873169928 +0000 UTC m=+0.142381656 container start cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:46:03 np0005535469 podman[449087]: 2025-11-25 17:46:03.876415077 +0000 UTC m=+0.145626845 container attach cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:46:04 np0005535469 nova_compute[254092]: 2025-11-25 17:46:04.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:04 np0005535469 nova_compute[254092]: 2025-11-25 17:46:04.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]: {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:    "0": [
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:        {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "devices": [
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "/dev/loop3"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            ],
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_name": "ceph_lv0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_size": "21470642176",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "name": "ceph_lv0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "tags": {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cluster_name": "ceph",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.crush_device_class": "",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.encrypted": "0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osd_id": "0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.type": "block",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.vdo": "0"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            },
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "type": "block",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "vg_name": "ceph_vg0"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:        }
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:    ],
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:    "1": [
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:        {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "devices": [
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "/dev/loop4"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            ],
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_name": "ceph_lv1",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_size": "21470642176",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "name": "ceph_lv1",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "tags": {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cluster_name": "ceph",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.crush_device_class": "",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.encrypted": "0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osd_id": "1",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.type": "block",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.vdo": "0"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            },
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "type": "block",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "vg_name": "ceph_vg1"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:        }
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:    ],
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:    "2": [
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:        {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "devices": [
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "/dev/loop5"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            ],
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_name": "ceph_lv2",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_size": "21470642176",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "name": "ceph_lv2",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "tags": {
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.cluster_name": "ceph",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.crush_device_class": "",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.encrypted": "0",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osd_id": "2",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.type": "block",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:                "ceph.vdo": "0"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            },
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "type": "block",
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:            "vg_name": "ceph_vg2"
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:        }
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]:    ]
Nov 25 12:46:04 np0005535469 pedantic_mestorf[449104]: }
Nov 25 12:46:04 np0005535469 systemd[1]: libpod-cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9.scope: Deactivated successfully.
Nov 25 12:46:04 np0005535469 podman[449087]: 2025-11-25 17:46:04.664253235 +0000 UTC m=+0.933464973 container died cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:46:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927-merged.mount: Deactivated successfully.
Nov 25 12:46:04 np0005535469 podman[449087]: 2025-11-25 17:46:04.73019335 +0000 UTC m=+0.999405078 container remove cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:46:04 np0005535469 systemd[1]: libpod-conmon-cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9.scope: Deactivated successfully.
Nov 25 12:46:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:05 np0005535469 nova_compute[254092]: 2025-11-25 17:46:05.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.344565606 +0000 UTC m=+0.039397474 container create dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 12:46:05 np0005535469 systemd[1]: Started libpod-conmon-dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041.scope.
Nov 25 12:46:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.327023507 +0000 UTC m=+0.021855385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.429782565 +0000 UTC m=+0.124614443 container init dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.443112478 +0000 UTC m=+0.137944336 container start dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.44611591 +0000 UTC m=+0.140947798 container attach dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:46:05 np0005535469 keen_allen[449281]: 167 167
Nov 25 12:46:05 np0005535469 systemd[1]: libpod-dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041.scope: Deactivated successfully.
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.451797975 +0000 UTC m=+0.146629833 container died dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:46:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-03b3c634f2e529c501e6072f9d14cafd97886b0df736d95b375dd48689bf3d18-merged.mount: Deactivated successfully.
Nov 25 12:46:05 np0005535469 podman[449264]: 2025-11-25 17:46:05.484933637 +0000 UTC m=+0.179765495 container remove dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:46:05 np0005535469 systemd[1]: libpod-conmon-dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041.scope: Deactivated successfully.
Nov 25 12:46:05 np0005535469 podman[449306]: 2025-11-25 17:46:05.642786264 +0000 UTC m=+0.038834509 container create c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:46:05 np0005535469 systemd[1]: Started libpod-conmon-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope.
Nov 25 12:46:05 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:46:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:05 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:46:05 np0005535469 podman[449306]: 2025-11-25 17:46:05.720006766 +0000 UTC m=+0.116055051 container init c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:46:05 np0005535469 podman[449306]: 2025-11-25 17:46:05.626941932 +0000 UTC m=+0.022990207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:46:05 np0005535469 podman[449306]: 2025-11-25 17:46:05.726791161 +0000 UTC m=+0.122839406 container start c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:46:05 np0005535469 podman[449306]: 2025-11-25 17:46:05.730355727 +0000 UTC m=+0.126403972 container attach c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:46:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 12:46:06 np0005535469 nova_compute[254092]: 2025-11-25 17:46:06.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]: {
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "osd_id": 1,
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "type": "bluestore"
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:    },
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "osd_id": 2,
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "type": "bluestore"
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:    },
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "osd_id": 0,
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:        "type": "bluestore"
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]:    }
Nov 25 12:46:06 np0005535469 distracted_darwin[449323]: }
Nov 25 12:46:06 np0005535469 systemd[1]: libpod-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope: Deactivated successfully.
Nov 25 12:46:06 np0005535469 conmon[449323]: conmon c18dd3f858f0c88c5f25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope/container/memory.events
Nov 25 12:46:06 np0005535469 podman[449306]: 2025-11-25 17:46:06.690926358 +0000 UTC m=+1.086974623 container died c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:46:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b-merged.mount: Deactivated successfully.
Nov 25 12:46:06 np0005535469 podman[449306]: 2025-11-25 17:46:06.750942912 +0000 UTC m=+1.146991167 container remove c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:46:06 np0005535469 systemd[1]: libpod-conmon-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope: Deactivated successfully.
Nov 25 12:46:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:46:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:46:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:46:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:46:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b61c370f-ccf2-42d8-a0ea-1afaa1e6fca3 does not exist
Nov 25 12:46:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cc0f6696-ad5e-4615-8473-e9dcd1dba29b does not exist
Nov 25 12:46:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:46:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:46:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 12:46:08 np0005535469 nova_compute[254092]: 2025-11-25 17:46:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:46:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 12:46:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:46:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297102051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:46:09 np0005535469 nova_compute[254092]: 2025-11-25 17:46:09.992 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:46:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:46:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:46:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:46:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:46:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:46:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.163 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.164 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3567MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.568 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.569 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.712 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.813 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.814 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.829 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.873 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:46:10 np0005535469 nova_compute[254092]: 2025-11-25 17:46:10.890 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:46:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:46:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469955822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:46:11 np0005535469 nova_compute[254092]: 2025-11-25 17:46:11.304 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:46:11 np0005535469 nova_compute[254092]: 2025-11-25 17:46:11.312 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:46:11 np0005535469 nova_compute[254092]: 2025-11-25 17:46:11.332 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:46:11 np0005535469 nova_compute[254092]: 2025-11-25 17:46:11.335 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:46:11 np0005535469 nova_compute[254092]: 2025-11-25 17:46:11.336 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:46:11 np0005535469 nova_compute[254092]: 2025-11-25 17:46:11.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:13 np0005535469 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:46:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:46:13.690 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:46:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:46:13.691 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:46:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:46:13.691 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:46:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 12:46:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:15 np0005535469 nova_compute[254092]: 2025-11-25 17:46:15.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 12:46:16 np0005535469 nova_compute[254092]: 2025-11-25 17:46:16.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 12:46:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Nov 25 12:46:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:20 np0005535469 nova_compute[254092]: 2025-11-25 17:46:20.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:20 np0005535469 podman[449463]: 2025-11-25 17:46:20.703373969 +0000 UTC m=+0.110447538 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 12:46:20 np0005535469 podman[449462]: 2025-11-25 17:46:20.706184185 +0000 UTC m=+0.120047449 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:46:20 np0005535469 podman[449464]: 2025-11-25 17:46:20.721058811 +0000 UTC m=+0.131493361 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 12:46:21 np0005535469 nova_compute[254092]: 2025-11-25 17:46:21.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 25 12:46:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 12:46:24 np0005535469 nova_compute[254092]: 2025-11-25 17:46:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:24 np0005535469 nova_compute[254092]: 2025-11-25 17:46:24.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:46:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:25 np0005535469 nova_compute[254092]: 2025-11-25 17:46:25.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 12:46:26 np0005535469 nova_compute[254092]: 2025-11-25 17:46:26.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 12:46:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 12:46:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:30 np0005535469 nova_compute[254092]: 2025-11-25 17:46:30.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:31 np0005535469 nova_compute[254092]: 2025-11-25 17:46:31.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 12:46:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:35 np0005535469 nova_compute[254092]: 2025-11-25 17:46:35.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:36 np0005535469 nova_compute[254092]: 2025-11-25 17:46:36.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:46:40 np0005535469 nova_compute[254092]: 2025-11-25 17:46:40.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:46:40
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:46:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:46:41 np0005535469 nova_compute[254092]: 2025-11-25 17:46:41.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:45 np0005535469 nova_compute[254092]: 2025-11-25 17:46:45.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:46 np0005535469 nova_compute[254092]: 2025-11-25 17:46:46.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:50 np0005535469 nova_compute[254092]: 2025-11-25 17:46:50.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:51 np0005535469 nova_compute[254092]: 2025-11-25 17:46:51.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:51 np0005535469 podman[449528]: 2025-11-25 17:46:51.630316424 +0000 UTC m=+0.047828963 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:46:51 np0005535469 podman[449527]: 2025-11-25 17:46:51.641663203 +0000 UTC m=+0.058638587 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 12:46:51 np0005535469 podman[449529]: 2025-11-25 17:46:51.662520301 +0000 UTC m=+0.077347157 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:46:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:46:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:46:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:46:55 np0005535469 nova_compute[254092]: 2025-11-25 17:46:55.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:46:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611584250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:46:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:46:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611584250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:46:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:56 np0005535469 nova_compute[254092]: 2025-11-25 17:46:56.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:46:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:46:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:00 np0005535469 nova_compute[254092]: 2025-11-25 17:47:00.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:01 np0005535469 nova_compute[254092]: 2025-11-25 17:47:01.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:05 np0005535469 nova_compute[254092]: 2025-11-25 17:47:05.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:05 np0005535469 nova_compute[254092]: 2025-11-25 17:47:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:06 np0005535469 nova_compute[254092]: 2025-11-25 17:47:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:06 np0005535469 nova_compute[254092]: 2025-11-25 17:47:06.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:47:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:47:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b1db504f-7a6d-4470-a14e-85ebc878885a does not exist
Nov 25 12:47:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7ac9cd6d-da4d-41db-9e2a-4e9c869e908d does not exist
Nov 25 12:47:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7e2f61e3-f1fc-473b-bc1f-3d11e973cf1b does not exist
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:08 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:47:08 np0005535469 nova_compute[254092]: 2025-11-25 17:47:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.718150617 +0000 UTC m=+0.037338457 container create f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:47:08 np0005535469 systemd[1]: Started libpod-conmon-f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb.scope.
Nov 25 12:47:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.702074479 +0000 UTC m=+0.021262329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.801939818 +0000 UTC m=+0.121127678 container init f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.810144622 +0000 UTC m=+0.129332452 container start f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.813161374 +0000 UTC m=+0.132349234 container attach f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 12:47:08 np0005535469 keen_shamir[449999]: 167 167
Nov 25 12:47:08 np0005535469 systemd[1]: libpod-f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb.scope: Deactivated successfully.
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.817114071 +0000 UTC m=+0.136301901 container died f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:47:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-91df52b1c3815759c4233039899493e820976199120ba73d7398cf89a505a487-merged.mount: Deactivated successfully.
Nov 25 12:47:08 np0005535469 podman[449983]: 2025-11-25 17:47:08.860085742 +0000 UTC m=+0.179273572 container remove f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:47:08 np0005535469 systemd[1]: libpod-conmon-f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb.scope: Deactivated successfully.
Nov 25 12:47:09 np0005535469 podman[450022]: 2025-11-25 17:47:09.088631723 +0000 UTC m=+0.064548468 container create 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:47:09 np0005535469 systemd[1]: Started libpod-conmon-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope.
Nov 25 12:47:09 np0005535469 podman[450022]: 2025-11-25 17:47:09.060322132 +0000 UTC m=+0.036238897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:47:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:47:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:09 np0005535469 podman[450022]: 2025-11-25 17:47:09.200589051 +0000 UTC m=+0.176505766 container init 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:47:09 np0005535469 podman[450022]: 2025-11-25 17:47:09.213559004 +0000 UTC m=+0.189475719 container start 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:47:09 np0005535469 podman[450022]: 2025-11-25 17:47:09.216090013 +0000 UTC m=+0.192006738 container attach 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 12:47:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:47:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:10 np0005535469 gifted_bell[450039]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:47:10 np0005535469 gifted_bell[450039]: --> relative data size: 1.0
Nov 25 12:47:10 np0005535469 gifted_bell[450039]: --> All data devices are unavailable
Nov 25 12:47:10 np0005535469 systemd[1]: libpod-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope: Deactivated successfully.
Nov 25 12:47:10 np0005535469 systemd[1]: libpod-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope: Consumed 1.083s CPU time.
Nov 25 12:47:10 np0005535469 podman[450022]: 2025-11-25 17:47:10.345061158 +0000 UTC m=+1.320977883 container died 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:47:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80-merged.mount: Deactivated successfully.
Nov 25 12:47:10 np0005535469 podman[450022]: 2025-11-25 17:47:10.408458143 +0000 UTC m=+1.384374868 container remove 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:47:10 np0005535469 systemd[1]: libpod-conmon-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope: Deactivated successfully.
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.514 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.515 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.516 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:47:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:47:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997106820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:47:10 np0005535469 nova_compute[254092]: 2025-11-25 17:47:10.950 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.027201908 +0000 UTC m=+0.041215464 container create a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:47:11 np0005535469 systemd[1]: Started libpod-conmon-a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2.scope.
Nov 25 12:47:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.097975704 +0000 UTC m=+0.111989280 container init a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.104608115 +0000 UTC m=+0.118621671 container start a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.009931128 +0000 UTC m=+0.023944704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.107433062 +0000 UTC m=+0.121446638 container attach a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:47:11 np0005535469 sharp_shaw[450260]: 167 167
Nov 25 12:47:11 np0005535469 systemd[1]: libpod-a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2.scope: Deactivated successfully.
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.110455714 +0000 UTC m=+0.124469270 container died a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.118 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.119 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.119 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.119 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:47:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a8be0076bf57f2497660d676c85e141b1ee5ba13fb42c65195e90d347a2c16be-merged.mount: Deactivated successfully.
Nov 25 12:47:11 np0005535469 podman[450244]: 2025-11-25 17:47:11.144333487 +0000 UTC m=+0.158347033 container remove a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:47:11 np0005535469 systemd[1]: libpod-conmon-a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2.scope: Deactivated successfully.
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.200 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.228 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:47:11 np0005535469 podman[450285]: 2025-11-25 17:47:11.295574334 +0000 UTC m=+0.038893470 container create 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:47:11 np0005535469 systemd[1]: Started libpod-conmon-77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367.scope.
Nov 25 12:47:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:47:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:11 np0005535469 podman[450285]: 2025-11-25 17:47:11.280479943 +0000 UTC m=+0.023799099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:47:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:11 np0005535469 podman[450285]: 2025-11-25 17:47:11.385227315 +0000 UTC m=+0.128546461 container init 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:47:11 np0005535469 podman[450285]: 2025-11-25 17:47:11.391978548 +0000 UTC m=+0.135297684 container start 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:47:11 np0005535469 podman[450285]: 2025-11-25 17:47:11.395037021 +0000 UTC m=+0.138356157 container attach 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:47:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:47:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326731643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.695 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.702 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.723 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.726 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:47:11 np0005535469 nova_compute[254092]: 2025-11-25 17:47:11.726 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:47:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]: {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:    "0": [
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:        {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "devices": [
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "/dev/loop3"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            ],
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_name": "ceph_lv0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_size": "21470642176",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "name": "ceph_lv0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "tags": {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cluster_name": "ceph",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.crush_device_class": "",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.encrypted": "0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osd_id": "0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.type": "block",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.vdo": "0"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            },
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "type": "block",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "vg_name": "ceph_vg0"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:        }
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:    ],
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:    "1": [
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:        {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "devices": [
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "/dev/loop4"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            ],
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_name": "ceph_lv1",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_size": "21470642176",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "name": "ceph_lv1",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "tags": {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cluster_name": "ceph",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.crush_device_class": "",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.encrypted": "0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osd_id": "1",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.type": "block",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.vdo": "0"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            },
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "type": "block",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "vg_name": "ceph_vg1"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:        }
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:    ],
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:    "2": [
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:        {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "devices": [
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "/dev/loop5"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            ],
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_name": "ceph_lv2",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_size": "21470642176",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "name": "ceph_lv2",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "tags": {
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.cluster_name": "ceph",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.crush_device_class": "",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.encrypted": "0",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osd_id": "2",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.type": "block",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:                "ceph.vdo": "0"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            },
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "type": "block",
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:            "vg_name": "ceph_vg2"
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:        }
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]:    ]
Nov 25 12:47:12 np0005535469 lucid_hopper[450301]: }
Nov 25 12:47:12 np0005535469 systemd[1]: libpod-77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367.scope: Deactivated successfully.
Nov 25 12:47:12 np0005535469 podman[450285]: 2025-11-25 17:47:12.160058549 +0000 UTC m=+0.903377725 container died 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:47:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda-merged.mount: Deactivated successfully.
Nov 25 12:47:12 np0005535469 podman[450285]: 2025-11-25 17:47:12.223174776 +0000 UTC m=+0.966493922 container remove 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:47:12 np0005535469 systemd[1]: libpod-conmon-77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367.scope: Deactivated successfully.
Nov 25 12:47:12 np0005535469 podman[450489]: 2025-11-25 17:47:12.99321826 +0000 UTC m=+0.063495530 container create 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:47:13 np0005535469 systemd[1]: Started libpod-conmon-0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d.scope.
Nov 25 12:47:13 np0005535469 podman[450489]: 2025-11-25 17:47:12.963756997 +0000 UTC m=+0.034034307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:47:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:47:13 np0005535469 podman[450489]: 2025-11-25 17:47:13.097957831 +0000 UTC m=+0.168235091 container init 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:47:13 np0005535469 podman[450489]: 2025-11-25 17:47:13.111601472 +0000 UTC m=+0.181878712 container start 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:47:13 np0005535469 podman[450489]: 2025-11-25 17:47:13.114902972 +0000 UTC m=+0.185180272 container attach 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:47:13 np0005535469 compassionate_jones[450505]: 167 167
Nov 25 12:47:13 np0005535469 systemd[1]: libpod-0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d.scope: Deactivated successfully.
Nov 25 12:47:13 np0005535469 podman[450489]: 2025-11-25 17:47:13.121439731 +0000 UTC m=+0.191717061 container died 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:47:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4e8188b24c276f86b20bb3224f9bf5c976c9e28691d471b6e6187502fa15c49a-merged.mount: Deactivated successfully.
Nov 25 12:47:13 np0005535469 podman[450489]: 2025-11-25 17:47:13.158610392 +0000 UTC m=+0.228887632 container remove 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:47:13 np0005535469 systemd[1]: libpod-conmon-0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d.scope: Deactivated successfully.
Nov 25 12:47:13 np0005535469 podman[450528]: 2025-11-25 17:47:13.31869451 +0000 UTC m=+0.045510349 container create df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:47:13 np0005535469 systemd[1]: Started libpod-conmon-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope.
Nov 25 12:47:13 np0005535469 podman[450528]: 2025-11-25 17:47:13.296668151 +0000 UTC m=+0.023484000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:47:13 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:47:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:13 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:47:13 np0005535469 podman[450528]: 2025-11-25 17:47:13.414311993 +0000 UTC m=+0.141127822 container init df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:47:13 np0005535469 podman[450528]: 2025-11-25 17:47:13.422202028 +0000 UTC m=+0.149017877 container start df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:47:13 np0005535469 podman[450528]: 2025-11-25 17:47:13.426215888 +0000 UTC m=+0.153031727 container attach df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:47:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:47:13.691 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:47:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:47:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:47:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:47:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.727 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.728 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.744 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.744 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.745 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:13 np0005535469 nova_compute[254092]: 2025-11-25 17:47:13.745 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:47:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:14 np0005535469 competent_gates[450545]: {
Nov 25 12:47:14 np0005535469 competent_gates[450545]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "osd_id": 1,
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "type": "bluestore"
Nov 25 12:47:14 np0005535469 competent_gates[450545]:    },
Nov 25 12:47:14 np0005535469 competent_gates[450545]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "osd_id": 2,
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "type": "bluestore"
Nov 25 12:47:14 np0005535469 competent_gates[450545]:    },
Nov 25 12:47:14 np0005535469 competent_gates[450545]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "osd_id": 0,
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:47:14 np0005535469 competent_gates[450545]:        "type": "bluestore"
Nov 25 12:47:14 np0005535469 competent_gates[450545]:    }
Nov 25 12:47:14 np0005535469 competent_gates[450545]: }
Nov 25 12:47:14 np0005535469 systemd[1]: libpod-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope: Deactivated successfully.
Nov 25 12:47:14 np0005535469 systemd[1]: libpod-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope: Consumed 1.116s CPU time.
Nov 25 12:47:14 np0005535469 podman[450528]: 2025-11-25 17:47:14.531437956 +0000 UTC m=+1.258253765 container died df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:47:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595-merged.mount: Deactivated successfully.
Nov 25 12:47:14 np0005535469 podman[450528]: 2025-11-25 17:47:14.589486846 +0000 UTC m=+1.316302655 container remove df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:47:14 np0005535469 systemd[1]: libpod-conmon-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope: Deactivated successfully.
Nov 25 12:47:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:47:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:47:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e1415b1f-4488-4985-a6c3-429f2157204b does not exist
Nov 25 12:47:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 575208ef-5f7f-4f47-8d0b-2663958eb69d does not exist
Nov 25 12:47:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:14 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:47:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:15 np0005535469 nova_compute[254092]: 2025-11-25 17:47:15.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:16 np0005535469 nova_compute[254092]: 2025-11-25 17:47:16.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:20 np0005535469 nova_compute[254092]: 2025-11-25 17:47:20.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:21 np0005535469 nova_compute[254092]: 2025-11-25 17:47:21.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:22 np0005535469 podman[450642]: 2025-11-25 17:47:22.696133009 +0000 UTC m=+0.094662959 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 12:47:22 np0005535469 podman[450641]: 2025-11-25 17:47:22.734197074 +0000 UTC m=+0.132332963 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 25 12:47:22 np0005535469 podman[450643]: 2025-11-25 17:47:22.781123022 +0000 UTC m=+0.170350539 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 12:47:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:24 np0005535469 nova_compute[254092]: 2025-11-25 17:47:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:47:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:25 np0005535469 nova_compute[254092]: 2025-11-25 17:47:25.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:26 np0005535469 nova_compute[254092]: 2025-11-25 17:47:26.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:30 np0005535469 nova_compute[254092]: 2025-11-25 17:47:30.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:31 np0005535469 nova_compute[254092]: 2025-11-25 17:47:31.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:35 np0005535469 nova_compute[254092]: 2025-11-25 17:47:35.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:36 np0005535469 nova_compute[254092]: 2025-11-25 17:47:36.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:47:40
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:47:40 np0005535469 nova_compute[254092]: 2025-11-25 17:47:40.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:47:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:47:41 np0005535469 nova_compute[254092]: 2025-11-25 17:47:41.623 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.965890) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092861965915, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1511, "num_deletes": 251, "total_data_size": 2459817, "memory_usage": 2506864, "flush_reason": "Manual Compaction"}
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092861986725, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 2414762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75954, "largest_seqno": 77464, "table_properties": {"data_size": 2407609, "index_size": 4223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14477, "raw_average_key_size": 19, "raw_value_size": 2393460, "raw_average_value_size": 3292, "num_data_blocks": 189, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092700, "oldest_key_time": 1764092700, "file_creation_time": 1764092861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 20866 microseconds, and 4863 cpu microseconds.
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.986754) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 2414762 bytes OK
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.986770) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.988330) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.988378) EVENT_LOG_v1 {"time_micros": 1764092861988368, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.988405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2453213, prev total WAL file size 2453213, number of live WAL files 2.
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.989173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(2358KB)], [179(10226KB)]
Nov 25 12:47:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092861989202, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 12886590, "oldest_snapshot_seqno": -1}
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9320 keys, 11141598 bytes, temperature: kUnknown
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092862058402, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 11141598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11082692, "index_size": 34473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 245872, "raw_average_key_size": 26, "raw_value_size": 10919605, "raw_average_value_size": 1171, "num_data_blocks": 1328, "num_entries": 9320, "num_filter_entries": 9320, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.058717) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 11141598 bytes
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.059824) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.0 rd, 160.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 9834, records dropped: 514 output_compression: NoCompression
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.059844) EVENT_LOG_v1 {"time_micros": 1764092862059834, "job": 112, "event": "compaction_finished", "compaction_time_micros": 69283, "compaction_time_cpu_micros": 24344, "output_level": 6, "num_output_files": 1, "total_output_size": 11141598, "num_input_records": 9834, "num_output_records": 9320, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092862060441, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092862062946, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.989114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:47:42 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:47:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:45 np0005535469 nova_compute[254092]: 2025-11-25 17:47:45.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:46 np0005535469 nova_compute[254092]: 2025-11-25 17:47:46.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:50 np0005535469 nova_compute[254092]: 2025-11-25 17:47:50.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:51 np0005535469 nova_compute[254092]: 2025-11-25 17:47:51.628 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:47:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:47:53 np0005535469 podman[450705]: 2025-11-25 17:47:53.627821062 +0000 UTC m=+0.050124446 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:47:53 np0005535469 podman[450706]: 2025-11-25 17:47:53.653779208 +0000 UTC m=+0.071798475 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:47:53 np0005535469 podman[450704]: 2025-11-25 17:47:53.658306172 +0000 UTC m=+0.081219912 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 12:47:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:47:55 np0005535469 nova_compute[254092]: 2025-11-25 17:47:55.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:47:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/165510286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:47:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:47:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/165510286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:47:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:56 np0005535469 nova_compute[254092]: 2025-11-25 17:47:56.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:47:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:47:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:00 np0005535469 nova_compute[254092]: 2025-11-25 17:48:00.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:01 np0005535469 nova_compute[254092]: 2025-11-25 17:48:01.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:05 np0005535469 nova_compute[254092]: 2025-11-25 17:48:05.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:06 np0005535469 nova_compute[254092]: 2025-11-25 17:48:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:48:06 np0005535469 nova_compute[254092]: 2025-11-25 17:48:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:48:06 np0005535469 nova_compute[254092]: 2025-11-25 17:48:06.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:08 np0005535469 nova_compute[254092]: 2025-11-25 17:48:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:48:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:48:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:48:10 np0005535469 nova_compute[254092]: 2025-11-25 17:48:10.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:48:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970592892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:48:11 np0005535469 nova_compute[254092]: 2025-11-25 17:48:11.956 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.107 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.109 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.109 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.109 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.179 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.180 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.204 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:48:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:48:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145851242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.645 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.650 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.666 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.668 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:48:12 np0005535469 nova_compute[254092]: 2025-11-25 17:48:12.668 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:48:13 np0005535469 nova_compute[254092]: 2025-11-25 17:48:13.671 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:48:13 np0005535469 nova_compute[254092]: 2025-11-25 17:48:13.674 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 12:48:13 np0005535469 nova_compute[254092]: 2025-11-25 17:48:13.674 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 12:48:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:48:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:48:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:48:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:48:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:48:13.693 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:48:13 np0005535469 nova_compute[254092]: 2025-11-25 17:48:13.693 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 12:48:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:14 np0005535469 nova_compute[254092]: 2025-11-25 17:48:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:48:14 np0005535469 nova_compute[254092]: 2025-11-25 17:48:14.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:15 np0005535469 nova_compute[254092]: 2025-11-25 17:48:15.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:48:15 np0005535469 nova_compute[254092]: 2025-11-25 17:48:15.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:48:15 np0005535469 nova_compute[254092]: 2025-11-25 17:48:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:48:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev cab5ad44-2f94-46eb-8701-da449b9fc31d does not exist
Nov 25 12:48:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 146c542f-cccc-4589-88d5-49e3d914613a does not exist
Nov 25 12:48:15 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b94e63db-20cb-4aa5-90a9-70d5767d94be does not exist
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:48:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:48:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:16.010269505 +0000 UTC m=+0.038722455 container create 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:48:16 np0005535469 systemd[1]: Started libpod-conmon-44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540.scope.
Nov 25 12:48:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:15.992837031 +0000 UTC m=+0.021290001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:16.090461819 +0000 UTC m=+0.118914789 container init 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:16.098450795 +0000 UTC m=+0.126903745 container start 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:16.101683754 +0000 UTC m=+0.130136704 container attach 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:48:16 np0005535469 sweet_elgamal[451099]: 167 167
Nov 25 12:48:16 np0005535469 systemd[1]: libpod-44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540.scope: Deactivated successfully.
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:16.103045221 +0000 UTC m=+0.131498171 container died 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:48:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-52f2693ab2d10e43b121ac6a83abf5546d507783652d991f55e84fda94ef3eec-merged.mount: Deactivated successfully.
Nov 25 12:48:16 np0005535469 podman[451083]: 2025-11-25 17:48:16.148009825 +0000 UTC m=+0.176462775 container remove 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:48:16 np0005535469 systemd[1]: libpod-conmon-44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540.scope: Deactivated successfully.
Nov 25 12:48:16 np0005535469 podman[451123]: 2025-11-25 17:48:16.314534359 +0000 UTC m=+0.044559544 container create b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:48:16 np0005535469 systemd[1]: Started libpod-conmon-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope.
Nov 25 12:48:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:48:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:16 np0005535469 podman[451123]: 2025-11-25 17:48:16.297831794 +0000 UTC m=+0.027856999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:48:16 np0005535469 podman[451123]: 2025-11-25 17:48:16.395111352 +0000 UTC m=+0.125136537 container init b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:48:16 np0005535469 podman[451123]: 2025-11-25 17:48:16.403002777 +0000 UTC m=+0.133027962 container start b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:48:16 np0005535469 podman[451123]: 2025-11-25 17:48:16.406286596 +0000 UTC m=+0.136311781 container attach b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:48:16 np0005535469 nova_compute[254092]: 2025-11-25 17:48:16.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:48:17 np0005535469 zen_euler[451139]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:48:17 np0005535469 zen_euler[451139]: --> relative data size: 1.0
Nov 25 12:48:17 np0005535469 zen_euler[451139]: --> All data devices are unavailable
Nov 25 12:48:17 np0005535469 systemd[1]: libpod-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope: Deactivated successfully.
Nov 25 12:48:17 np0005535469 systemd[1]: libpod-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope: Consumed 1.010s CPU time.
Nov 25 12:48:17 np0005535469 podman[451123]: 2025-11-25 17:48:17.463909229 +0000 UTC m=+1.193934404 container died b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:48:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22-merged.mount: Deactivated successfully.
Nov 25 12:48:17 np0005535469 podman[451123]: 2025-11-25 17:48:17.522216296 +0000 UTC m=+1.252241481 container remove b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 12:48:17 np0005535469 systemd[1]: libpod-conmon-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope: Deactivated successfully.
Nov 25 12:48:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.072457235 +0000 UTC m=+0.038861039 container create 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:48:18 np0005535469 systemd[1]: Started libpod-conmon-95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378.scope.
Nov 25 12:48:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.057304823 +0000 UTC m=+0.023708647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.153565473 +0000 UTC m=+0.119969297 container init 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.160374488 +0000 UTC m=+0.126778292 container start 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.164211983 +0000 UTC m=+0.130615787 container attach 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:48:18 np0005535469 intelligent_montalcini[451335]: 167 167
Nov 25 12:48:18 np0005535469 systemd[1]: libpod-95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378.scope: Deactivated successfully.
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.165056496 +0000 UTC m=+0.131460300 container died 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:48:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1437d73c5febeca8b2a1922bb5256341cbabc0a2dbc66a06915b3e3d3e29dc70-merged.mount: Deactivated successfully.
Nov 25 12:48:18 np0005535469 podman[451319]: 2025-11-25 17:48:18.199576426 +0000 UTC m=+0.165980230 container remove 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 12:48:18 np0005535469 systemd[1]: libpod-conmon-95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378.scope: Deactivated successfully.
Nov 25 12:48:18 np0005535469 podman[451359]: 2025-11-25 17:48:18.344170012 +0000 UTC m=+0.037599484 container create a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:48:18 np0005535469 systemd[1]: Started libpod-conmon-a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19.scope.
Nov 25 12:48:18 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:48:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:18 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:18 np0005535469 podman[451359]: 2025-11-25 17:48:18.328394692 +0000 UTC m=+0.021824214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:48:18 np0005535469 podman[451359]: 2025-11-25 17:48:18.437892813 +0000 UTC m=+0.131322345 container init a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:48:18 np0005535469 podman[451359]: 2025-11-25 17:48:18.450274001 +0000 UTC m=+0.143703483 container start a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:48:18 np0005535469 podman[451359]: 2025-11-25 17:48:18.453842258 +0000 UTC m=+0.147271730 container attach a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:48:19 np0005535469 competent_margulis[451375]: {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:    "0": [
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:        {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "devices": [
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "/dev/loop3"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            ],
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_name": "ceph_lv0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_size": "21470642176",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "name": "ceph_lv0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "tags": {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cluster_name": "ceph",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.crush_device_class": "",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.encrypted": "0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osd_id": "0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.type": "block",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.vdo": "0"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            },
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "type": "block",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "vg_name": "ceph_vg0"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:        }
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:    ],
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:    "1": [
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:        {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "devices": [
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "/dev/loop4"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            ],
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_name": "ceph_lv1",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_size": "21470642176",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "name": "ceph_lv1",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "tags": {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cluster_name": "ceph",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.crush_device_class": "",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.encrypted": "0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osd_id": "1",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.type": "block",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.vdo": "0"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            },
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "type": "block",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "vg_name": "ceph_vg1"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:        }
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:    ],
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:    "2": [
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:        {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "devices": [
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "/dev/loop5"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            ],
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_name": "ceph_lv2",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_size": "21470642176",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "name": "ceph_lv2",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "tags": {
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.cluster_name": "ceph",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.crush_device_class": "",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.encrypted": "0",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osd_id": "2",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.type": "block",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:                "ceph.vdo": "0"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            },
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "type": "block",
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:            "vg_name": "ceph_vg2"
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:        }
Nov 25 12:48:19 np0005535469 competent_margulis[451375]:    ]
Nov 25 12:48:19 np0005535469 competent_margulis[451375]: }
Nov 25 12:48:19 np0005535469 systemd[1]: libpod-a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19.scope: Deactivated successfully.
Nov 25 12:48:19 np0005535469 podman[451359]: 2025-11-25 17:48:19.215782531 +0000 UTC m=+0.909212033 container died a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:48:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a-merged.mount: Deactivated successfully.
Nov 25 12:48:19 np0005535469 podman[451359]: 2025-11-25 17:48:19.272551136 +0000 UTC m=+0.965980608 container remove a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:48:19 np0005535469 systemd[1]: libpod-conmon-a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19.scope: Deactivated successfully.
Nov 25 12:48:19 np0005535469 podman[451537]: 2025-11-25 17:48:19.895705621 +0000 UTC m=+0.047640328 container create dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:48:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:19 np0005535469 systemd[1]: Started libpod-conmon-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope.
Nov 25 12:48:19 np0005535469 podman[451537]: 2025-11-25 17:48:19.87475778 +0000 UTC m=+0.026692467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:48:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:48:19 np0005535469 podman[451537]: 2025-11-25 17:48:19.990036579 +0000 UTC m=+0.141971276 container init dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:48:19 np0005535469 podman[451537]: 2025-11-25 17:48:19.998116998 +0000 UTC m=+0.150051675 container start dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:48:20 np0005535469 podman[451537]: 2025-11-25 17:48:20.001853901 +0000 UTC m=+0.153788618 container attach dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:48:20 np0005535469 systemd[1]: libpod-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope: Deactivated successfully.
Nov 25 12:48:20 np0005535469 dazzling_elbakyan[451553]: 167 167
Nov 25 12:48:20 np0005535469 conmon[451553]: conmon dcf5552e540a028a794f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope/container/memory.events
Nov 25 12:48:20 np0005535469 podman[451537]: 2025-11-25 17:48:20.008024808 +0000 UTC m=+0.159959485 container died dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:48:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-868c3ece7813b0669a93c40ba0f410a3d7bfb470534d92a4718ec568b92cb60d-merged.mount: Deactivated successfully.
Nov 25 12:48:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:20 np0005535469 podman[451537]: 2025-11-25 17:48:20.054105743 +0000 UTC m=+0.206040410 container remove dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:48:20 np0005535469 systemd[1]: libpod-conmon-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope: Deactivated successfully.
Nov 25 12:48:20 np0005535469 podman[451578]: 2025-11-25 17:48:20.261038616 +0000 UTC m=+0.068018662 container create a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:48:20 np0005535469 systemd[1]: Started libpod-conmon-a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15.scope.
Nov 25 12:48:20 np0005535469 podman[451578]: 2025-11-25 17:48:20.233042294 +0000 UTC m=+0.040022390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:48:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:48:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:48:20 np0005535469 podman[451578]: 2025-11-25 17:48:20.356876125 +0000 UTC m=+0.163856201 container init a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:48:20 np0005535469 podman[451578]: 2025-11-25 17:48:20.370556018 +0000 UTC m=+0.177536024 container start a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:48:20 np0005535469 podman[451578]: 2025-11-25 17:48:20.373885759 +0000 UTC m=+0.180865865 container attach a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:48:20 np0005535469 nova_compute[254092]: 2025-11-25 17:48:20.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:21 np0005535469 elated_volhard[451595]: {
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "osd_id": 1,
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "type": "bluestore"
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:    },
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "osd_id": 2,
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "type": "bluestore"
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:    },
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "osd_id": 0,
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:        "type": "bluestore"
Nov 25 12:48:21 np0005535469 elated_volhard[451595]:    }
Nov 25 12:48:21 np0005535469 elated_volhard[451595]: }
Nov 25 12:48:21 np0005535469 systemd[1]: libpod-a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15.scope: Deactivated successfully.
Nov 25 12:48:21 np0005535469 podman[451578]: 2025-11-25 17:48:21.330287555 +0000 UTC m=+1.137267601 container died a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:48:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f-merged.mount: Deactivated successfully.
Nov 25 12:48:21 np0005535469 podman[451578]: 2025-11-25 17:48:21.380190854 +0000 UTC m=+1.187170870 container remove a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:48:21 np0005535469 systemd[1]: libpod-conmon-a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15.scope: Deactivated successfully.
Nov 25 12:48:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:48:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:48:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:48:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:48:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8a834df9-42b7-4951-b0aa-4838ce7c4ba1 does not exist
Nov 25 12:48:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5f9716f7-2ac0-4114-bf7a-ae88ab43ed04 does not exist
Nov 25 12:48:21 np0005535469 nova_compute[254092]: 2025-11-25 17:48:21.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:48:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:48:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:24 np0005535469 podman[451693]: 2025-11-25 17:48:24.660702661 +0000 UTC m=+0.073906513 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:48:24 np0005535469 podman[451692]: 2025-11-25 17:48:24.697688048 +0000 UTC m=+0.109136822 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 12:48:24 np0005535469 podman[451694]: 2025-11-25 17:48:24.717469336 +0000 UTC m=+0.130675218 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:48:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:25 np0005535469 nova_compute[254092]: 2025-11-25 17:48:25.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:25 np0005535469 nova_compute[254092]: 2025-11-25 17:48:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:48:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:26 np0005535469 nova_compute[254092]: 2025-11-25 17:48:26.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:29 np0005535469 nova_compute[254092]: 2025-11-25 17:48:29.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:48:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:30 np0005535469 nova_compute[254092]: 2025-11-25 17:48:30.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:31 np0005535469 nova_compute[254092]: 2025-11-25 17:48:31.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:35 np0005535469 nova_compute[254092]: 2025-11-25 17:48:35.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:36 np0005535469 nova_compute[254092]: 2025-11-25 17:48:36.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:48:40
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', '.mgr', 'images']
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:48:40 np0005535469 nova_compute[254092]: 2025-11-25 17:48:40.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:48:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:48:41 np0005535469 nova_compute[254092]: 2025-11-25 17:48:41.734 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:45 np0005535469 nova_compute[254092]: 2025-11-25 17:48:45.491 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:46 np0005535469 nova_compute[254092]: 2025-11-25 17:48:46.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:50 np0005535469 nova_compute[254092]: 2025-11-25 17:48:50.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:51 np0005535469 nova_compute[254092]: 2025-11-25 17:48:51.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:48:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:48:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:48:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:48:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860690747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:48:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:48:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860690747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:48:55 np0005535469 nova_compute[254092]: 2025-11-25 17:48:55.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:55 np0005535469 podman[451752]: 2025-11-25 17:48:55.636882275 +0000 UTC m=+0.049546399 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:48:55 np0005535469 podman[451751]: 2025-11-25 17:48:55.643430784 +0000 UTC m=+0.057592259 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 12:48:55 np0005535469 podman[451753]: 2025-11-25 17:48:55.682552079 +0000 UTC m=+0.091592044 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:48:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:56 np0005535469 nova_compute[254092]: 2025-11-25 17:48:56.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:48:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:48:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:00 np0005535469 nova_compute[254092]: 2025-11-25 17:49:00.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:01 np0005535469 nova_compute[254092]: 2025-11-25 17:49:01.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:05 np0005535469 nova_compute[254092]: 2025-11-25 17:49:05.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:06 np0005535469 nova_compute[254092]: 2025-11-25 17:49:06.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:07 np0005535469 nova_compute[254092]: 2025-11-25 17:49:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:08 np0005535469 nova_compute[254092]: 2025-11-25 17:49:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:09 np0005535469 nova_compute[254092]: 2025-11-25 17:49:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:49:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:49:10 np0005535469 nova_compute[254092]: 2025-11-25 17:49:10.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:11 np0005535469 nova_compute[254092]: 2025-11-25 17:49:11.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:12 np0005535469 nova_compute[254092]: 2025-11-25 17:49:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:12 np0005535469 nova_compute[254092]: 2025-11-25 17:49:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:49:12 np0005535469 nova_compute[254092]: 2025-11-25 17:49:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:49:12 np0005535469 nova_compute[254092]: 2025-11-25 17:49:12.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:49:13 np0005535469 nova_compute[254092]: 2025-11-25 17:49:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:13 np0005535469 nova_compute[254092]: 2025-11-25 17:49:13.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:49:13 np0005535469 nova_compute[254092]: 2025-11-25 17:49:13.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:49:13 np0005535469 nova_compute[254092]: 2025-11-25 17:49:13.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:49:13 np0005535469 nova_compute[254092]: 2025-11-25 17:49:13.538 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:49:13 np0005535469 nova_compute[254092]: 2025-11-25 17:49:13.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:49:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:49:13.693 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:49:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:49:13.694 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:49:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:49:13.695 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:49:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:49:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120976132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.065 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.337 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.339 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.340 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.340 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.425 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.426 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.457 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:49:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:49:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3053958096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.937 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.945 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.960 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.963 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:49:14 np0005535469 nova_compute[254092]: 2025-11-25 17:49:14.964 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:49:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:15 np0005535469 nova_compute[254092]: 2025-11-25 17:49:15.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:16 np0005535469 nova_compute[254092]: 2025-11-25 17:49:16.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:16 np0005535469 nova_compute[254092]: 2025-11-25 17:49:16.964 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:16 np0005535469 nova_compute[254092]: 2025-11-25 17:49:16.965 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:16 np0005535469 nova_compute[254092]: 2025-11-25 17:49:16.965 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:49:17 np0005535469 nova_compute[254092]: 2025-11-25 17:49:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:20 np0005535469 nova_compute[254092]: 2025-11-25 17:49:20.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:21 np0005535469 nova_compute[254092]: 2025-11-25 17:49:21.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:49:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 32bf1ed7-d3f9-4d27-a410-812de383399d does not exist
Nov 25 12:49:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 178a01c1-b04d-4ded-ad03-eca2fedbfd6e does not exist
Nov 25 12:49:22 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e1aafb24-ebad-447b-9d0a-c8befa3616a1 does not exist
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:49:22 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.3814207 +0000 UTC m=+0.075748323 container create 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:49:23 np0005535469 systemd[1]: Started libpod-conmon-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope.
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.351708462 +0000 UTC m=+0.046036135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:49:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:49:23 np0005535469 nova_compute[254092]: 2025-11-25 17:49:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:23 np0005535469 nova_compute[254092]: 2025-11-25 17:49:23.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.502890707 +0000 UTC m=+0.197218330 container init 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.516758684 +0000 UTC m=+0.211086267 container start 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.521491473 +0000 UTC m=+0.215819146 container attach 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:49:23 np0005535469 brave_joliot[452146]: 167 167
Nov 25 12:49:23 np0005535469 systemd[1]: libpod-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope: Deactivated successfully.
Nov 25 12:49:23 np0005535469 conmon[452146]: conmon 46ae5d335ce15be8c74f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope/container/memory.events
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.52830896 +0000 UTC m=+0.222636583 container died 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:49:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0ee1f939f567cd04c1fc160f10be22753be2f0fb1e13a39031cff1584e650dd1-merged.mount: Deactivated successfully.
Nov 25 12:49:23 np0005535469 podman[452130]: 2025-11-25 17:49:23.589064963 +0000 UTC m=+0.283392586 container remove 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:49:23 np0005535469 systemd[1]: libpod-conmon-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope: Deactivated successfully.
Nov 25 12:49:23 np0005535469 podman[452170]: 2025-11-25 17:49:23.801854836 +0000 UTC m=+0.070146161 container create 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:49:23 np0005535469 systemd[1]: Started libpod-conmon-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope.
Nov 25 12:49:23 np0005535469 podman[452170]: 2025-11-25 17:49:23.781265656 +0000 UTC m=+0.049557011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:49:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:49:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:23 np0005535469 podman[452170]: 2025-11-25 17:49:23.918298366 +0000 UTC m=+0.186589731 container init 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:49:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:23 np0005535469 podman[452170]: 2025-11-25 17:49:23.931757813 +0000 UTC m=+0.200049178 container start 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:49:23 np0005535469 podman[452170]: 2025-11-25 17:49:23.936601454 +0000 UTC m=+0.204892809 container attach 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:49:24 np0005535469 kind_einstein[452187]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:49:24 np0005535469 kind_einstein[452187]: --> relative data size: 1.0
Nov 25 12:49:24 np0005535469 kind_einstein[452187]: --> All data devices are unavailable
Nov 25 12:49:25 np0005535469 systemd[1]: libpod-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope: Deactivated successfully.
Nov 25 12:49:25 np0005535469 systemd[1]: libpod-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope: Consumed 1.022s CPU time.
Nov 25 12:49:25 np0005535469 podman[452170]: 2025-11-25 17:49:25.004521897 +0000 UTC m=+1.272813242 container died 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 12:49:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a-merged.mount: Deactivated successfully.
Nov 25 12:49:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:25 np0005535469 podman[452170]: 2025-11-25 17:49:25.081924494 +0000 UTC m=+1.350215859 container remove 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:49:25 np0005535469 systemd[1]: libpod-conmon-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope: Deactivated successfully.
Nov 25 12:49:25 np0005535469 nova_compute[254092]: 2025-11-25 17:49:25.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:25 np0005535469 podman[452367]: 2025-11-25 17:49:25.960143923 +0000 UTC m=+0.051051162 container create eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:49:26 np0005535469 systemd[1]: Started libpod-conmon-eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122.scope.
Nov 25 12:49:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:49:26 np0005535469 podman[452367]: 2025-11-25 17:49:25.93836809 +0000 UTC m=+0.029275379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:49:26 np0005535469 podman[452367]: 2025-11-25 17:49:26.049870065 +0000 UTC m=+0.140777394 container init eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:49:26 np0005535469 podman[452367]: 2025-11-25 17:49:26.061606645 +0000 UTC m=+0.152513924 container start eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:49:26 np0005535469 podman[452367]: 2025-11-25 17:49:26.06587505 +0000 UTC m=+0.156782319 container attach eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:49:26 np0005535469 lucid_ishizaka[452387]: 167 167
Nov 25 12:49:26 np0005535469 systemd[1]: libpod-eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122.scope: Deactivated successfully.
Nov 25 12:49:26 np0005535469 podman[452367]: 2025-11-25 17:49:26.071894965 +0000 UTC m=+0.162802234 container died eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:49:26 np0005535469 podman[452385]: 2025-11-25 17:49:26.085084983 +0000 UTC m=+0.077286244 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:49:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8d42334ea497758cec7857125ab6ec112e93960aaad8a55955d0a6cdd7ed01db-merged.mount: Deactivated successfully.
Nov 25 12:49:26 np0005535469 podman[452367]: 2025-11-25 17:49:26.122479712 +0000 UTC m=+0.213386981 container remove eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:49:26 np0005535469 podman[452381]: 2025-11-25 17:49:26.12721561 +0000 UTC m=+0.113348436 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 12:49:26 np0005535469 systemd[1]: libpod-conmon-eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122.scope: Deactivated successfully.
Nov 25 12:49:26 np0005535469 podman[452386]: 2025-11-25 17:49:26.141494749 +0000 UTC m=+0.131718767 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 12:49:26 np0005535469 podman[452472]: 2025-11-25 17:49:26.343110378 +0000 UTC m=+0.052276154 container create 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:49:26 np0005535469 systemd[1]: Started libpod-conmon-269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1.scope.
Nov 25 12:49:26 np0005535469 podman[452472]: 2025-11-25 17:49:26.321333536 +0000 UTC m=+0.030499362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:49:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:49:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:26 np0005535469 podman[452472]: 2025-11-25 17:49:26.450013889 +0000 UTC m=+0.159179695 container init 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:49:26 np0005535469 podman[452472]: 2025-11-25 17:49:26.460385551 +0000 UTC m=+0.169551337 container start 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 12:49:26 np0005535469 podman[452472]: 2025-11-25 17:49:26.463619418 +0000 UTC m=+0.172785244 container attach 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 12:49:26 np0005535469 nova_compute[254092]: 2025-11-25 17:49:26.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:26 np0005535469 nova_compute[254092]: 2025-11-25 17:49:26.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:27 np0005535469 agitated_borg[452489]: {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:    "0": [
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:        {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "devices": [
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "/dev/loop3"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            ],
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_name": "ceph_lv0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_size": "21470642176",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "name": "ceph_lv0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "tags": {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cluster_name": "ceph",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.crush_device_class": "",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.encrypted": "0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osd_id": "0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.type": "block",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.vdo": "0"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            },
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "type": "block",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "vg_name": "ceph_vg0"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:        }
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:    ],
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:    "1": [
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:        {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "devices": [
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "/dev/loop4"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            ],
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_name": "ceph_lv1",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_size": "21470642176",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "name": "ceph_lv1",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "tags": {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cluster_name": "ceph",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.crush_device_class": "",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.encrypted": "0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osd_id": "1",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.type": "block",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.vdo": "0"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            },
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "type": "block",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "vg_name": "ceph_vg1"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:        }
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:    ],
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:    "2": [
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:        {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "devices": [
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "/dev/loop5"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            ],
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_name": "ceph_lv2",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_size": "21470642176",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "name": "ceph_lv2",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "tags": {
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.cluster_name": "ceph",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.crush_device_class": "",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.encrypted": "0",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osd_id": "2",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.type": "block",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:                "ceph.vdo": "0"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            },
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "type": "block",
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:            "vg_name": "ceph_vg2"
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:        }
Nov 25 12:49:27 np0005535469 agitated_borg[452489]:    ]
Nov 25 12:49:27 np0005535469 agitated_borg[452489]: }
Nov 25 12:49:27 np0005535469 systemd[1]: libpod-269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1.scope: Deactivated successfully.
Nov 25 12:49:27 np0005535469 podman[452472]: 2025-11-25 17:49:27.371889846 +0000 UTC m=+1.081055632 container died 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:49:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93-merged.mount: Deactivated successfully.
Nov 25 12:49:27 np0005535469 podman[452472]: 2025-11-25 17:49:27.435111596 +0000 UTC m=+1.144277372 container remove 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:49:27 np0005535469 systemd[1]: libpod-conmon-269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1.scope: Deactivated successfully.
Nov 25 12:49:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.150029129 +0000 UTC m=+0.049683323 container create 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 12:49:28 np0005535469 systemd[1]: Started libpod-conmon-1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a.scope.
Nov 25 12:49:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.134224969 +0000 UTC m=+0.033879193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.243199696 +0000 UTC m=+0.142853920 container init 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.251863142 +0000 UTC m=+0.151517336 container start 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.255789548 +0000 UTC m=+0.155443782 container attach 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:49:28 np0005535469 jovial_aryabhata[452666]: 167 167
Nov 25 12:49:28 np0005535469 systemd[1]: libpod-1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a.scope: Deactivated successfully.
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.260727343 +0000 UTC m=+0.160381537 container died 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:49:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-194c8f934ce7b239597908d30e79017011c87fa01f4c3d77ad9be632ec504f0a-merged.mount: Deactivated successfully.
Nov 25 12:49:28 np0005535469 podman[452650]: 2025-11-25 17:49:28.298194632 +0000 UTC m=+0.197848866 container remove 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:49:28 np0005535469 systemd[1]: libpod-conmon-1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a.scope: Deactivated successfully.
Nov 25 12:49:28 np0005535469 podman[452689]: 2025-11-25 17:49:28.484231717 +0000 UTC m=+0.057716692 container create 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:49:28 np0005535469 systemd[1]: Started libpod-conmon-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope.
Nov 25 12:49:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:49:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:49:28 np0005535469 podman[452689]: 2025-11-25 17:49:28.463095832 +0000 UTC m=+0.036580857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:49:28 np0005535469 podman[452689]: 2025-11-25 17:49:28.563265809 +0000 UTC m=+0.136750794 container init 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:49:28 np0005535469 podman[452689]: 2025-11-25 17:49:28.571412021 +0000 UTC m=+0.144896996 container start 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:49:28 np0005535469 podman[452689]: 2025-11-25 17:49:28.574759892 +0000 UTC m=+0.148244877 container attach 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:49:29 np0005535469 confident_goodall[452706]: {
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "osd_id": 1,
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "type": "bluestore"
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:    },
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "osd_id": 2,
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "type": "bluestore"
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:    },
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "osd_id": 0,
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:        "type": "bluestore"
Nov 25 12:49:29 np0005535469 confident_goodall[452706]:    }
Nov 25 12:49:29 np0005535469 confident_goodall[452706]: }
Nov 25 12:49:29 np0005535469 systemd[1]: libpod-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope: Deactivated successfully.
Nov 25 12:49:29 np0005535469 systemd[1]: libpod-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope: Consumed 1.283s CPU time.
Nov 25 12:49:29 np0005535469 podman[452739]: 2025-11-25 17:49:29.88824954 +0000 UTC m=+0.026746000 container died 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:49:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912-merged.mount: Deactivated successfully.
Nov 25 12:49:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:29 np0005535469 podman[452739]: 2025-11-25 17:49:29.953403743 +0000 UTC m=+0.091900193 container remove 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:49:29 np0005535469 systemd[1]: libpod-conmon-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope: Deactivated successfully.
Nov 25 12:49:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:49:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:49:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:49:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:49:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a63e0cbd-a2a5-41f4-968f-9938eed75f3e does not exist
Nov 25 12:49:30 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 38e5c5df-e6b5-44a9-8ed2-2468333826df does not exist
Nov 25 12:49:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:30 np0005535469 nova_compute[254092]: 2025-11-25 17:49:30.549 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:31 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:49:31 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:49:31 np0005535469 nova_compute[254092]: 2025-11-25 17:49:31.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:33 np0005535469 nova_compute[254092]: 2025-11-25 17:49:33.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:33 np0005535469 nova_compute[254092]: 2025-11-25 17:49:33.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:49:33 np0005535469 nova_compute[254092]: 2025-11-25 17:49:33.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:49:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:35 np0005535469 nova_compute[254092]: 2025-11-25 17:49:35.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:36 np0005535469 nova_compute[254092]: 2025-11-25 17:49:36.862 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:49:40
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'images', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:49:40 np0005535469 nova_compute[254092]: 2025-11-25 17:49:40.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:49:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:49:41 np0005535469 nova_compute[254092]: 2025-11-25 17:49:41.864 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:45 np0005535469 nova_compute[254092]: 2025-11-25 17:49:45.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:46 np0005535469 nova_compute[254092]: 2025-11-25 17:49:46.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:47 np0005535469 nova_compute[254092]: 2025-11-25 17:49:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:49:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:50 np0005535469 nova_compute[254092]: 2025-11-25 17:49:50.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:51 np0005535469 nova_compute[254092]: 2025-11-25 17:49:51.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:49:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:49:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:49:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:49:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328540502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:49:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:49:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328540502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:49:55 np0005535469 nova_compute[254092]: 2025-11-25 17:49:55.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:56 np0005535469 podman[452804]: 2025-11-25 17:49:56.633786959 +0000 UTC m=+0.049460008 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 12:49:56 np0005535469 podman[452803]: 2025-11-25 17:49:56.643938226 +0000 UTC m=+0.059616925 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 12:49:56 np0005535469 podman[452805]: 2025-11-25 17:49:56.721726083 +0000 UTC m=+0.136276700 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 12:49:56 np0005535469 nova_compute[254092]: 2025-11-25 17:49:56.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:49:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:49:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:00 np0005535469 nova_compute[254092]: 2025-11-25 17:50:00.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:01 np0005535469 nova_compute[254092]: 2025-11-25 17:50:01.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:05 np0005535469 nova_compute[254092]: 2025-11-25 17:50:05.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:06 np0005535469 nova_compute[254092]: 2025-11-25 17:50:06.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:09 np0005535469 nova_compute[254092]: 2025-11-25 17:50:09.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:50:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:50:10 np0005535469 nova_compute[254092]: 2025-11-25 17:50:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:10 np0005535469 nova_compute[254092]: 2025-11-25 17:50:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:10 np0005535469 nova_compute[254092]: 2025-11-25 17:50:10.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:11 np0005535469 nova_compute[254092]: 2025-11-25 17:50:11.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:13 np0005535469 nova_compute[254092]: 2025-11-25 17:50:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:13 np0005535469 nova_compute[254092]: 2025-11-25 17:50:13.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:50:13 np0005535469 nova_compute[254092]: 2025-11-25 17:50:13.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:50:13 np0005535469 nova_compute[254092]: 2025-11-25 17:50:13.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:50:13.694 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:50:13.695 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:50:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:50:13.695 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:50:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:50:15 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527462887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:50:15 np0005535469 nova_compute[254092]: 2025-11-25 17:50:15.943 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:50:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.114 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.115 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3624MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.116 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.116 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.200 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.215 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:50:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:50:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141294128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.687 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.693 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.707 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.709 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.709 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:50:16 np0005535469 nova_compute[254092]: 2025-11-25 17:50:16.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:17 np0005535469 nova_compute[254092]: 2025-11-25 17:50:17.709 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:17 np0005535469 nova_compute[254092]: 2025-11-25 17:50:17.710 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:17 np0005535469 nova_compute[254092]: 2025-11-25 17:50:17.710 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:50:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:18 np0005535469 nova_compute[254092]: 2025-11-25 17:50:18.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:20 np0005535469 nova_compute[254092]: 2025-11-25 17:50:20.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:21 np0005535469 nova_compute[254092]: 2025-11-25 17:50:21.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:25 np0005535469 nova_compute[254092]: 2025-11-25 17:50:25.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:26 np0005535469 nova_compute[254092]: 2025-11-25 17:50:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:26 np0005535469 nova_compute[254092]: 2025-11-25 17:50:26.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:27 np0005535469 podman[452919]: 2025-11-25 17:50:27.630665454 +0000 UTC m=+0.049176190 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 12:50:27 np0005535469 podman[452918]: 2025-11-25 17:50:27.661426541 +0000 UTC m=+0.081353716 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 12:50:27 np0005535469 podman[452920]: 2025-11-25 17:50:27.684704135 +0000 UTC m=+0.098149013 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 12:50:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3830: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.097071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030097279, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1542, "num_deletes": 250, "total_data_size": 2471697, "memory_usage": 2512664, "flush_reason": "Manual Compaction"}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030111881, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1412433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77465, "largest_seqno": 79006, "table_properties": {"data_size": 1407242, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13648, "raw_average_key_size": 20, "raw_value_size": 1395714, "raw_average_value_size": 2114, "num_data_blocks": 113, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092862, "oldest_key_time": 1764092862, "file_creation_time": 1764093030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 14671 microseconds, and 3785 cpu microseconds.
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.111921) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1412433 bytes OK
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.111938) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.115563) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.115577) EVENT_LOG_v1 {"time_micros": 1764093030115572, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.115595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2464997, prev total WAL file size 2464997, number of live WAL files 2.
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.116179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323631' seq:72057594037927935, type:22 .. '6D6772737461740033353132' seq:0, type:0; will stop at (end)
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1379KB)], [182(10MB)]
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030116203, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12554031, "oldest_snapshot_seqno": -1}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9546 keys, 10279279 bytes, temperature: kUnknown
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030160894, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10279279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10221475, "index_size": 32859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 250687, "raw_average_key_size": 26, "raw_value_size": 10056931, "raw_average_value_size": 1053, "num_data_blocks": 1266, "num_entries": 9546, "num_filter_entries": 9546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.161108) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10279279 bytes
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.162599) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 280.5 rd, 229.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(16.2) write-amplify(7.3) OK, records in: 9980, records dropped: 434 output_compression: NoCompression
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.162614) EVENT_LOG_v1 {"time_micros": 1764093030162607, "job": 114, "event": "compaction_finished", "compaction_time_micros": 44758, "compaction_time_cpu_micros": 24842, "output_level": 6, "num_output_files": 1, "total_output_size": 10279279, "num_input_records": 9980, "num_output_records": 9546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030162948, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030165103, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.116138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:50:30 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:50:30 np0005535469 nova_compute[254092]: 2025-11-25 17:50:30.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:30 np0005535469 podman[453151]: 2025-11-25 17:50:30.942121274 +0000 UTC m=+0.071313582 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:50:31 np0005535469 podman[453151]: 2025-11-25 17:50:31.032136054 +0000 UTC m=+0.161328362 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:50:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:50:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:50:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:31 np0005535469 nova_compute[254092]: 2025-11-25 17:50:31.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 13cba712-1017-492e-aafd-43db4268005a does not exist
Nov 25 12:50:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev afa8a022-da52-4465-9316-5c32a3aec5ae does not exist
Nov 25 12:50:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 98af20ff-adc6-42ac-81e2-590c5f351d2c does not exist
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:32 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.06032701 +0000 UTC m=+0.039090535 container create 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:50:33 np0005535469 systemd[1]: Started libpod-conmon-1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c.scope.
Nov 25 12:50:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.132781843 +0000 UTC m=+0.111545448 container init 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.138780886 +0000 UTC m=+0.117544401 container start 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.041368244 +0000 UTC m=+0.020131789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.143064843 +0000 UTC m=+0.121828398 container attach 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:50:33 np0005535469 happy_fermat[453597]: 167 167
Nov 25 12:50:33 np0005535469 systemd[1]: libpod-1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c.scope: Deactivated successfully.
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.145212441 +0000 UTC m=+0.123975956 container died 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:50:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c514a65e41eb2381a42ec38f7733b77f3996e849b6087bdf3bf91fdd8575989d-merged.mount: Deactivated successfully.
Nov 25 12:50:33 np0005535469 podman[453581]: 2025-11-25 17:50:33.181204341 +0000 UTC m=+0.159967856 container remove 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 12:50:33 np0005535469 systemd[1]: libpod-conmon-1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c.scope: Deactivated successfully.
Nov 25 12:50:33 np0005535469 podman[453622]: 2025-11-25 17:50:33.343074888 +0000 UTC m=+0.050351482 container create 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:50:33 np0005535469 systemd[1]: Started libpod-conmon-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope.
Nov 25 12:50:33 np0005535469 podman[453622]: 2025-11-25 17:50:33.319872796 +0000 UTC m=+0.027149410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:50:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:50:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:33 np0005535469 podman[453622]: 2025-11-25 17:50:33.473512459 +0000 UTC m=+0.180789053 container init 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:50:33 np0005535469 podman[453622]: 2025-11-25 17:50:33.486590145 +0000 UTC m=+0.193866749 container start 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:50:33 np0005535469 podman[453622]: 2025-11-25 17:50:33.492772484 +0000 UTC m=+0.200049118 container attach 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:50:33 np0005535469 nova_compute[254092]: 2025-11-25 17:50:33.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:50:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:34 np0005535469 laughing_hodgkin[453638]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:50:34 np0005535469 laughing_hodgkin[453638]: --> relative data size: 1.0
Nov 25 12:50:34 np0005535469 laughing_hodgkin[453638]: --> All data devices are unavailable
Nov 25 12:50:34 np0005535469 systemd[1]: libpod-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope: Deactivated successfully.
Nov 25 12:50:34 np0005535469 systemd[1]: libpod-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope: Consumed 1.014s CPU time.
Nov 25 12:50:34 np0005535469 podman[453622]: 2025-11-25 17:50:34.563168064 +0000 UTC m=+1.270444658 container died 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:50:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42-merged.mount: Deactivated successfully.
Nov 25 12:50:34 np0005535469 podman[453622]: 2025-11-25 17:50:34.622460088 +0000 UTC m=+1.329736682 container remove 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:50:34 np0005535469 systemd[1]: libpod-conmon-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope: Deactivated successfully.
Nov 25 12:50:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.404405305 +0000 UTC m=+0.082039934 container create 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.348131493 +0000 UTC m=+0.025766212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:50:35 np0005535469 systemd[1]: Started libpod-conmon-6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6.scope.
Nov 25 12:50:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.522476239 +0000 UTC m=+0.200110878 container init 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.535179005 +0000 UTC m=+0.212813634 container start 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:50:35 np0005535469 exciting_perlman[453833]: 167 167
Nov 25 12:50:35 np0005535469 systemd[1]: libpod-6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6.scope: Deactivated successfully.
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.572398388 +0000 UTC m=+0.250033047 container attach 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.572904042 +0000 UTC m=+0.250538671 container died 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:50:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-55db5ace3432b8d7f6f7b6160f8d2f4356acafdec8d3dbcb7e1cd6190da25971-merged.mount: Deactivated successfully.
Nov 25 12:50:35 np0005535469 podman[453817]: 2025-11-25 17:50:35.62537274 +0000 UTC m=+0.303007409 container remove 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:50:35 np0005535469 systemd[1]: libpod-conmon-6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6.scope: Deactivated successfully.
Nov 25 12:50:35 np0005535469 nova_compute[254092]: 2025-11-25 17:50:35.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:35 np0005535469 podman[453861]: 2025-11-25 17:50:35.843098817 +0000 UTC m=+0.050322860 container create b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:50:35 np0005535469 systemd[1]: Started libpod-conmon-b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc.scope.
Nov 25 12:50:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:50:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:35 np0005535469 podman[453861]: 2025-11-25 17:50:35.820179423 +0000 UTC m=+0.027403516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:50:35 np0005535469 podman[453861]: 2025-11-25 17:50:35.921939114 +0000 UTC m=+0.129163177 container init b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:50:35 np0005535469 podman[453861]: 2025-11-25 17:50:35.941406044 +0000 UTC m=+0.148630087 container start b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:50:35 np0005535469 podman[453861]: 2025-11-25 17:50:35.944497328 +0000 UTC m=+0.151721381 container attach b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:50:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]: {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:    "0": [
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:        {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "devices": [
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "/dev/loop3"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            ],
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_name": "ceph_lv0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_size": "21470642176",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "name": "ceph_lv0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "tags": {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cluster_name": "ceph",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.crush_device_class": "",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.encrypted": "0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osd_id": "0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.type": "block",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.vdo": "0"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            },
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "type": "block",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "vg_name": "ceph_vg0"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:        }
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:    ],
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:    "1": [
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:        {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "devices": [
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "/dev/loop4"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            ],
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_name": "ceph_lv1",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_size": "21470642176",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "name": "ceph_lv1",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "tags": {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cluster_name": "ceph",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.crush_device_class": "",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.encrypted": "0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osd_id": "1",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.type": "block",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.vdo": "0"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            },
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "type": "block",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "vg_name": "ceph_vg1"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:        }
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:    ],
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:    "2": [
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:        {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "devices": [
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "/dev/loop5"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            ],
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_name": "ceph_lv2",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_size": "21470642176",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "name": "ceph_lv2",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "tags": {
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.cluster_name": "ceph",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.crush_device_class": "",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.encrypted": "0",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osd_id": "2",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.type": "block",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:                "ceph.vdo": "0"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            },
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "type": "block",
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:            "vg_name": "ceph_vg2"
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:        }
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]:    ]
Nov 25 12:50:36 np0005535469 beautiful_moore[453877]: }
Nov 25 12:50:36 np0005535469 systemd[1]: libpod-b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc.scope: Deactivated successfully.
Nov 25 12:50:36 np0005535469 podman[453861]: 2025-11-25 17:50:36.720231086 +0000 UTC m=+0.927455129 container died b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:50:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f-merged.mount: Deactivated successfully.
Nov 25 12:50:36 np0005535469 podman[453861]: 2025-11-25 17:50:36.780585449 +0000 UTC m=+0.987809492 container remove b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:50:36 np0005535469 systemd[1]: libpod-conmon-b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc.scope: Deactivated successfully.
Nov 25 12:50:36 np0005535469 nova_compute[254092]: 2025-11-25 17:50:36.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.491653287 +0000 UTC m=+0.044762569 container create aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:50:37 np0005535469 systemd[1]: Started libpod-conmon-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope.
Nov 25 12:50:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.471726955 +0000 UTC m=+0.024836257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.579036276 +0000 UTC m=+0.132145578 container init aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.588313508 +0000 UTC m=+0.141422790 container start aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.591424884 +0000 UTC m=+0.144534166 container attach aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 12:50:37 np0005535469 nostalgic_hypatia[454058]: 167 167
Nov 25 12:50:37 np0005535469 systemd[1]: libpod-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope: Deactivated successfully.
Nov 25 12:50:37 np0005535469 conmon[454058]: conmon aeccb46769799235bb21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope/container/memory.events
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.596932264 +0000 UTC m=+0.150041576 container died aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:50:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d09580fde88e75b202a4de61dbc3c2c516ad1a922b9cabb43aeec449c0132e9f-merged.mount: Deactivated successfully.
Nov 25 12:50:37 np0005535469 podman[454042]: 2025-11-25 17:50:37.640831908 +0000 UTC m=+0.193941210 container remove aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:50:37 np0005535469 systemd[1]: libpod-conmon-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope: Deactivated successfully.
Nov 25 12:50:37 np0005535469 podman[454083]: 2025-11-25 17:50:37.852435169 +0000 UTC m=+0.060289853 container create 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:50:37 np0005535469 systemd[1]: Started libpod-conmon-260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba.scope.
Nov 25 12:50:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:50:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:50:37 np0005535469 podman[454083]: 2025-11-25 17:50:37.836053673 +0000 UTC m=+0.043908377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:50:37 np0005535469 podman[454083]: 2025-11-25 17:50:37.945355989 +0000 UTC m=+0.153210713 container init 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:50:37 np0005535469 podman[454083]: 2025-11-25 17:50:37.954885168 +0000 UTC m=+0.162739852 container start 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:50:37 np0005535469 podman[454083]: 2025-11-25 17:50:37.958445775 +0000 UTC m=+0.166300459 container attach 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:50:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3835: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]: {
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "osd_id": 1,
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "type": "bluestore"
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:    },
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "osd_id": 2,
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "type": "bluestore"
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:    },
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "osd_id": 0,
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:        "type": "bluestore"
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]:    }
Nov 25 12:50:38 np0005535469 hopeful_hermann[454099]: }
Nov 25 12:50:38 np0005535469 systemd[1]: libpod-260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba.scope: Deactivated successfully.
Nov 25 12:50:38 np0005535469 podman[454083]: 2025-11-25 17:50:38.898959009 +0000 UTC m=+1.106813693 container died 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:50:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6-merged.mount: Deactivated successfully.
Nov 25 12:50:38 np0005535469 podman[454083]: 2025-11-25 17:50:38.955051186 +0000 UTC m=+1.162905870 container remove 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:50:38 np0005535469 systemd[1]: libpod-conmon-260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba.scope: Deactivated successfully.
Nov 25 12:50:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:50:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:50:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1891583c-dd36-4336-b19f-ca50420b6da6 does not exist
Nov 25 12:50:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5922e36e-c6e4-4114-98a7-9c119a26e481 does not exist
Nov 25 12:50:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:50:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3836: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:50:40
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:50:40 np0005535469 nova_compute[254092]: 2025-11-25 17:50:40.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:50:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:50:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:41 np0005535469 nova_compute[254092]: 2025-11-25 17:50:41.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3838: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:45 np0005535469 nova_compute[254092]: 2025-11-25 17:50:45.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:47 np0005535469 nova_compute[254092]: 2025-11-25 17:50:47.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3840: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:50 np0005535469 nova_compute[254092]: 2025-11-25 17:50:50.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:52 np0005535469 nova_compute[254092]: 2025-11-25 17:50:52.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:50:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:50:53 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:50:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:50:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/229368862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:50:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:50:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/229368862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:50:55 np0005535469 nova_compute[254092]: 2025-11-25 17:50:55.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:55 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3844: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:57 np0005535469 nova_compute[254092]: 2025-11-25 17:50:57.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:50:57 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:50:58 np0005535469 podman[454196]: 2025-11-25 17:50:58.680032183 +0000 UTC m=+0.078577441 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 12:50:58 np0005535469 podman[454195]: 2025-11-25 17:50:58.695191145 +0000 UTC m=+0.093656751 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 12:50:58 np0005535469 podman[454197]: 2025-11-25 17:50:58.70714239 +0000 UTC m=+0.107326512 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 12:50:59 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:00 np0005535469 nova_compute[254092]: 2025-11-25 17:51:00.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:01 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:02 np0005535469 nova_compute[254092]: 2025-11-25 17:51:02.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:03 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3848: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:05 np0005535469 nova_compute[254092]: 2025-11-25 17:51:05.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:05 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:07 np0005535469 nova_compute[254092]: 2025-11-25 17:51:07.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:07 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:09 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:51:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:51:10 np0005535469 nova_compute[254092]: 2025-11-25 17:51:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:10 np0005535469 nova_compute[254092]: 2025-11-25 17:51:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:10 np0005535469 nova_compute[254092]: 2025-11-25 17:51:10.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:11 np0005535469 nova_compute[254092]: 2025-11-25 17:51:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:11 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:12 np0005535469 nova_compute[254092]: 2025-11-25 17:51:12.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:51:13.696 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:51:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:51:13.696 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:51:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:51:13.696 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:51:13 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:15 np0005535469 nova_compute[254092]: 2025-11-25 17:51:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:15 np0005535469 nova_compute[254092]: 2025-11-25 17:51:15.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:51:15 np0005535469 nova_compute[254092]: 2025-11-25 17:51:15.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:51:15 np0005535469 nova_compute[254092]: 2025-11-25 17:51:15.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:51:15 np0005535469 nova_compute[254092]: 2025-11-25 17:51:15.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:15 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:51:17 np0005535469 nova_compute[254092]: 2025-11-25 17:51:17.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:51:17 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:51:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350217163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.007 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.231 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.233 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.234 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.235 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.406 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.599 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.700 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.701 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.717 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.755 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:51:18 np0005535469 nova_compute[254092]: 2025-11-25 17:51:18.774 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:51:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:51:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680698078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:51:19 np0005535469 nova_compute[254092]: 2025-11-25 17:51:19.246 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:51:19 np0005535469 nova_compute[254092]: 2025-11-25 17:51:19.252 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:51:19 np0005535469 nova_compute[254092]: 2025-11-25 17:51:19.265 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:51:19 np0005535469 nova_compute[254092]: 2025-11-25 17:51:19.266 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:51:19 np0005535469 nova_compute[254092]: 2025-11-25 17:51:19.266 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:51:19 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:20 np0005535469 nova_compute[254092]: 2025-11-25 17:51:20.266 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:20 np0005535469 nova_compute[254092]: 2025-11-25 17:51:20.267 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:20 np0005535469 nova_compute[254092]: 2025-11-25 17:51:20.267 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:51:20 np0005535469 nova_compute[254092]: 2025-11-25 17:51:20.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:21 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:22 np0005535469 nova_compute[254092]: 2025-11-25 17:51:22.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:23 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:25 np0005535469 nova_compute[254092]: 2025-11-25 17:51:25.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:25 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:27 np0005535469 nova_compute[254092]: 2025-11-25 17:51:27.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:27 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:28 np0005535469 nova_compute[254092]: 2025-11-25 17:51:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:51:29 np0005535469 podman[454304]: 2025-11-25 17:51:29.679543669 +0000 UTC m=+0.084622634 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 12:51:29 np0005535469 podman[454305]: 2025-11-25 17:51:29.681962185 +0000 UTC m=+0.085348374 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 12:51:29 np0005535469 podman[454306]: 2025-11-25 17:51:29.696496941 +0000 UTC m=+0.104138556 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:51:29 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:30 np0005535469 nova_compute[254092]: 2025-11-25 17:51:30.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.057352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091057417, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 251, "total_data_size": 951689, "memory_usage": 965608, "flush_reason": "Manual Compaction"}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091067895, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 931728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79007, "largest_seqno": 79756, "table_properties": {"data_size": 927844, "index_size": 1663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8670, "raw_average_key_size": 19, "raw_value_size": 920112, "raw_average_value_size": 2063, "num_data_blocks": 74, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093031, "oldest_key_time": 1764093031, "file_creation_time": 1764093091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 10614 microseconds, and 6407 cpu microseconds.
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.067962) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 931728 bytes OK
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.068001) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.069398) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.069426) EVENT_LOG_v1 {"time_micros": 1764093091069417, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.069458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 947868, prev total WAL file size 947868, number of live WAL files 2.
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.070500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(909KB)], [185(10038KB)]
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091070562, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 11211007, "oldest_snapshot_seqno": -1}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9479 keys, 9467891 bytes, temperature: kUnknown
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091142424, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 9467891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9411331, "index_size": 31743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 249954, "raw_average_key_size": 26, "raw_value_size": 9248694, "raw_average_value_size": 975, "num_data_blocks": 1213, "num_entries": 9479, "num_filter_entries": 9479, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.143687) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 9467891 bytes
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.145372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.4 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.8 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(22.2) write-amplify(10.2) OK, records in: 9992, records dropped: 513 output_compression: NoCompression
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.145406) EVENT_LOG_v1 {"time_micros": 1764093091145390, "job": 116, "event": "compaction_finished", "compaction_time_micros": 72606, "compaction_time_cpu_micros": 54641, "output_level": 6, "num_output_files": 1, "total_output_size": 9467891, "num_input_records": 9992, "num_output_records": 9479, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091145997, "job": 116, "event": "table_file_deletion", "file_number": 187}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091149483, "job": 116, "event": "table_file_deletion", "file_number": 185}
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.070313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:51:31 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:51:31 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:32 np0005535469 nova_compute[254092]: 2025-11-25 17:51:32.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:51:33 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:35 np0005535469 nova_compute[254092]: 2025-11-25 17:51:35.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:51:35 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:51:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 17K writes, 79K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1312 writes, 6191 keys, 1312 commit groups, 1.0 writes per commit group, ingest: 8.55 MB, 0.01 MB/s
Interval WAL: 1312 writes, 1312 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     39.5      2.44              0.34        58    0.042       0      0       0.0       0.0
  L6      1/0    9.03 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.1    130.3    110.6      4.44              1.51        57    0.078    411K    30K       0.0       0.0
 Sum      1/0    9.03 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.1     84.1     85.4      6.88              1.84       115    0.060    411K    30K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    154.0    150.2      0.44              0.23        12    0.037     58K   2955       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    130.3    110.6      4.44              1.51        57    0.078    411K    30K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     40.3      2.39              0.34        57    0.042       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7200.1 total, 600.0 interval
Flush(GB): cumulative 0.094, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.57 GB write, 0.08 MB/s write, 0.56 GB read, 0.08 MB/s read, 6.9 seconds
Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 67.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000713 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4320,64.28 MB,21.1438%) FilterBlock(116,1.11 MB,0.363716%) IndexBlock(116,1.77 MB,0.581325%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 25 12:51:37 np0005535469 nova_compute[254092]: 2025-11-25 17:51:37.358 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:51:37 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:39 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 31867035-aa9c-43a0-80f0-39baf80704ea does not exist
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b9f131e5-af65-4c53-a051-89896c3ed3c5 does not exist
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 0639f5ff-b1d5-4f2f-8137-025aae8a19ca does not exist
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:51:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:51:40
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', '.mgr', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:51:40 np0005535469 nova_compute[254092]: 2025-11-25 17:51:40.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:51:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:51:40 np0005535469 podman[454640]: 2025-11-25 17:51:40.888531448 +0000 UTC m=+0.058816872 container create a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:51:40 np0005535469 systemd[1]: Started libpod-conmon-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope.
Nov 25 12:51:40 np0005535469 podman[454640]: 2025-11-25 17:51:40.863780934 +0000 UTC m=+0.034066438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:51:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:51:40 np0005535469 podman[454640]: 2025-11-25 17:51:40.990536874 +0000 UTC m=+0.160822338 container init a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:51:41 np0005535469 podman[454640]: 2025-11-25 17:51:41.000928648 +0000 UTC m=+0.171214082 container start a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:51:41 np0005535469 podman[454640]: 2025-11-25 17:51:41.004954657 +0000 UTC m=+0.175240171 container attach a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:51:41 np0005535469 distracted_mendeleev[454656]: 167 167
Nov 25 12:51:41 np0005535469 systemd[1]: libpod-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope: Deactivated successfully.
Nov 25 12:51:41 np0005535469 conmon[454656]: conmon a955d0a2e418e41b9a93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope/container/memory.events
Nov 25 12:51:41 np0005535469 podman[454640]: 2025-11-25 17:51:41.012332088 +0000 UTC m=+0.182617542 container died a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:51:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-da09fd61977de1f7b9bc95c3b6461950d7f04722415b4c48a9f0b1a2709532be-merged.mount: Deactivated successfully.
Nov 25 12:51:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:51:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:51:41 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:51:41 np0005535469 podman[454640]: 2025-11-25 17:51:41.068078126 +0000 UTC m=+0.238363570 container remove a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:51:41 np0005535469 systemd[1]: libpod-conmon-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope: Deactivated successfully.
Nov 25 12:51:41 np0005535469 podman[454680]: 2025-11-25 17:51:41.298600731 +0000 UTC m=+0.054569917 container create 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:51:41 np0005535469 systemd[1]: Started libpod-conmon-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope.
Nov 25 12:51:41 np0005535469 podman[454680]: 2025-11-25 17:51:41.276423867 +0000 UTC m=+0.032393073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:51:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:51:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:41 np0005535469 podman[454680]: 2025-11-25 17:51:41.416779568 +0000 UTC m=+0.172748784 container init 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:51:41 np0005535469 podman[454680]: 2025-11-25 17:51:41.431389376 +0000 UTC m=+0.187358552 container start 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:51:41 np0005535469 podman[454680]: 2025-11-25 17:51:41.434874091 +0000 UTC m=+0.190843287 container attach 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:51:41 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:42 np0005535469 nova_compute[254092]: 2025-11-25 17:51:42.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:42 np0005535469 mystifying_borg[454697]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:51:42 np0005535469 mystifying_borg[454697]: --> relative data size: 1.0
Nov 25 12:51:42 np0005535469 mystifying_borg[454697]: --> All data devices are unavailable
Nov 25 12:51:42 np0005535469 systemd[1]: libpod-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope: Deactivated successfully.
Nov 25 12:51:42 np0005535469 podman[454680]: 2025-11-25 17:51:42.573530869 +0000 UTC m=+1.329500065 container died 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:51:42 np0005535469 systemd[1]: libpod-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope: Consumed 1.092s CPU time.
Nov 25 12:51:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9-merged.mount: Deactivated successfully.
Nov 25 12:51:42 np0005535469 podman[454680]: 2025-11-25 17:51:42.662622955 +0000 UTC m=+1.418592121 container remove 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 12:51:42 np0005535469 systemd[1]: libpod-conmon-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope: Deactivated successfully.
Nov 25 12:51:43 np0005535469 podman[454880]: 2025-11-25 17:51:43.51247896 +0000 UTC m=+0.050112085 container create 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:51:43 np0005535469 systemd[1]: Started libpod-conmon-4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068.scope.
Nov 25 12:51:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:51:43 np0005535469 podman[454880]: 2025-11-25 17:51:43.492568829 +0000 UTC m=+0.030202004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:51:43 np0005535469 podman[454880]: 2025-11-25 17:51:43.609615195 +0000 UTC m=+0.147248360 container init 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 12:51:43 np0005535469 podman[454880]: 2025-11-25 17:51:43.6204361 +0000 UTC m=+0.158069225 container start 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:51:43 np0005535469 podman[454880]: 2025-11-25 17:51:43.623464302 +0000 UTC m=+0.161097467 container attach 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:51:43 np0005535469 objective_cray[454896]: 167 167
Nov 25 12:51:43 np0005535469 systemd[1]: libpod-4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068.scope: Deactivated successfully.
Nov 25 12:51:43 np0005535469 podman[454901]: 2025-11-25 17:51:43.677406671 +0000 UTC m=+0.033632107 container died 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:51:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a55993b95c074cde371746373e1453f4f1e0697bc7a83d86116a3b67bd43781b-merged.mount: Deactivated successfully.
Nov 25 12:51:43 np0005535469 podman[454901]: 2025-11-25 17:51:43.736431947 +0000 UTC m=+0.092657323 container remove 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:51:43 np0005535469 systemd[1]: libpod-conmon-4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068.scope: Deactivated successfully.
Nov 25 12:51:43 np0005535469 podman[454923]: 2025-11-25 17:51:43.95839459 +0000 UTC m=+0.053923859 container create f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:51:43 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:43 np0005535469 systemd[1]: Started libpod-conmon-f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87.scope.
Nov 25 12:51:44 np0005535469 podman[454923]: 2025-11-25 17:51:43.938663683 +0000 UTC m=+0.034192982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:51:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:51:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:44 np0005535469 podman[454923]: 2025-11-25 17:51:44.061062145 +0000 UTC m=+0.156591444 container init f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:51:44 np0005535469 podman[454923]: 2025-11-25 17:51:44.067785848 +0000 UTC m=+0.163315127 container start f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:51:44 np0005535469 podman[454923]: 2025-11-25 17:51:44.074198573 +0000 UTC m=+0.169727842 container attach f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]: {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:    "0": [
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:        {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "devices": [
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "/dev/loop3"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            ],
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_name": "ceph_lv0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_size": "21470642176",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "name": "ceph_lv0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "tags": {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cluster_name": "ceph",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.crush_device_class": "",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.encrypted": "0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osd_id": "0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.type": "block",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.vdo": "0"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            },
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "type": "block",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "vg_name": "ceph_vg0"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:        }
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:    ],
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:    "1": [
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:        {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "devices": [
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "/dev/loop4"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            ],
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_name": "ceph_lv1",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_size": "21470642176",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "name": "ceph_lv1",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "tags": {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cluster_name": "ceph",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.crush_device_class": "",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.encrypted": "0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osd_id": "1",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.type": "block",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.vdo": "0"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            },
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "type": "block",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "vg_name": "ceph_vg1"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:        }
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:    ],
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:    "2": [
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:        {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "devices": [
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "/dev/loop5"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            ],
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_name": "ceph_lv2",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_size": "21470642176",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "name": "ceph_lv2",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "tags": {
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.cluster_name": "ceph",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.crush_device_class": "",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.encrypted": "0",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osd_id": "2",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.type": "block",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:                "ceph.vdo": "0"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            },
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "type": "block",
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:            "vg_name": "ceph_vg2"
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:        }
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]:    ]
Nov 25 12:51:44 np0005535469 unruffled_blackwell[454939]: }
Nov 25 12:51:44 np0005535469 systemd[1]: libpod-f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87.scope: Deactivated successfully.
Nov 25 12:51:44 np0005535469 podman[454923]: 2025-11-25 17:51:44.902198164 +0000 UTC m=+0.997727523 container died f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:51:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544-merged.mount: Deactivated successfully.
Nov 25 12:51:44 np0005535469 podman[454923]: 2025-11-25 17:51:44.963585315 +0000 UTC m=+1.059114584 container remove f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:51:44 np0005535469 systemd[1]: libpod-conmon-f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87.scope: Deactivated successfully.
Nov 25 12:51:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:45 np0005535469 nova_compute[254092]: 2025-11-25 17:51:45.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:45 np0005535469 podman[455101]: 2025-11-25 17:51:45.860600125 +0000 UTC m=+0.060030845 container create 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:51:45 np0005535469 systemd[1]: Started libpod-conmon-8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa.scope.
Nov 25 12:51:45 np0005535469 podman[455101]: 2025-11-25 17:51:45.835199173 +0000 UTC m=+0.034629883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:51:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:51:45 np0005535469 podman[455101]: 2025-11-25 17:51:45.961017479 +0000 UTC m=+0.160448209 container init 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:51:45 np0005535469 podman[455101]: 2025-11-25 17:51:45.970690722 +0000 UTC m=+0.170121442 container start 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:51:45 np0005535469 podman[455101]: 2025-11-25 17:51:45.97541583 +0000 UTC m=+0.174846570 container attach 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:51:45 np0005535469 musing_davinci[455118]: 167 167
Nov 25 12:51:45 np0005535469 systemd[1]: libpod-8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa.scope: Deactivated successfully.
Nov 25 12:51:45 np0005535469 podman[455101]: 2025-11-25 17:51:45.978865485 +0000 UTC m=+0.178296215 container died 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:51:45 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0f7c184eab5a43cf268f16e90352d2ed20123f05cb93084c7aa991683c4c545f-merged.mount: Deactivated successfully.
Nov 25 12:51:46 np0005535469 podman[455101]: 2025-11-25 17:51:46.032160696 +0000 UTC m=+0.231591396 container remove 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:51:46 np0005535469 systemd[1]: libpod-conmon-8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa.scope: Deactivated successfully.
Nov 25 12:51:46 np0005535469 podman[455141]: 2025-11-25 17:51:46.248418583 +0000 UTC m=+0.044017030 container create 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:51:46 np0005535469 systemd[1]: Started libpod-conmon-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope.
Nov 25 12:51:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:51:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:46 np0005535469 podman[455141]: 2025-11-25 17:51:46.233798595 +0000 UTC m=+0.029397062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:51:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:46 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:51:46 np0005535469 podman[455141]: 2025-11-25 17:51:46.345722922 +0000 UTC m=+0.141321379 container init 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:51:46 np0005535469 podman[455141]: 2025-11-25 17:51:46.354301676 +0000 UTC m=+0.149900133 container start 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:51:46 np0005535469 podman[455141]: 2025-11-25 17:51:46.357152463 +0000 UTC m=+0.152750920 container attach 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:51:47 np0005535469 nova_compute[254092]: 2025-11-25 17:51:47.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]: {
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "osd_id": 1,
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "type": "bluestore"
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:    },
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "osd_id": 2,
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "type": "bluestore"
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:    },
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "osd_id": 0,
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:        "type": "bluestore"
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]:    }
Nov 25 12:51:47 np0005535469 brave_lederberg[455158]: }
Nov 25 12:51:47 np0005535469 systemd[1]: libpod-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope: Deactivated successfully.
Nov 25 12:51:47 np0005535469 systemd[1]: libpod-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope: Consumed 1.224s CPU time.
Nov 25 12:51:47 np0005535469 podman[455141]: 2025-11-25 17:51:47.571678296 +0000 UTC m=+1.367276773 container died 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:51:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f-merged.mount: Deactivated successfully.
Nov 25 12:51:47 np0005535469 podman[455141]: 2025-11-25 17:51:47.657278307 +0000 UTC m=+1.452876754 container remove 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:51:47 np0005535469 systemd[1]: libpod-conmon-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope: Deactivated successfully.
Nov 25 12:51:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:51:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:51:47 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:51:47 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:51:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 966f6b15-0798-4064-9f59-986d8d24e34d does not exist
Nov 25 12:51:47 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9a4109b7-42a7-4bf1-97a6-9f44e04c56e7 does not exist
Nov 25 12:51:47 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:51:48 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:51:49 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:50 np0005535469 nova_compute[254092]: 2025-11-25 17:51:50.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:51 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:51:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:51:52 np0005535469 nova_compute[254092]: 2025-11-25 17:51:52.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:51:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:51:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391902824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:51:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:51:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391902824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:51:55 np0005535469 nova_compute[254092]: 2025-11-25 17:51:55.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:51:57 np0005535469 nova_compute[254092]: 2025-11-25 17:51:57.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:51:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:00 np0005535469 podman[455256]: 2025-11-25 17:52:00.674013302 +0000 UTC m=+0.081463389 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:52:00 np0005535469 podman[455255]: 2025-11-25 17:52:00.716627042 +0000 UTC m=+0.128108488 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 12:52:00 np0005535469 podman[455257]: 2025-11-25 17:52:00.725673299 +0000 UTC m=+0.131723067 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:52:00 np0005535469 nova_compute[254092]: 2025-11-25 17:52:00.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:02 np0005535469 nova_compute[254092]: 2025-11-25 17:52:02.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:05 np0005535469 nova_compute[254092]: 2025-11-25 17:52:05.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:07 np0005535469 nova_compute[254092]: 2025-11-25 17:52:07.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:52:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:52:10 np0005535469 nova_compute[254092]: 2025-11-25 17:52:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:10 np0005535469 nova_compute[254092]: 2025-11-25 17:52:10.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:11 np0005535469 nova_compute[254092]: 2025-11-25 17:52:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:12 np0005535469 nova_compute[254092]: 2025-11-25 17:52:12.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:12 np0005535469 nova_compute[254092]: 2025-11-25 17:52:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:52:13.697 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:52:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:52:13.698 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:52:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:52:13.698 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:52:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:15 np0005535469 nova_compute[254092]: 2025-11-25 17:52:15.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:17 np0005535469 nova_compute[254092]: 2025-11-25 17:52:17.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:17 np0005535469 nova_compute[254092]: 2025-11-25 17:52:17.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:17 np0005535469 nova_compute[254092]: 2025-11-25 17:52:17.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:52:17 np0005535469 nova_compute[254092]: 2025-11-25 17:52:17.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:52:17 np0005535469 nova_compute[254092]: 2025-11-25 17:52:17.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:17 np0005535469 nova_compute[254092]: 2025-11-25 17:52:17.661 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:52:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.542 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.542 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.542 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.543 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:52:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:52:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514194135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:52:19 np0005535469 nova_compute[254092]: 2025-11-25 17:52:19.987 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:52:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.176 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.178 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3613MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.178 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.178 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.248 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.249 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.273 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:52:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:52:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419287978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.764 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.776 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.797 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.800 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.801 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:52:20 np0005535469 nova_compute[254092]: 2025-11-25 17:52:20.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:21 np0005535469 nova_compute[254092]: 2025-11-25 17:52:21.803 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:22 np0005535469 nova_compute[254092]: 2025-11-25 17:52:22.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:25 np0005535469 nova_compute[254092]: 2025-11-25 17:52:25.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:27 np0005535469 nova_compute[254092]: 2025-11-25 17:52:27.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:30 np0005535469 nova_compute[254092]: 2025-11-25 17:52:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:30 np0005535469 nova_compute[254092]: 2025-11-25 17:52:30.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:31 np0005535469 podman[455363]: 2025-11-25 17:52:31.680755779 +0000 UTC m=+0.082933169 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:52:31 np0005535469 podman[455362]: 2025-11-25 17:52:31.692739115 +0000 UTC m=+0.095435859 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:52:31 np0005535469 podman[455364]: 2025-11-25 17:52:31.728507689 +0000 UTC m=+0.125078687 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:52:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:32 np0005535469 nova_compute[254092]: 2025-11-25 17:52:32.510 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:33 np0005535469 nova_compute[254092]: 2025-11-25 17:52:33.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:52:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:35 np0005535469 nova_compute[254092]: 2025-11-25 17:52:35.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:37 np0005535469 nova_compute[254092]: 2025-11-25 17:52:37.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:52:40
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'volumes', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:52:40 np0005535469 nova_compute[254092]: 2025-11-25 17:52:40.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:40 np0005535469 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 12:52:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:42 np0005535469 nova_compute[254092]: 2025-11-25 17:52:42.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:45 np0005535469 nova_compute[254092]: 2025-11-25 17:52:45.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:47 np0005535469 nova_compute[254092]: 2025-11-25 17:52:47.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:52:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:52:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a0fa81ab-1844-4dd5-92f6-4f100751e09f does not exist
Nov 25 12:52:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4cc3f2f0-14bd-4dd5-b533-4227dbb0755d does not exist
Nov 25 12:52:48 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8382a0f2-8d6e-4f70-8e14-ff280ed7712d does not exist
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:52:48 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:52:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:52:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:52:49 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.711561891 +0000 UTC m=+0.058256427 container create a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:52:49 np0005535469 systemd[1]: Started libpod-conmon-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope.
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.681458862 +0000 UTC m=+0.028153448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:52:49 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.82430407 +0000 UTC m=+0.170998666 container init a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.833716537 +0000 UTC m=+0.180411083 container start a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.837922311 +0000 UTC m=+0.184616847 container attach a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:52:49 np0005535469 ecstatic_engelbart[455716]: 167 167
Nov 25 12:52:49 np0005535469 systemd[1]: libpod-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope: Deactivated successfully.
Nov 25 12:52:49 np0005535469 conmon[455716]: conmon a7ee95eeddfdfa1de295 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope/container/memory.events
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.843193065 +0000 UTC m=+0.189887611 container died a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:52:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8cf7bf8594c7de5fb4fc9b3634c66a8901dbe3cb351e481e3a3217c4af0eb4b9-merged.mount: Deactivated successfully.
Nov 25 12:52:49 np0005535469 podman[455700]: 2025-11-25 17:52:49.9036221 +0000 UTC m=+0.250316616 container remove a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:52:49 np0005535469 systemd[1]: libpod-conmon-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope: Deactivated successfully.
Nov 25 12:52:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:50 np0005535469 podman[455740]: 2025-11-25 17:52:50.109854554 +0000 UTC m=+0.069676848 container create 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.122188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170122259, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 873, "num_deletes": 254, "total_data_size": 1164844, "memory_usage": 1187336, "flush_reason": "Manual Compaction"}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170135463, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1153987, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79757, "largest_seqno": 80629, "table_properties": {"data_size": 1149585, "index_size": 2053, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9536, "raw_average_key_size": 19, "raw_value_size": 1140705, "raw_average_value_size": 2290, "num_data_blocks": 92, "num_entries": 498, "num_filter_entries": 498, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093092, "oldest_key_time": 1764093092, "file_creation_time": 1764093170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 13333 microseconds, and 9329 cpu microseconds.
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.135525) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1153987 bytes OK
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.135553) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.136848) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.136866) EVENT_LOG_v1 {"time_micros": 1764093170136859, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.136889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1160543, prev total WAL file size 1187031, number of live WAL files 2.
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.137565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373730' seq:0, type:0; will stop at (end)
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1126KB)], [188(9245KB)]
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170137691, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 10621878, "oldest_snapshot_seqno": -1}
Nov 25 12:52:50 np0005535469 systemd[1]: Started libpod-conmon-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope.
Nov 25 12:52:50 np0005535469 podman[455740]: 2025-11-25 17:52:50.086450137 +0000 UTC m=+0.046272461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:52:50 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:50 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9454 keys, 10519778 bytes, temperature: kUnknown
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170221893, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 10519778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10461573, "index_size": 33452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23685, "raw_key_size": 250351, "raw_average_key_size": 26, "raw_value_size": 10297591, "raw_average_value_size": 1089, "num_data_blocks": 1284, "num_entries": 9454, "num_filter_entries": 9454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.222190) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 10519778 bytes
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.223693) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.0 rd, 124.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.0 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(18.3) write-amplify(9.1) OK, records in: 9977, records dropped: 523 output_compression: NoCompression
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.223721) EVENT_LOG_v1 {"time_micros": 1764093170223711, "job": 118, "event": "compaction_finished", "compaction_time_micros": 84276, "compaction_time_cpu_micros": 44572, "output_level": 6, "num_output_files": 1, "total_output_size": 10519778, "num_input_records": 9977, "num_output_records": 9454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170224023, "job": 118, "event": "table_file_deletion", "file_number": 190}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170225774, "job": 118, "event": "table_file_deletion", "file_number": 188}
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.137462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:52:50 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:52:50 np0005535469 podman[455740]: 2025-11-25 17:52:50.231956089 +0000 UTC m=+0.191778453 container init 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:52:50 np0005535469 podman[455740]: 2025-11-25 17:52:50.250743559 +0000 UTC m=+0.210565883 container start 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 12:52:50 np0005535469 podman[455740]: 2025-11-25 17:52:50.255196621 +0000 UTC m=+0.215018935 container attach 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:52:50 np0005535469 nova_compute[254092]: 2025-11-25 17:52:50.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:52:51 np0005535469 objective_colden[455756]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:52:51 np0005535469 objective_colden[455756]: --> relative data size: 1.0
Nov 25 12:52:51 np0005535469 objective_colden[455756]: --> All data devices are unavailable
Nov 25 12:52:51 np0005535469 systemd[1]: libpod-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope: Deactivated successfully.
Nov 25 12:52:51 np0005535469 systemd[1]: libpod-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope: Consumed 1.318s CPU time.
Nov 25 12:52:51 np0005535469 podman[455740]: 2025-11-25 17:52:51.621731133 +0000 UTC m=+1.581553457 container died 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:52:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945-merged.mount: Deactivated successfully.
Nov 25 12:52:51 np0005535469 podman[455740]: 2025-11-25 17:52:51.705454843 +0000 UTC m=+1.665277167 container remove 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:52:51 np0005535469 systemd[1]: libpod-conmon-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope: Deactivated successfully.
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:52:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:52:52 np0005535469 nova_compute[254092]: 2025-11-25 17:52:52.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.712867717 +0000 UTC m=+0.054557365 container create d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:52:52 np0005535469 systemd[1]: Started libpod-conmon-d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9.scope.
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.683858588 +0000 UTC m=+0.025548246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:52:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.828495626 +0000 UTC m=+0.170185284 container init d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.843998327 +0000 UTC m=+0.185687975 container start d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.848309075 +0000 UTC m=+0.189998933 container attach d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:52:52 np0005535469 dazzling_kirch[455957]: 167 167
Nov 25 12:52:52 np0005535469 systemd[1]: libpod-d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9.scope: Deactivated successfully.
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.852498719 +0000 UTC m=+0.194188357 container died d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:52:52 np0005535469 systemd[1]: var-lib-containers-storage-overlay-050116956e7cd819ad3874478d6c98a1ea01125551ce43662ae4367bc213f188-merged.mount: Deactivated successfully.
Nov 25 12:52:52 np0005535469 podman[455940]: 2025-11-25 17:52:52.909008837 +0000 UTC m=+0.250698475 container remove d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:52:52 np0005535469 systemd[1]: libpod-conmon-d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9.scope: Deactivated successfully.
Nov 25 12:52:53 np0005535469 podman[455981]: 2025-11-25 17:52:53.167457854 +0000 UTC m=+0.072722771 container create 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 12:52:53 np0005535469 podman[455981]: 2025-11-25 17:52:53.135828092 +0000 UTC m=+0.041093019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:52:53 np0005535469 systemd[1]: Started libpod-conmon-48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58.scope.
Nov 25 12:52:53 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:53 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:53 np0005535469 podman[455981]: 2025-11-25 17:52:53.307987719 +0000 UTC m=+0.213252676 container init 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:52:53 np0005535469 podman[455981]: 2025-11-25 17:52:53.320218472 +0000 UTC m=+0.225483389 container start 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:52:53 np0005535469 podman[455981]: 2025-11-25 17:52:53.325835025 +0000 UTC m=+0.231099952 container attach 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:52:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3903: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]: {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:    "0": [
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:        {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "devices": [
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "/dev/loop3"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            ],
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_name": "ceph_lv0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_size": "21470642176",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "name": "ceph_lv0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "tags": {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cluster_name": "ceph",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.crush_device_class": "",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.encrypted": "0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osd_id": "0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.type": "block",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.vdo": "0"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            },
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "type": "block",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "vg_name": "ceph_vg0"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:        }
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:    ],
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:    "1": [
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:        {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "devices": [
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "/dev/loop4"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            ],
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_name": "ceph_lv1",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_size": "21470642176",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "name": "ceph_lv1",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "tags": {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cluster_name": "ceph",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.crush_device_class": "",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.encrypted": "0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osd_id": "1",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.type": "block",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.vdo": "0"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            },
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "type": "block",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "vg_name": "ceph_vg1"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:        }
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:    ],
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:    "2": [
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:        {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "devices": [
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "/dev/loop5"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            ],
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_name": "ceph_lv2",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_size": "21470642176",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "name": "ceph_lv2",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "tags": {
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.cluster_name": "ceph",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.crush_device_class": "",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.encrypted": "0",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osd_id": "2",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.type": "block",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:                "ceph.vdo": "0"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            },
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "type": "block",
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:            "vg_name": "ceph_vg2"
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:        }
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]:    ]
Nov 25 12:52:54 np0005535469 kind_zhukovsky[455998]: }
Nov 25 12:52:54 np0005535469 systemd[1]: libpod-48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58.scope: Deactivated successfully.
Nov 25 12:52:54 np0005535469 podman[455981]: 2025-11-25 17:52:54.143169666 +0000 UTC m=+1.048434593 container died 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:52:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51-merged.mount: Deactivated successfully.
Nov 25 12:52:54 np0005535469 podman[455981]: 2025-11-25 17:52:54.218540858 +0000 UTC m=+1.123805745 container remove 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:52:54 np0005535469 systemd[1]: libpod-conmon-48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58.scope: Deactivated successfully.
Nov 25 12:52:54 np0005535469 podman[456159]: 2025-11-25 17:52:54.931005653 +0000 UTC m=+0.052231702 container create 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:52:54 np0005535469 systemd[1]: Started libpod-conmon-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope.
Nov 25 12:52:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:52:54 np0005535469 podman[456159]: 2025-11-25 17:52:54.90441314 +0000 UTC m=+0.025639279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:52:55 np0005535469 podman[456159]: 2025-11-25 17:52:55.012511632 +0000 UTC m=+0.133737771 container init 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:52:55 np0005535469 podman[456159]: 2025-11-25 17:52:55.021791695 +0000 UTC m=+0.143017744 container start 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:52:55 np0005535469 podman[456159]: 2025-11-25 17:52:55.025174907 +0000 UTC m=+0.146401056 container attach 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:52:55 np0005535469 systemd[1]: libpod-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope: Deactivated successfully.
Nov 25 12:52:55 np0005535469 distracted_pare[456175]: 167 167
Nov 25 12:52:55 np0005535469 conmon[456175]: conmon 23c52e1c999d53c0828e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope/container/memory.events
Nov 25 12:52:55 np0005535469 podman[456159]: 2025-11-25 17:52:55.029961238 +0000 UTC m=+0.151187327 container died 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:52:55 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e9c1147e321e5bd6f7509fb6afb78bb84aad29379a6883656609e28a092722ed-merged.mount: Deactivated successfully.
Nov 25 12:52:55 np0005535469 podman[456159]: 2025-11-25 17:52:55.086502497 +0000 UTC m=+0.207728546 container remove 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:52:55 np0005535469 systemd[1]: libpod-conmon-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope: Deactivated successfully.
Nov 25 12:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:52:55 np0005535469 podman[456201]: 2025-11-25 17:52:55.334594 +0000 UTC m=+0.071548438 container create 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:52:55 np0005535469 systemd[1]: Started libpod-conmon-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope.
Nov 25 12:52:55 np0005535469 podman[456201]: 2025-11-25 17:52:55.310439683 +0000 UTC m=+0.047394191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:52:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:52:55 np0005535469 podman[456201]: 2025-11-25 17:52:55.446015834 +0000 UTC m=+0.182970292 container init 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:52:55 np0005535469 podman[456201]: 2025-11-25 17:52:55.461258028 +0000 UTC m=+0.198212466 container start 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:52:55 np0005535469 podman[456201]: 2025-11-25 17:52:55.465466783 +0000 UTC m=+0.202421231 container attach 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:52:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34646103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:52:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:52:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34646103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:52:55 np0005535469 nova_compute[254092]: 2025-11-25 17:52:55.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:52:56 np0005535469 jovial_pike[456219]: {
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "osd_id": 1,
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "type": "bluestore"
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:    },
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "osd_id": 2,
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "type": "bluestore"
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:    },
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "osd_id": 0,
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:        "type": "bluestore"
Nov 25 12:52:56 np0005535469 jovial_pike[456219]:    }
Nov 25 12:52:56 np0005535469 jovial_pike[456219]: }
Nov 25 12:52:56 np0005535469 systemd[1]: libpod-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope: Deactivated successfully.
Nov 25 12:52:56 np0005535469 podman[456201]: 2025-11-25 17:52:56.635588028 +0000 UTC m=+1.372542456 container died 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:52:56 np0005535469 systemd[1]: libpod-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope: Consumed 1.182s CPU time.
Nov 25 12:52:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe-merged.mount: Deactivated successfully.
Nov 25 12:52:56 np0005535469 podman[456201]: 2025-11-25 17:52:56.697442042 +0000 UTC m=+1.434396470 container remove 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 12:52:56 np0005535469 systemd[1]: libpod-conmon-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope: Deactivated successfully.
Nov 25 12:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:52:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:52:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:52:56 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:52:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 9bffd952-46c8-41cf-9b1e-8e8e77866204 does not exist
Nov 25 12:52:56 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 501a36e1-1941-4fb4-9186-4e51fc2913ed does not exist
Nov 25 12:52:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:52:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:52:57 np0005535469 nova_compute[254092]: 2025-11-25 17:52:57.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:52:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3905: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:00 np0005535469 nova_compute[254092]: 2025-11-25 17:53:00.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:02 np0005535469 nova_compute[254092]: 2025-11-25 17:53:02.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:02 np0005535469 podman[456316]: 2025-11-25 17:53:02.704118037 +0000 UTC m=+0.102075481 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 12:53:02 np0005535469 podman[456317]: 2025-11-25 17:53:02.724704017 +0000 UTC m=+0.120011759 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 12:53:02 np0005535469 podman[456318]: 2025-11-25 17:53:02.759725811 +0000 UTC m=+0.156825951 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 12:53:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:05 np0005535469 nova_compute[254092]: 2025-11-25 17:53:05.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:07 np0005535469 nova_compute[254092]: 2025-11-25 17:53:07.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:53:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:53:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:10 np0005535469 nova_compute[254092]: 2025-11-25 17:53:10.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:11 np0005535469 nova_compute[254092]: 2025-11-25 17:53:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:12 np0005535469 nova_compute[254092]: 2025-11-25 17:53:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:12 np0005535469 nova_compute[254092]: 2025-11-25 17:53:12.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:13 np0005535469 nova_compute[254092]: 2025-11-25 17:53:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:53:13.699 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:53:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:53:13.699 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:53:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:53:13.699 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:53:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:15 np0005535469 nova_compute[254092]: 2025-11-25 17:53:15.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:17 np0005535469 nova_compute[254092]: 2025-11-25 17:53:17.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:18 np0005535469 nova_compute[254092]: 2025-11-25 17:53:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:18 np0005535469 nova_compute[254092]: 2025-11-25 17:53:18.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:53:18 np0005535469 nova_compute[254092]: 2025-11-25 17:53:18.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:53:18 np0005535469 nova_compute[254092]: 2025-11-25 17:53:18.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.528 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:53:19 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:53:19 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147502024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:53:19 np0005535469 nova_compute[254092]: 2025-11-25 17:53:19.989 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:53:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3916: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.190 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.192 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3623MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.192 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.192 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.260 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.260 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.285 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:53:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:53:20 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623878651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.728 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.735 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.747 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.749 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.749 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:53:20 np0005535469 nova_compute[254092]: 2025-11-25 17:53:20.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:21 np0005535469 nova_compute[254092]: 2025-11-25 17:53:21.750 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:21 np0005535469 nova_compute[254092]: 2025-11-25 17:53:21.751 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:21 np0005535469 nova_compute[254092]: 2025-11-25 17:53:21.751 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:53:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:22 np0005535469 nova_compute[254092]: 2025-11-25 17:53:22.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:25 np0005535469 nova_compute[254092]: 2025-11-25 17:53:25.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:27 np0005535469 nova_compute[254092]: 2025-11-25 17:53:27.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:30 np0005535469 nova_compute[254092]: 2025-11-25 17:53:30.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:53:30 np0005535469 nova_compute[254092]: 2025-11-25 17:53:30.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:32 np0005535469 nova_compute[254092]: 2025-11-25 17:53:32.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:33 np0005535469 podman[456421]: 2025-11-25 17:53:33.662554087 +0000 UTC m=+0.064832165 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:53:33 np0005535469 podman[456420]: 2025-11-25 17:53:33.688921495 +0000 UTC m=+0.089793456 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:53:33 np0005535469 podman[456422]: 2025-11-25 17:53:33.708010645 +0000 UTC m=+0.101668479 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 12:53:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:35 np0005535469 nova_compute[254092]: 2025-11-25 17:53:35.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:37 np0005535469 nova_compute[254092]: 2025-11-25 17:53:37.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3926: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:53:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:53:40
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr']
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:53:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 7200.2 total, 600.0 interval
                                              Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                              Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.02 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                              Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:53:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:53:40 np0005535469 nova_compute[254092]: 2025-11-25 17:53:40.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3927: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:42 np0005535469 nova_compute[254092]: 2025-11-25 17:53:42.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:45 np0005535469 nova_compute[254092]: 2025-11-25 17:53:45.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:47 np0005535469 nova_compute[254092]: 2025-11-25 17:53:47.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:53:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 7201.2 total, 600.0 interval
                                              Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                              Cumulative WAL: 47K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                              Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:53:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:50 np0005535469 nova_compute[254092]: 2025-11-25 17:53:50.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3932: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:53:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:53:52 np0005535469 nova_compute[254092]: 2025-11-25 17:53:52.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:53:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3982988324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:53:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:53:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3982988324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:53:55 np0005535469 nova_compute[254092]: 2025-11-25 17:53:55.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3934: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:53:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 62cb84fc-1604-4bb8-a272-2570eaf7ecef does not exist
Nov 25 12:53:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 265d3ce0-41dc-4382-a931-cfd77324b3f3 does not exist
Nov 25 12:53:57 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b3a8c510-d8f3-400d-938b-ea64f9ba8cbf does not exist
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:53:57 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:53:57 np0005535469 nova_compute[254092]: 2025-11-25 17:53:57.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:53:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.443599199 +0000 UTC m=+0.039402450 container create 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:53:58 np0005535469 systemd[1]: Started libpod-conmon-2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406.scope.
Nov 25 12:53:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.425587961 +0000 UTC m=+0.021391232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.523275521 +0000 UTC m=+0.119078782 container init 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.536429448 +0000 UTC m=+0.132232689 container start 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.541725182 +0000 UTC m=+0.137528443 container attach 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 12:53:58 np0005535469 frosty_almeida[456769]: 167 167
Nov 25 12:53:58 np0005535469 systemd[1]: libpod-2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406.scope: Deactivated successfully.
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.544122677 +0000 UTC m=+0.139925928 container died 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:53:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9a17e9a14b1ed0007aac89e941baecba1cf53b22578b218afbed794313aaef79-merged.mount: Deactivated successfully.
Nov 25 12:53:58 np0005535469 podman[456752]: 2025-11-25 17:53:58.587820493 +0000 UTC m=+0.183623744 container remove 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:53:58 np0005535469 systemd[1]: libpod-conmon-2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406.scope: Deactivated successfully.
Nov 25 12:53:58 np0005535469 podman[456795]: 2025-11-25 17:53:58.841137526 +0000 UTC m=+0.062432145 container create aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:53:58 np0005535469 systemd[1]: Started libpod-conmon-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope.
Nov 25 12:53:58 np0005535469 podman[456795]: 2025-11-25 17:53:58.819896499 +0000 UTC m=+0.041191128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:53:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:53:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:53:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:53:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:53:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:53:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:53:58 np0005535469 podman[456795]: 2025-11-25 17:53:58.9481602 +0000 UTC m=+0.169454799 container init aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:53:58 np0005535469 podman[456795]: 2025-11-25 17:53:58.963405654 +0000 UTC m=+0.184700253 container start aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 12:53:58 np0005535469 podman[456795]: 2025-11-25 17:53:58.966829627 +0000 UTC m=+0.188124216 container attach aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:54:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:00 np0005535469 gallant_khorana[456812]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:54:00 np0005535469 gallant_khorana[456812]: --> relative data size: 1.0
Nov 25 12:54:00 np0005535469 gallant_khorana[456812]: --> All data devices are unavailable
Nov 25 12:54:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:00 np0005535469 systemd[1]: libpod-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope: Deactivated successfully.
Nov 25 12:54:00 np0005535469 podman[456795]: 2025-11-25 17:54:00.178756231 +0000 UTC m=+1.400050900 container died aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:54:00 np0005535469 systemd[1]: libpod-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope: Consumed 1.168s CPU time.
Nov 25 12:54:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2-merged.mount: Deactivated successfully.
Nov 25 12:54:00 np0005535469 podman[456795]: 2025-11-25 17:54:00.2693639 +0000 UTC m=+1.490658509 container remove aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:54:00 np0005535469 systemd[1]: libpod-conmon-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope: Deactivated successfully.
Nov 25 12:54:00 np0005535469 nova_compute[254092]: 2025-11-25 17:54:00.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.0591314 +0000 UTC m=+0.052558427 container create 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 12:54:01 np0005535469 systemd[1]: Started libpod-conmon-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope.
Nov 25 12:54:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.039341323 +0000 UTC m=+0.032768330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.153673134 +0000 UTC m=+0.147100201 container init 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.160610073 +0000 UTC m=+0.154037090 container start 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.167910642 +0000 UTC m=+0.161337669 container attach 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:54:01 np0005535469 admiring_fermi[457012]: 167 167
Nov 25 12:54:01 np0005535469 systemd[1]: libpod-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope: Deactivated successfully.
Nov 25 12:54:01 np0005535469 conmon[457012]: conmon 82b9bcef644d15a38017 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope/container/memory.events
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.171464808 +0000 UTC m=+0.164891805 container died 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:54:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1c1152bea6dcb4e1f4ce5114eecd8193713508195d4b8474df440d86b55a01d6-merged.mount: Deactivated successfully.
Nov 25 12:54:01 np0005535469 podman[456995]: 2025-11-25 17:54:01.212247144 +0000 UTC m=+0.205674131 container remove 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:54:01 np0005535469 systemd[1]: libpod-conmon-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope: Deactivated successfully.
Nov 25 12:54:01 np0005535469 podman[457037]: 2025-11-25 17:54:01.43911317 +0000 UTC m=+0.071784388 container create 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 12:54:01 np0005535469 systemd[1]: Started libpod-conmon-82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106.scope.
Nov 25 12:54:01 np0005535469 podman[457037]: 2025-11-25 17:54:01.411044718 +0000 UTC m=+0.043715986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:54:01 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:54:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:01 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:01 np0005535469 podman[457037]: 2025-11-25 17:54:01.538188289 +0000 UTC m=+0.170859557 container init 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:54:01 np0005535469 podman[457037]: 2025-11-25 17:54:01.553337649 +0000 UTC m=+0.186008847 container start 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 12:54:01 np0005535469 podman[457037]: 2025-11-25 17:54:01.557494873 +0000 UTC m=+0.190166141 container attach 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:54:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]: {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:    "0": [
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:        {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "devices": [
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "/dev/loop3"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            ],
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_name": "ceph_lv0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_size": "21470642176",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "name": "ceph_lv0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "tags": {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cluster_name": "ceph",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.crush_device_class": "",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.encrypted": "0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osd_id": "0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.type": "block",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.vdo": "0"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            },
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "type": "block",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "vg_name": "ceph_vg0"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:        }
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:    ],
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:    "1": [
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:        {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "devices": [
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "/dev/loop4"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            ],
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_name": "ceph_lv1",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_size": "21470642176",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "name": "ceph_lv1",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "tags": {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cluster_name": "ceph",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.crush_device_class": "",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.encrypted": "0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osd_id": "1",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.type": "block",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.vdo": "0"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            },
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "type": "block",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "vg_name": "ceph_vg1"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:        }
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:    ],
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:    "2": [
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:        {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "devices": [
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "/dev/loop5"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            ],
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_name": "ceph_lv2",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_size": "21470642176",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "name": "ceph_lv2",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "tags": {
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.cluster_name": "ceph",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.crush_device_class": "",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.encrypted": "0",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osd_id": "2",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.type": "block",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:                "ceph.vdo": "0"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            },
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "type": "block",
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:            "vg_name": "ceph_vg2"
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:        }
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]:    ]
Nov 25 12:54:02 np0005535469 zealous_gauss[457053]: }
Nov 25 12:54:02 np0005535469 systemd[1]: libpod-82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106.scope: Deactivated successfully.
Nov 25 12:54:02 np0005535469 podman[457037]: 2025-11-25 17:54:02.357707505 +0000 UTC m=+0.990378703 container died 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 12:54:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7-merged.mount: Deactivated successfully.
Nov 25 12:54:02 np0005535469 podman[457037]: 2025-11-25 17:54:02.419713488 +0000 UTC m=+1.052384676 container remove 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:54:02 np0005535469 systemd[1]: libpod-conmon-82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106.scope: Deactivated successfully.
Nov 25 12:54:02 np0005535469 nova_compute[254092]: 2025-11-25 17:54:02.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.038756065 +0000 UTC m=+0.044726325 container create e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 12:54:03 np0005535469 systemd[1]: Started libpod-conmon-e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb.scope.
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.021059065 +0000 UTC m=+0.027029335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:54:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.144693179 +0000 UTC m=+0.150663499 container init e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.15246117 +0000 UTC m=+0.158431450 container start e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.156198652 +0000 UTC m=+0.162168912 container attach e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:54:03 np0005535469 friendly_blackwell[457230]: 167 167
Nov 25 12:54:03 np0005535469 systemd[1]: libpod-e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb.scope: Deactivated successfully.
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.164061075 +0000 UTC m=+0.170031375 container died e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:54:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4ae33584bb82086cbaebcc64dd7a5e02203614bc904af549444cbdc1a308a244-merged.mount: Deactivated successfully.
Nov 25 12:54:03 np0005535469 podman[457214]: 2025-11-25 17:54:03.208938933 +0000 UTC m=+0.214909233 container remove e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:54:03 np0005535469 systemd[1]: libpod-conmon-e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb.scope: Deactivated successfully.
Nov 25 12:54:03 np0005535469 podman[457254]: 2025-11-25 17:54:03.451731611 +0000 UTC m=+0.071115661 container create 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:54:03 np0005535469 systemd[1]: Started libpod-conmon-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope.
Nov 25 12:54:03 np0005535469 podman[457254]: 2025-11-25 17:54:03.430508985 +0000 UTC m=+0.049893065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:54:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:54:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:03 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:54:03 np0005535469 podman[457254]: 2025-11-25 17:54:03.570426502 +0000 UTC m=+0.189810622 container init 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:54:03 np0005535469 podman[457254]: 2025-11-25 17:54:03.58621937 +0000 UTC m=+0.205603460 container start 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:54:03 np0005535469 podman[457254]: 2025-11-25 17:54:03.590066084 +0000 UTC m=+0.209450184 container attach 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 12:54:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:04 np0005535469 podman[457297]: 2025-11-25 17:54:04.660966812 +0000 UTC m=+0.071131151 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:54:04 np0005535469 eager_perlman[457271]: {
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "osd_id": 1,
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "type": "bluestore"
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:    },
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "osd_id": 2,
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "type": "bluestore"
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:    },
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "osd_id": 0,
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:        "type": "bluestore"
Nov 25 12:54:04 np0005535469 eager_perlman[457271]:    }
Nov 25 12:54:04 np0005535469 eager_perlman[457271]: }
Nov 25 12:54:04 np0005535469 podman[457296]: 2025-11-25 17:54:04.67452567 +0000 UTC m=+0.084941225 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 12:54:04 np0005535469 systemd[1]: libpod-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope: Deactivated successfully.
Nov 25 12:54:04 np0005535469 systemd[1]: libpod-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope: Consumed 1.115s CPU time.
Nov 25 12:54:04 np0005535469 podman[457300]: 2025-11-25 17:54:04.733522561 +0000 UTC m=+0.139238398 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:54:04 np0005535469 podman[457366]: 2025-11-25 17:54:04.756779173 +0000 UTC m=+0.033615753 container died 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:54:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d-merged.mount: Deactivated successfully.
Nov 25 12:54:04 np0005535469 podman[457366]: 2025-11-25 17:54:04.815019342 +0000 UTC m=+0.091855942 container remove 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:54:04 np0005535469 systemd[1]: libpod-conmon-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope: Deactivated successfully.
Nov 25 12:54:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:54:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:54:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:54:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:54:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2f5fcc3e-9b86-4589-a2f2-a45fe259260a does not exist
Nov 25 12:54:04 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b9410ac0-5321-4fe0-9df5-401ba8989727 does not exist
Nov 25 12:54:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:54:05 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:54:05 np0005535469 nova_compute[254092]: 2025-11-25 17:54:05.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 12:54:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7201.5 total, 600.0 interval#012Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 12:54:07 np0005535469 nova_compute[254092]: 2025-11-25 17:54:07.846 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3940: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3941: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:54:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:54:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:10 np0005535469 nova_compute[254092]: 2025-11-25 17:54:10.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3942: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:12 np0005535469 nova_compute[254092]: 2025-11-25 17:54:12.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:13 np0005535469 nova_compute[254092]: 2025-11-25 17:54:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:54:13.700 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:54:13.701 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:54:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:54:13.701 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:54:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3943: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:14 np0005535469 nova_compute[254092]: 2025-11-25 17:54:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:15 np0005535469 nova_compute[254092]: 2025-11-25 17:54:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:15 np0005535469 nova_compute[254092]: 2025-11-25 17:54:15.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3944: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:17 np0005535469 nova_compute[254092]: 2025-11-25 17:54:17.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3945: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:19 np0005535469 nova_compute[254092]: 2025-11-25 17:54:19.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:19 np0005535469 nova_compute[254092]: 2025-11-25 17:54:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:19 np0005535469 nova_compute[254092]: 2025-11-25 17:54:19.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:54:19 np0005535469 nova_compute[254092]: 2025-11-25 17:54:19.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:54:19 np0005535469 nova_compute[254092]: 2025-11-25 17:54:19.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:54:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3946: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:54:20 np0005535469 nova_compute[254092]: 2025-11-25 17:54:20.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:54:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942230637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.034 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.257 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.258 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.258 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.259 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.328 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.329 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.349 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:54:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:54:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874217884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.773 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.783 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.813 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.818 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:54:21 np0005535469 nova_compute[254092]: 2025-11-25 17:54:21.818 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:54:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3947: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:22 np0005535469 nova_compute[254092]: 2025-11-25 17:54:22.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:23 np0005535469 nova_compute[254092]: 2025-11-25 17:54:23.820 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:23 np0005535469 nova_compute[254092]: 2025-11-25 17:54:23.821 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:23 np0005535469 nova_compute[254092]: 2025-11-25 17:54:23.821 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:54:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3948: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 12:54:25 np0005535469 nova_compute[254092]: 2025-11-25 17:54:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:25 np0005535469 nova_compute[254092]: 2025-11-25 17:54:25.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 12:54:25 np0005535469 nova_compute[254092]: 2025-11-25 17:54:25.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3949: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:28 np0005535469 nova_compute[254092]: 2025-11-25 17:54:28.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3950: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3951: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:30 np0005535469 nova_compute[254092]: 2025-11-25 17:54:30.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3952: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:32 np0005535469 nova_compute[254092]: 2025-11-25 17:54:32.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:33 np0005535469 nova_compute[254092]: 2025-11-25 17:54:33.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:33 np0005535469 nova_compute[254092]: 2025-11-25 17:54:33.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3953: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:35 np0005535469 podman[457478]: 2025-11-25 17:54:35.650557131 +0000 UTC m=+0.067877903 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 12:54:35 np0005535469 podman[457480]: 2025-11-25 17:54:35.672445084 +0000 UTC m=+0.083165167 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 12:54:35 np0005535469 podman[457479]: 2025-11-25 17:54:35.681126231 +0000 UTC m=+0.087795704 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:54:35 np0005535469 nova_compute[254092]: 2025-11-25 17:54:35.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3954: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:36 np0005535469 nova_compute[254092]: 2025-11-25 17:54:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:36 np0005535469 nova_compute[254092]: 2025-11-25 17:54:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:54:36 np0005535469 nova_compute[254092]: 2025-11-25 17:54:36.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:54:38 np0005535469 nova_compute[254092]: 2025-11-25 17:54:38.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3955: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3956: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:54:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:54:40
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'vms', 'images', 'default.rgw.meta', 'volumes']
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:54:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:54:40 np0005535469 nova_compute[254092]: 2025-11-25 17:54:40.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3957: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:43 np0005535469 nova_compute[254092]: 2025-11-25 17:54:43.081 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3958: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:45 np0005535469 nova_compute[254092]: 2025-11-25 17:54:45.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3959: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:47 np0005535469 nova_compute[254092]: 2025-11-25 17:54:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:54:48 np0005535469 nova_compute[254092]: 2025-11-25 17:54:48.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3960: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3961: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:50 np0005535469 nova_compute[254092]: 2025-11-25 17:54:50.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3962: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:54:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:54:53 np0005535469 nova_compute[254092]: 2025-11-25 17:54:53.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3963: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:54:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1299631687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:54:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:54:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1299631687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:54:55 np0005535469 nova_compute[254092]: 2025-11-25 17:54:55.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3964: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:54:58 np0005535469 nova_compute[254092]: 2025-11-25 17:54:58.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:54:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3965: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3966: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:00 np0005535469 nova_compute[254092]: 2025-11-25 17:55:00.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:03 np0005535469 nova_compute[254092]: 2025-11-25 17:55:03.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3968: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:05 np0005535469 nova_compute[254092]: 2025-11-25 17:55:05.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:55:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:55:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:55:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:55:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f5956c67-45df-45de-aa6f-f28ba6312299 does not exist
Nov 25 12:55:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ae69cf78-d17e-4865-b027-ed5457fe5b95 does not exist
Nov 25 12:55:06 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 8b2fea4a-6c0b-4ec2-933b-b57664a45961 does not exist
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:55:06 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:55:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3969: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:06 np0005535469 podman[457703]: 2025-11-25 17:55:06.1928828 +0000 UTC m=+0.061333635 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 12:55:06 np0005535469 podman[457702]: 2025-11-25 17:55:06.210856448 +0000 UTC m=+0.085556973 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 12:55:06 np0005535469 podman[457704]: 2025-11-25 17:55:06.232443613 +0000 UTC m=+0.104193348 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.687244375 +0000 UTC m=+0.042778413 container create 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:55:06 np0005535469 systemd[1]: Started libpod-conmon-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope.
Nov 25 12:55:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.671375294 +0000 UTC m=+0.026909352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.772549529 +0000 UTC m=+0.128083597 container init 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.781403769 +0000 UTC m=+0.136937807 container start 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.784760971 +0000 UTC m=+0.140295009 container attach 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:55:06 np0005535469 nostalgic_kilby[457897]: 167 167
Nov 25 12:55:06 np0005535469 systemd[1]: libpod-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope: Deactivated successfully.
Nov 25 12:55:06 np0005535469 conmon[457897]: conmon 135e00fac1613096951f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope/container/memory.events
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.789705404 +0000 UTC m=+0.145239442 container died 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:55:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3185c74086fcc2745072874c91e1d428a7e2ac9fb1c95e90a4438165c679ec9a-merged.mount: Deactivated successfully.
Nov 25 12:55:06 np0005535469 podman[457880]: 2025-11-25 17:55:06.833497363 +0000 UTC m=+0.189031401 container remove 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:55:06 np0005535469 systemd[1]: libpod-conmon-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope: Deactivated successfully.
Nov 25 12:55:07 np0005535469 podman[457920]: 2025-11-25 17:55:07.058349164 +0000 UTC m=+0.047778748 container create b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:55:07 np0005535469 systemd[1]: Started libpod-conmon-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope.
Nov 25 12:55:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:55:07 np0005535469 podman[457920]: 2025-11-25 17:55:07.04124749 +0000 UTC m=+0.030677104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:55:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:07 np0005535469 podman[457920]: 2025-11-25 17:55:07.156076456 +0000 UTC m=+0.145506060 container init b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:55:07 np0005535469 podman[457920]: 2025-11-25 17:55:07.163334143 +0000 UTC m=+0.152763727 container start b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:55:07 np0005535469 podman[457920]: 2025-11-25 17:55:07.166366695 +0000 UTC m=+0.155796299 container attach b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:55:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3970: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:08 np0005535469 nova_compute[254092]: 2025-11-25 17:55:08.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:08 np0005535469 friendly_galileo[457936]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:55:08 np0005535469 friendly_galileo[457936]: --> relative data size: 1.0
Nov 25 12:55:08 np0005535469 friendly_galileo[457936]: --> All data devices are unavailable
Nov 25 12:55:08 np0005535469 systemd[1]: libpod-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope: Deactivated successfully.
Nov 25 12:55:08 np0005535469 systemd[1]: libpod-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope: Consumed 1.134s CPU time.
Nov 25 12:55:08 np0005535469 podman[457920]: 2025-11-25 17:55:08.416411594 +0000 UTC m=+1.405841178 container died b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 12:55:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb-merged.mount: Deactivated successfully.
Nov 25 12:55:08 np0005535469 podman[457920]: 2025-11-25 17:55:08.479958299 +0000 UTC m=+1.469387883 container remove b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:55:08 np0005535469 systemd[1]: libpod-conmon-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope: Deactivated successfully.
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.284139979 +0000 UTC m=+0.061019147 container create fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:55:09 np0005535469 systemd[1]: Started libpod-conmon-fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1.scope.
Nov 25 12:55:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.266599973 +0000 UTC m=+0.043479131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.375724254 +0000 UTC m=+0.152603462 container init fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.383764942 +0000 UTC m=+0.160644120 container start fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.386460095 +0000 UTC m=+0.163339273 container attach fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:55:09 np0005535469 silly_goldstine[458134]: 167 167
Nov 25 12:55:09 np0005535469 systemd[1]: libpod-fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1.scope: Deactivated successfully.
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.3914175 +0000 UTC m=+0.168296658 container died fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:55:09 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ed1d316139c266c1e9885b9065292368505a723594df76ffaaea78f00fd2f90e-merged.mount: Deactivated successfully.
Nov 25 12:55:09 np0005535469 podman[458118]: 2025-11-25 17:55:09.431027214 +0000 UTC m=+0.207906412 container remove fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:55:09 np0005535469 systemd[1]: libpod-conmon-fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1.scope: Deactivated successfully.
Nov 25 12:55:09 np0005535469 podman[458158]: 2025-11-25 17:55:09.656087711 +0000 UTC m=+0.045428983 container create e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 12:55:09 np0005535469 systemd[1]: Started libpod-conmon-e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e.scope.
Nov 25 12:55:09 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:55:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:09 np0005535469 podman[458158]: 2025-11-25 17:55:09.636677295 +0000 UTC m=+0.026018557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:55:09 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:09 np0005535469 podman[458158]: 2025-11-25 17:55:09.744668935 +0000 UTC m=+0.134010227 container init e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 12:55:09 np0005535469 podman[458158]: 2025-11-25 17:55:09.755610572 +0000 UTC m=+0.144951814 container start e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:55:09 np0005535469 podman[458158]: 2025-11-25 17:55:09.758779568 +0000 UTC m=+0.148120800 container attach e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3971: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:55:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:55:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]: {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:    "0": [
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:        {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "devices": [
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "/dev/loop3"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            ],
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_name": "ceph_lv0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_size": "21470642176",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "name": "ceph_lv0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "tags": {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cluster_name": "ceph",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.crush_device_class": "",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.encrypted": "0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osd_id": "0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.type": "block",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.vdo": "0"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            },
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "type": "block",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "vg_name": "ceph_vg0"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:        }
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:    ],
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:    "1": [
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:        {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "devices": [
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "/dev/loop4"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            ],
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_name": "ceph_lv1",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_size": "21470642176",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "name": "ceph_lv1",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "tags": {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cluster_name": "ceph",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.crush_device_class": "",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.encrypted": "0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osd_id": "1",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.type": "block",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.vdo": "0"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            },
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "type": "block",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "vg_name": "ceph_vg1"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:        }
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:    ],
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:    "2": [
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:        {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "devices": [
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "/dev/loop5"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            ],
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_name": "ceph_lv2",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_size": "21470642176",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "name": "ceph_lv2",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "tags": {
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.cluster_name": "ceph",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.crush_device_class": "",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.encrypted": "0",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osd_id": "2",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.type": "block",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:                "ceph.vdo": "0"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            },
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "type": "block",
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:            "vg_name": "ceph_vg2"
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:        }
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]:    ]
Nov 25 12:55:10 np0005535469 sleepy_wu[458174]: }
Nov 25 12:55:10 np0005535469 systemd[1]: libpod-e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e.scope: Deactivated successfully.
Nov 25 12:55:10 np0005535469 podman[458158]: 2025-11-25 17:55:10.522533632 +0000 UTC m=+0.911874944 container died e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 12:55:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263-merged.mount: Deactivated successfully.
Nov 25 12:55:10 np0005535469 podman[458158]: 2025-11-25 17:55:10.59656184 +0000 UTC m=+0.985903082 container remove e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:55:10 np0005535469 systemd[1]: libpod-conmon-e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e.scope: Deactivated successfully.
Nov 25 12:55:10 np0005535469 nova_compute[254092]: 2025-11-25 17:55:10.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:11 np0005535469 podman[458335]: 2025-11-25 17:55:11.272896303 +0000 UTC m=+0.050614845 container create 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:55:11 np0005535469 systemd[1]: Started libpod-conmon-15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570.scope.
Nov 25 12:55:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:55:11 np0005535469 podman[458335]: 2025-11-25 17:55:11.348397171 +0000 UTC m=+0.126115733 container init 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:55:11 np0005535469 podman[458335]: 2025-11-25 17:55:11.257194826 +0000 UTC m=+0.034913378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:55:11 np0005535469 podman[458335]: 2025-11-25 17:55:11.357173789 +0000 UTC m=+0.134892331 container start 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:55:11 np0005535469 podman[458335]: 2025-11-25 17:55:11.360067428 +0000 UTC m=+0.137785970 container attach 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 12:55:11 np0005535469 goofy_pasteur[458351]: 167 167
Nov 25 12:55:11 np0005535469 systemd[1]: libpod-15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570.scope: Deactivated successfully.
Nov 25 12:55:11 np0005535469 podman[458356]: 2025-11-25 17:55:11.401591755 +0000 UTC m=+0.024968739 container died 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:55:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b5b3ffd6b166281217861ef5db44936535f05f606da6e4644cc5e68580e56be8-merged.mount: Deactivated successfully.
Nov 25 12:55:11 np0005535469 podman[458356]: 2025-11-25 17:55:11.428469034 +0000 UTC m=+0.051845998 container remove 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 12:55:11 np0005535469 systemd[1]: libpod-conmon-15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570.scope: Deactivated successfully.
Nov 25 12:55:11 np0005535469 podman[458378]: 2025-11-25 17:55:11.592554896 +0000 UTC m=+0.041898798 container create 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:55:11 np0005535469 systemd[1]: Started libpod-conmon-538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107.scope.
Nov 25 12:55:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:55:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:11 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:55:11 np0005535469 podman[458378]: 2025-11-25 17:55:11.575932825 +0000 UTC m=+0.025276757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:55:11 np0005535469 podman[458378]: 2025-11-25 17:55:11.682672961 +0000 UTC m=+0.132016883 container init 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 12:55:11 np0005535469 podman[458378]: 2025-11-25 17:55:11.688203241 +0000 UTC m=+0.137547143 container start 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:55:11 np0005535469 podman[458378]: 2025-11-25 17:55:11.691567283 +0000 UTC m=+0.140911215 container attach 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:55:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3972: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:12 np0005535469 competent_banach[458395]: {
Nov 25 12:55:12 np0005535469 competent_banach[458395]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "osd_id": 1,
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "type": "bluestore"
Nov 25 12:55:12 np0005535469 competent_banach[458395]:    },
Nov 25 12:55:12 np0005535469 competent_banach[458395]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "osd_id": 2,
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "type": "bluestore"
Nov 25 12:55:12 np0005535469 competent_banach[458395]:    },
Nov 25 12:55:12 np0005535469 competent_banach[458395]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "osd_id": 0,
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:55:12 np0005535469 competent_banach[458395]:        "type": "bluestore"
Nov 25 12:55:12 np0005535469 competent_banach[458395]:    }
Nov 25 12:55:12 np0005535469 competent_banach[458395]: }
Nov 25 12:55:12 np0005535469 systemd[1]: libpod-538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107.scope: Deactivated successfully.
Nov 25 12:55:12 np0005535469 podman[458378]: 2025-11-25 17:55:12.614463665 +0000 UTC m=+1.063807557 container died 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:55:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694-merged.mount: Deactivated successfully.
Nov 25 12:55:12 np0005535469 podman[458378]: 2025-11-25 17:55:12.662385676 +0000 UTC m=+1.111729578 container remove 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:55:12 np0005535469 systemd[1]: libpod-conmon-538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107.scope: Deactivated successfully.
Nov 25 12:55:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:55:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:55:12 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:55:12 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:55:12 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fda29980-9a99-45d6-801f-3ea2b4160525 does not exist
Nov 25 12:55:12 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 41f1299e-0a84-4c29-a311-031cfce62a25 does not exist
Nov 25 12:55:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:55:13 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:55:13 np0005535469 nova_compute[254092]: 2025-11-25 17:55:13.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:13 np0005535469 nova_compute[254092]: 2025-11-25 17:55:13.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:55:13.701 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:55:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:55:13.702 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:55:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:55:13.702 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:55:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3973: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:14 np0005535469 nova_compute[254092]: 2025-11-25 17:55:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:15 np0005535469 nova_compute[254092]: 2025-11-25 17:55:15.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:15 np0005535469 nova_compute[254092]: 2025-11-25 17:55:15.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3974: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.188915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317188958, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1428, "num_deletes": 251, "total_data_size": 2237469, "memory_usage": 2275544, "flush_reason": "Manual Compaction"}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317204734, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 2193753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80630, "largest_seqno": 82057, "table_properties": {"data_size": 2187069, "index_size": 3818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13811, "raw_average_key_size": 19, "raw_value_size": 2173729, "raw_average_value_size": 3123, "num_data_blocks": 172, "num_entries": 696, "num_filter_entries": 696, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093170, "oldest_key_time": 1764093170, "file_creation_time": 1764093317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 15934 microseconds, and 6662 cpu microseconds.
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.204778) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 2193753 bytes OK
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.204867) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206134) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206148) EVENT_LOG_v1 {"time_micros": 1764093317206143, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2231182, prev total WAL file size 2231182, number of live WAL files 2.
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.207037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(2142KB)], [191(10MB)]
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317207150, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12713531, "oldest_snapshot_seqno": -1}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9636 keys, 10954011 bytes, temperature: kUnknown
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317283413, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 10954011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10894175, "index_size": 34635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24133, "raw_key_size": 254735, "raw_average_key_size": 26, "raw_value_size": 10726498, "raw_average_value_size": 1113, "num_data_blocks": 1329, "num_entries": 9636, "num_filter_entries": 9636, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.283872) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 10954011 bytes
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.285443) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.8) write-amplify(5.0) OK, records in: 10150, records dropped: 514 output_compression: NoCompression
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.285476) EVENT_LOG_v1 {"time_micros": 1764093317285461, "job": 120, "event": "compaction_finished", "compaction_time_micros": 76369, "compaction_time_cpu_micros": 52918, "output_level": 6, "num_output_files": 1, "total_output_size": 10954011, "num_input_records": 10150, "num_output_records": 9636, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317286562, "job": 120, "event": "table_file_deletion", "file_number": 193}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317291373, "job": 120, "event": "table_file_deletion", "file_number": 191}
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:55:17 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:55:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3975: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:18 np0005535469 nova_compute[254092]: 2025-11-25 17:55:18.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:19 np0005535469 nova_compute[254092]: 2025-11-25 17:55:19.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:19 np0005535469 nova_compute[254092]: 2025-11-25 17:55:19.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:55:19 np0005535469 nova_compute[254092]: 2025-11-25 17:55:19.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:55:19 np0005535469 nova_compute[254092]: 2025-11-25 17:55:19.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:55:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3976: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:20 np0005535469 nova_compute[254092]: 2025-11-25 17:55:20.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:20 np0005535469 nova_compute[254092]: 2025-11-25 17:55:20.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3977: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:22 np0005535469 nova_compute[254092]: 2025-11-25 17:55:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:22 np0005535469 nova_compute[254092]: 2025-11-25 17:55:22.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:55:22 np0005535469 nova_compute[254092]: 2025-11-25 17:55:22.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:55:22 np0005535469 nova_compute[254092]: 2025-11-25 17:55:22.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:55:22 np0005535469 nova_compute[254092]: 2025-11-25 17:55:22.539 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:55:22 np0005535469 nova_compute[254092]: 2025-11-25 17:55:22.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:55:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:55:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794403191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.049 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.319 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.322 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.322 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.427 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:55:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:55:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639957549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.886 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.893 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.909 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.911 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:55:23 np0005535469 nova_compute[254092]: 2025-11-25 17:55:23.911 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:55:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3978: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:25 np0005535469 nova_compute[254092]: 2025-11-25 17:55:25.911 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:25 np0005535469 nova_compute[254092]: 2025-11-25 17:55:25.912 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:25 np0005535469 nova_compute[254092]: 2025-11-25 17:55:25.912 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:55:25 np0005535469 nova_compute[254092]: 2025-11-25 17:55:25.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3979: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3980: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:28 np0005535469 nova_compute[254092]: 2025-11-25 17:55:28.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3981: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:30 np0005535469 nova_compute[254092]: 2025-11-25 17:55:30.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3982: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:33 np0005535469 nova_compute[254092]: 2025-11-25 17:55:33.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3983: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:34 np0005535469 nova_compute[254092]: 2025-11-25 17:55:34.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:55:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:35 np0005535469 nova_compute[254092]: 2025-11-25 17:55:35.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3984: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:36 np0005535469 podman[458535]: 2025-11-25 17:55:36.697140858 +0000 UTC m=+0.095739449 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:55:36 np0005535469 podman[458534]: 2025-11-25 17:55:36.704152579 +0000 UTC m=+0.106995294 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 12:55:36 np0005535469 podman[458536]: 2025-11-25 17:55:36.769344037 +0000 UTC m=+0.165437080 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:55:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3985: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:38 np0005535469 nova_compute[254092]: 2025-11-25 17:55:38.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3986: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:55:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:55:40
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:55:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:55:40 np0005535469 nova_compute[254092]: 2025-11-25 17:55:40.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3987: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:43 np0005535469 nova_compute[254092]: 2025-11-25 17:55:43.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3988: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:45 np0005535469 nova_compute[254092]: 2025-11-25 17:55:45.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3989: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3990: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:48 np0005535469 nova_compute[254092]: 2025-11-25 17:55:48.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3991: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:50 np0005535469 nova_compute[254092]: 2025-11-25 17:55:50.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3992: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:55:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:55:53 np0005535469 nova_compute[254092]: 2025-11-25 17:55:53.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3993: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217114339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:55:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:55:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217114339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:55:55 np0005535469 nova_compute[254092]: 2025-11-25 17:55:55.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:55:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3994: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3995: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:55:58 np0005535469 nova_compute[254092]: 2025-11-25 17:55:58.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3996: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:00 np0005535469 nova_compute[254092]: 2025-11-25 17:56:00.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3997: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 12:56:03 np0005535469 nova_compute[254092]: 2025-11-25 17:56:03.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:03 np0005535469 nova_compute[254092]: 2025-11-25 17:56:03.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3998: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 12:56:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:05 np0005535469 nova_compute[254092]: 2025-11-25 17:56:05.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3999: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 12:56:07 np0005535469 podman[458600]: 2025-11-25 17:56:07.683205598 +0000 UTC m=+0.077895604 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:56:07 np0005535469 podman[458599]: 2025-11-25 17:56:07.692568043 +0000 UTC m=+0.090680052 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 12:56:07 np0005535469 podman[458601]: 2025-11-25 17:56:07.715829784 +0000 UTC m=+0.111581079 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 12:56:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4000: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 12:56:08 np0005535469 nova_compute[254092]: 2025-11-25 17:56:08.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4001: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:56:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:56:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:10 np0005535469 nova_compute[254092]: 2025-11-25 17:56:10.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4002: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 12:56:13 np0005535469 nova_compute[254092]: 2025-11-25 17:56:13.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:56:13.702 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:56:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:56:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:56:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:56:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:56:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4003: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 30559ecc-4f10-49dc-ae64-fb2fb2bb6863 does not exist
Nov 25 12:56:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7896aede-c725-4cbd-b63d-00c1fa524874 does not exist
Nov 25 12:56:14 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a98d9ab2-e97e-43bc-9f7a-de7a91e687cd does not exist
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:56:14 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:56:14 np0005535469 nova_compute[254092]: 2025-11-25 17:56:14.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.818079668 +0000 UTC m=+0.050346027 container create bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:56:14 np0005535469 systemd[1]: Started libpod-conmon-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope.
Nov 25 12:56:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.794035746 +0000 UTC m=+0.026302195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.902277913 +0000 UTC m=+0.134544282 container init bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.913459466 +0000 UTC m=+0.145725845 container start bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.916841408 +0000 UTC m=+0.149107767 container attach bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 12:56:14 np0005535469 crazy_colden[459073]: 167 167
Nov 25 12:56:14 np0005535469 systemd[1]: libpod-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope: Deactivated successfully.
Nov 25 12:56:14 np0005535469 conmon[459073]: conmon bb8be36881b84396297a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope/container/memory.events
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.923477758 +0000 UTC m=+0.155744107 container died bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:56:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-471c2c0c53ac4c9df553ab1f62092575f8c967a5a0b89db00481b900d393e49f-merged.mount: Deactivated successfully.
Nov 25 12:56:14 np0005535469 podman[459056]: 2025-11-25 17:56:14.962305242 +0000 UTC m=+0.194571611 container remove bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 12:56:14 np0005535469 systemd[1]: libpod-conmon-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope: Deactivated successfully.
Nov 25 12:56:15 np0005535469 podman[459096]: 2025-11-25 17:56:15.147673142 +0000 UTC m=+0.051984302 container create fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:56:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:56:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:15 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:56:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:15 np0005535469 systemd[1]: Started libpod-conmon-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope.
Nov 25 12:56:15 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:56:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:15 np0005535469 podman[459096]: 2025-11-25 17:56:15.131333348 +0000 UTC m=+0.035644528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:56:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:15 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:15 np0005535469 podman[459096]: 2025-11-25 17:56:15.240535921 +0000 UTC m=+0.144847081 container init fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:56:15 np0005535469 podman[459096]: 2025-11-25 17:56:15.251808527 +0000 UTC m=+0.156119677 container start fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:56:15 np0005535469 podman[459096]: 2025-11-25 17:56:15.255483897 +0000 UTC m=+0.159795067 container attach fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:56:15 np0005535469 nova_compute[254092]: 2025-11-25 17:56:15.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4004: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 12:56:16 np0005535469 bold_beaver[459112]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:56:16 np0005535469 bold_beaver[459112]: --> relative data size: 1.0
Nov 25 12:56:16 np0005535469 bold_beaver[459112]: --> All data devices are unavailable
Nov 25 12:56:16 np0005535469 systemd[1]: libpod-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope: Deactivated successfully.
Nov 25 12:56:16 np0005535469 systemd[1]: libpod-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope: Consumed 1.065s CPU time.
Nov 25 12:56:16 np0005535469 podman[459096]: 2025-11-25 17:56:16.365692312 +0000 UTC m=+1.270003482 container died fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:56:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4-merged.mount: Deactivated successfully.
Nov 25 12:56:16 np0005535469 podman[459096]: 2025-11-25 17:56:16.415054871 +0000 UTC m=+1.319366011 container remove fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 12:56:16 np0005535469 systemd[1]: libpod-conmon-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope: Deactivated successfully.
Nov 25 12:56:16 np0005535469 nova_compute[254092]: 2025-11-25 17:56:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:16 np0005535469 nova_compute[254092]: 2025-11-25 17:56:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.112124705 +0000 UTC m=+0.056924935 container create 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:56:17 np0005535469 systemd[1]: Started libpod-conmon-2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62.scope.
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.080512908 +0000 UTC m=+0.025313208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:56:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.218489632 +0000 UTC m=+0.163289872 container init 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.231485035 +0000 UTC m=+0.176285225 container start 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.235181634 +0000 UTC m=+0.179981844 container attach 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 12:56:17 np0005535469 romantic_hawking[459311]: 167 167
Nov 25 12:56:17 np0005535469 systemd[1]: libpod-2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62.scope: Deactivated successfully.
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.24167889 +0000 UTC m=+0.186479081 container died 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 12:56:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-efdb87e44ddd92b026e5109c1cc7238479c4f1fe3ee3f0f342bfb8e1a980d8e9-merged.mount: Deactivated successfully.
Nov 25 12:56:17 np0005535469 podman[459294]: 2025-11-25 17:56:17.282341104 +0000 UTC m=+0.227141294 container remove 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:56:17 np0005535469 systemd[1]: libpod-conmon-2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62.scope: Deactivated successfully.
Nov 25 12:56:17 np0005535469 podman[459334]: 2025-11-25 17:56:17.534410904 +0000 UTC m=+0.077789462 container create b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 12:56:17 np0005535469 systemd[1]: Started libpod-conmon-b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb.scope.
Nov 25 12:56:17 np0005535469 podman[459334]: 2025-11-25 17:56:17.504623855 +0000 UTC m=+0.048002463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:56:17 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:56:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:17 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:17 np0005535469 podman[459334]: 2025-11-25 17:56:17.644572663 +0000 UTC m=+0.187951231 container init b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 12:56:17 np0005535469 podman[459334]: 2025-11-25 17:56:17.659090487 +0000 UTC m=+0.202469035 container start b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 12:56:17 np0005535469 podman[459334]: 2025-11-25 17:56:17.663334242 +0000 UTC m=+0.206712840 container attach b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:56:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4005: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Nov 25 12:56:18 np0005535469 competent_shamir[459350]: {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:    "0": [
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:        {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "devices": [
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "/dev/loop3"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            ],
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_name": "ceph_lv0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_size": "21470642176",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "name": "ceph_lv0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "tags": {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cluster_name": "ceph",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.crush_device_class": "",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.encrypted": "0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osd_id": "0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.type": "block",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.vdo": "0"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            },
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "type": "block",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "vg_name": "ceph_vg0"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:        }
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:    ],
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:    "1": [
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:        {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "devices": [
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "/dev/loop4"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            ],
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_name": "ceph_lv1",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_size": "21470642176",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "name": "ceph_lv1",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "tags": {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cluster_name": "ceph",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.crush_device_class": "",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.encrypted": "0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osd_id": "1",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.type": "block",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.vdo": "0"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            },
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "type": "block",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "vg_name": "ceph_vg1"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:        }
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:    ],
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:    "2": [
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:        {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "devices": [
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "/dev/loop5"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            ],
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_name": "ceph_lv2",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_size": "21470642176",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "name": "ceph_lv2",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "tags": {
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.cluster_name": "ceph",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.crush_device_class": "",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.encrypted": "0",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osd_id": "2",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.type": "block",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:                "ceph.vdo": "0"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            },
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "type": "block",
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:            "vg_name": "ceph_vg2"
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:        }
Nov 25 12:56:18 np0005535469 competent_shamir[459350]:    ]
Nov 25 12:56:18 np0005535469 competent_shamir[459350]: }
Nov 25 12:56:18 np0005535469 podman[459334]: 2025-11-25 17:56:18.497420315 +0000 UTC m=+1.040798833 container died b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:56:18 np0005535469 systemd[1]: libpod-b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb.scope: Deactivated successfully.
Nov 25 12:56:18 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0-merged.mount: Deactivated successfully.
Nov 25 12:56:18 np0005535469 podman[459334]: 2025-11-25 17:56:18.561896864 +0000 UTC m=+1.105275392 container remove b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 12:56:18 np0005535469 systemd[1]: libpod-conmon-b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb.scope: Deactivated successfully.
Nov 25 12:56:18 np0005535469 nova_compute[254092]: 2025-11-25 17:56:18.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.292018435 +0000 UTC m=+0.047527191 container create a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:56:19 np0005535469 systemd[1]: Started libpod-conmon-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope.
Nov 25 12:56:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.273852103 +0000 UTC m=+0.029360869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.371608705 +0000 UTC m=+0.127117541 container init a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.378852111 +0000 UTC m=+0.134360887 container start a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.382478959 +0000 UTC m=+0.137987715 container attach a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 12:56:19 np0005535469 angry_heyrovsky[459530]: 167 167
Nov 25 12:56:19 np0005535469 systemd[1]: libpod-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope: Deactivated successfully.
Nov 25 12:56:19 np0005535469 conmon[459530]: conmon a01c027e47b176dac0c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope/container/memory.events
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.386079038 +0000 UTC m=+0.141587794 container died a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 12:56:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1fb1e4e24e1a885b7d50ecc36d818b36b07a50d8c987ff180fa6655629009721-merged.mount: Deactivated successfully.
Nov 25 12:56:19 np0005535469 podman[459514]: 2025-11-25 17:56:19.428788546 +0000 UTC m=+0.184297302 container remove a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:56:19 np0005535469 systemd[1]: libpod-conmon-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope: Deactivated successfully.
Nov 25 12:56:19 np0005535469 podman[459554]: 2025-11-25 17:56:19.624459736 +0000 UTC m=+0.058685714 container create 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:56:19 np0005535469 systemd[1]: Started libpod-conmon-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope.
Nov 25 12:56:19 np0005535469 podman[459554]: 2025-11-25 17:56:19.607486215 +0000 UTC m=+0.041712213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:56:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:56:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:56:19 np0005535469 podman[459554]: 2025-11-25 17:56:19.730272127 +0000 UTC m=+0.164498105 container init 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 12:56:19 np0005535469 podman[459554]: 2025-11-25 17:56:19.742272763 +0000 UTC m=+0.176498771 container start 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:56:19 np0005535469 podman[459554]: 2025-11-25 17:56:19.746493477 +0000 UTC m=+0.180719475 container attach 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:56:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4006: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Nov 25 12:56:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]: {
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "osd_id": 1,
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "type": "bluestore"
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:    },
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "osd_id": 2,
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "type": "bluestore"
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:    },
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "osd_id": 0,
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:        "type": "bluestore"
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]:    }
Nov 25 12:56:20 np0005535469 heuristic_bohr[459571]: }
Nov 25 12:56:20 np0005535469 systemd[1]: libpod-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope: Deactivated successfully.
Nov 25 12:56:20 np0005535469 systemd[1]: libpod-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope: Consumed 1.053s CPU time.
Nov 25 12:56:20 np0005535469 podman[459604]: 2025-11-25 17:56:20.849611429 +0000 UTC m=+0.042140144 container died 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:56:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8-merged.mount: Deactivated successfully.
Nov 25 12:56:20 np0005535469 podman[459604]: 2025-11-25 17:56:20.930167715 +0000 UTC m=+0.122696340 container remove 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 12:56:20 np0005535469 systemd[1]: libpod-conmon-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope: Deactivated successfully.
Nov 25 12:56:21 np0005535469 nova_compute[254092]: 2025-11-25 17:56:21.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:56:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:56:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f34c8fe6-844d-42d0-9019-ef60f42c0d02 does not exist
Nov 25 12:56:21 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 16363266-8559-4e5b-82ff-1b3f33928d1a does not exist
Nov 25 12:56:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:21 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:56:21 np0005535469 nova_compute[254092]: 2025-11-25 17:56:21.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:21 np0005535469 nova_compute[254092]: 2025-11-25 17:56:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:21 np0005535469 nova_compute[254092]: 2025-11-25 17:56:21.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:56:21 np0005535469 nova_compute[254092]: 2025-11-25 17:56:21.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:56:21 np0005535469 nova_compute[254092]: 2025-11-25 17:56:21.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:56:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4007: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:56:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:56:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105391424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:56:22 np0005535469 nova_compute[254092]: 2025-11-25 17:56:22.951 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.197 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.200 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.201 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.584 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.585 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.718 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.872 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.873 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.887 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.917 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 12:56:23 np0005535469 nova_compute[254092]: 2025-11-25 17:56:23.999 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:56:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4008: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:56:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190681390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:56:24 np0005535469 nova_compute[254092]: 2025-11-25 17:56:24.442 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:56:24 np0005535469 nova_compute[254092]: 2025-11-25 17:56:24.450 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:56:24 np0005535469 nova_compute[254092]: 2025-11-25 17:56:24.486 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:56:24 np0005535469 nova_compute[254092]: 2025-11-25 17:56:24.489 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:56:24 np0005535469 nova_compute[254092]: 2025-11-25 17:56:24.489 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:56:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:26 np0005535469 nova_compute[254092]: 2025-11-25 17:56:26.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4009: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:26 np0005535469 nova_compute[254092]: 2025-11-25 17:56:26.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:26 np0005535469 nova_compute[254092]: 2025-11-25 17:56:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:26 np0005535469 nova_compute[254092]: 2025-11-25 17:56:26.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:56:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4010: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:28 np0005535469 nova_compute[254092]: 2025-11-25 17:56:28.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4011: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:31 np0005535469 nova_compute[254092]: 2025-11-25 17:56:31.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4012: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:33 np0005535469 nova_compute[254092]: 2025-11-25 17:56:33.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4013: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:35 np0005535469 nova_compute[254092]: 2025-11-25 17:56:35.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:36 np0005535469 nova_compute[254092]: 2025-11-25 17:56:36.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4014: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:36 np0005535469 nova_compute[254092]: 2025-11-25 17:56:36.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:56:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4015: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:38 np0005535469 nova_compute[254092]: 2025-11-25 17:56:38.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:38 np0005535469 podman[459724]: 2025-11-25 17:56:38.668859861 +0000 UTC m=+0.084375120 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 12:56:38 np0005535469 podman[459725]: 2025-11-25 17:56:38.677251119 +0000 UTC m=+0.078729698 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 25 12:56:38 np0005535469 podman[459726]: 2025-11-25 17:56:38.736495856 +0000 UTC m=+0.132137576 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4016: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:56:40
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', 'images']
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:56:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:56:41 np0005535469 nova_compute[254092]: 2025-11-25 17:56:41.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4017: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:43 np0005535469 nova_compute[254092]: 2025-11-25 17:56:43.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4018: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:46 np0005535469 nova_compute[254092]: 2025-11-25 17:56:46.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4019: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4020: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:48 np0005535469 nova_compute[254092]: 2025-11-25 17:56:48.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4021: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:51 np0005535469 nova_compute[254092]: 2025-11-25 17:56:51.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4022: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:56:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:56:53 np0005535469 nova_compute[254092]: 2025-11-25 17:56:53.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4023: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:56:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533605188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:56:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:56:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533605188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:56:56 np0005535469 nova_compute[254092]: 2025-11-25 17:56:56.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:56:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4024: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4025: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:56:58 np0005535469 nova_compute[254092]: 2025-11-25 17:56:58.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4026: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:01 np0005535469 nova_compute[254092]: 2025-11-25 17:57:01.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4027: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:03 np0005535469 ceph-mds[102090]: mds.beacon.cephfs.compute-0.aidjys missed beacon ack from the monitors
Nov 25 12:57:03 np0005535469 nova_compute[254092]: 2025-11-25 17:57:03.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4028: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:06 np0005535469 nova_compute[254092]: 2025-11-25 17:57:06.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4029: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:07 np0005535469 ceph-mds[102090]: mds.beacon.cephfs.compute-0.aidjys missed beacon ack from the monitors
Nov 25 12:57:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4030: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:08 np0005535469 nova_compute[254092]: 2025-11-25 17:57:08.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:09 np0005535469 podman[459790]: 2025-11-25 17:57:09.689139241 +0000 UTC m=+0.095893432 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 12:57:09 np0005535469 podman[459791]: 2025-11-25 17:57:09.704379895 +0000 UTC m=+0.104339503 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:57:09 np0005535469 podman[459792]: 2025-11-25 17:57:09.741875532 +0000 UTC m=+0.138788786 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:57:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4031: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 15.6366 seconds
Nov 25 12:57:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:11 np0005535469 nova_compute[254092]: 2025-11-25 17:57:11.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4032: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:13 np0005535469 nova_compute[254092]: 2025-11-25 17:57:13.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:57:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:57:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:57:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:57:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:57:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:57:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4033: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:15 np0005535469 nova_compute[254092]: 2025-11-25 17:57:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4034: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:16 np0005535469 nova_compute[254092]: 2025-11-25 17:57:16.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:16 np0005535469 nova_compute[254092]: 2025-11-25 17:57:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4035: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:18 np0005535469 nova_compute[254092]: 2025-11-25 17:57:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:18 np0005535469 nova_compute[254092]: 2025-11-25 17:57:18.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4036: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:21 np0005535469 nova_compute[254092]: 2025-11-25 17:57:21.211 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:21 np0005535469 nova_compute[254092]: 2025-11-25 17:57:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:21 np0005535469 nova_compute[254092]: 2025-11-25 17:57:21.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:57:21 np0005535469 nova_compute[254092]: 2025-11-25 17:57:21.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:57:21 np0005535469 nova_compute[254092]: 2025-11-25 17:57:21.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:57:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:57:21 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:57:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4037: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:57:22 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:57:22 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568836969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:57:22 np0005535469 nova_compute[254092]: 2025-11-25 17:57:22.959 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:57:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:23 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.141 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.142 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3620MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.142 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.143 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.216 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.217 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.237 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:57:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:57:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219603912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.700 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.709 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.729 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.731 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:57:23 np0005535469 nova_compute[254092]: 2025-11-25 17:57:23.731 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:57:23 np0005535469 podman[460287]: 2025-11-25 17:57:23.736345034 +0000 UTC m=+0.035112364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:23 np0005535469 podman[460287]: 2025-11-25 17:57:23.932403764 +0000 UTC m=+0.231170994 container create ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:24 np0005535469 systemd[1]: Started libpod-conmon-ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea.scope.
Nov 25 12:57:24 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:24 np0005535469 podman[460287]: 2025-11-25 17:57:24.137416106 +0000 UTC m=+0.436183356 container init ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:24 np0005535469 podman[460287]: 2025-11-25 17:57:24.153116612 +0000 UTC m=+0.451883842 container start ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 12:57:24 np0005535469 silly_chaum[460304]: 167 167
Nov 25 12:57:24 np0005535469 systemd[1]: libpod-ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea.scope: Deactivated successfully.
Nov 25 12:57:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4038: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:24 np0005535469 podman[460287]: 2025-11-25 17:57:24.224197571 +0000 UTC m=+0.522964841 container attach ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:57:24 np0005535469 podman[460287]: 2025-11-25 17:57:24.224717125 +0000 UTC m=+0.523484365 container died ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:57:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e3ff0d2bb1c1cf4b7239d0c7cce2017c24cd411e462e4048b1b56ade89265efa-merged.mount: Deactivated successfully.
Nov 25 12:57:24 np0005535469 podman[460287]: 2025-11-25 17:57:24.972903137 +0000 UTC m=+1.271670387 container remove ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:57:25 np0005535469 systemd[1]: libpod-conmon-ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea.scope: Deactivated successfully.
Nov 25 12:57:25 np0005535469 podman[460330]: 2025-11-25 17:57:25.209013493 +0000 UTC m=+0.037917990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:25 np0005535469 podman[460330]: 2025-11-25 17:57:25.342947627 +0000 UTC m=+0.171852064 container create bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:57:25 np0005535469 systemd[1]: Started libpod-conmon-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope.
Nov 25 12:57:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:25 np0005535469 podman[460330]: 2025-11-25 17:57:25.532373628 +0000 UTC m=+0.361278055 container init bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:57:25 np0005535469 podman[460330]: 2025-11-25 17:57:25.545212705 +0000 UTC m=+0.374117142 container start bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:57:25 np0005535469 podman[460330]: 2025-11-25 17:57:25.571584382 +0000 UTC m=+0.400488829 container attach bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 25 12:57:25 np0005535469 nova_compute[254092]: 2025-11-25 17:57:25.731 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:57:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4039: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:26 np0005535469 nova_compute[254092]: 2025-11-25 17:57:26.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]: [
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:    {
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "available": false,
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "ceph_device": false,
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "lsm_data": {},
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "lvs": [],
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "path": "/dev/sr0",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "rejected_reasons": [
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "Has a FileSystem",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "Insufficient space (<5GB)"
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        ],
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        "sys_api": {
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "actuators": null,
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "device_nodes": "sr0",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "devname": "sr0",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "human_readable_size": "482.00 KB",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "id_bus": "ata",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "model": "QEMU DVD-ROM",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "nr_requests": "2",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "parent": "/dev/sr0",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "partitions": {},
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "path": "/dev/sr0",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "removable": "1",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "rev": "2.5+",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "ro": "0",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "rotational": "1",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "sas_address": "",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "sas_device_handle": "",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "scheduler_mode": "mq-deadline",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "sectors": 0,
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "sectorsize": "2048",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "size": 493568.0,
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "support_discard": "2048",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "type": "disk",
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:            "vendor": "QEMU"
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:        }
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]:    }
Nov 25 12:57:27 np0005535469 zealous_haibt[460347]: ]
Nov 25 12:57:27 np0005535469 systemd[1]: libpod-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope: Deactivated successfully.
Nov 25 12:57:27 np0005535469 systemd[1]: libpod-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope: Consumed 1.444s CPU time.
Nov 25 12:57:27 np0005535469 podman[462175]: 2025-11-25 17:57:27.332559273 +0000 UTC m=+0.023138548 container died bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3-merged.mount: Deactivated successfully.
Nov 25 12:57:27 np0005535469 podman[462175]: 2025-11-25 17:57:27.757970617 +0000 UTC m=+0.448549872 container remove bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 12:57:27 np0005535469 systemd[1]: libpod-conmon-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope: Deactivated successfully.
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c29fe0a0-e8fa-492a-8b04-d8764ccbc928 does not exist
Nov 25 12:57:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 234d231f-e9ea-4d4d-92a8-1ae69a2a8f0e does not exist
Nov 25 12:57:27 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 70293527-044f-460e-bb46-b1d1edb06486 does not exist
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:57:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:57:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4040: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.485396335 +0000 UTC m=+0.066284559 container create 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 12:57:28 np0005535469 nova_compute[254092]: 2025-11-25 17:57:28.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:57:28 np0005535469 nova_compute[254092]: 2025-11-25 17:57:28.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.439782998 +0000 UTC m=+0.020671232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:28 np0005535469 systemd[1]: Started libpod-conmon-29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54.scope.
Nov 25 12:57:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.615086044 +0000 UTC m=+0.195974288 container init 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.622712152 +0000 UTC m=+0.203600376 container start 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:57:28 np0005535469 trusting_bell[462348]: 167 167
Nov 25 12:57:28 np0005535469 systemd[1]: libpod-29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54.scope: Deactivated successfully.
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.652088168 +0000 UTC m=+0.232976412 container attach 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.652407727 +0000 UTC m=+0.233295951 container died 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:57:28 np0005535469 nova_compute[254092]: 2025-11-25 17:57:28.675 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1e29febedfe4ee59dbaf6d75cfe8515697486b4d21ab78b2fee77323b36d7d5e-merged.mount: Deactivated successfully.
Nov 25 12:57:28 np0005535469 podman[462332]: 2025-11-25 17:57:28.804728591 +0000 UTC m=+0.385616815 container remove 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:57:28 np0005535469 systemd[1]: libpod-conmon-29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54.scope: Deactivated successfully.
Nov 25 12:57:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:57:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:57:29 np0005535469 podman[462373]: 2025-11-25 17:57:29.002438795 +0000 UTC m=+0.065575810 container create 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:57:29 np0005535469 podman[462373]: 2025-11-25 17:57:28.965146983 +0000 UTC m=+0.028284018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:29 np0005535469 systemd[1]: Started libpod-conmon-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope.
Nov 25 12:57:29 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 12:57:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:29 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
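The kernel notices above flag each xfs bind mount with "supports timestamps until 2038 (0x7fffffff)": the filesystem stores timestamps as a signed 32-bit `time_t`, whose maximum value is the hex constant in the message. A quick sanity check of what that limit means in calendar terms (standard library only, nothing cephadm-specific assumed):

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit value, i.e. the last second
# representable by a 32-bit time_t before it overflows (the "year 2038 problem").
limit = 0x7FFFFFFF
last_second = datetime.fromtimestamp(limit, tz=timezone.utc)

print(limit)                       # 2147483647
print(last_second.isoformat())     # 2038-01-19T03:14:07+00:00
```

So the warning is informational: these mounts are fine until 2038-01-19T03:14:07Z, after which an xfs filesystem made without `bigtime` support cannot represent timestamps.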
Nov 25 12:57:29 np0005535469 podman[462373]: 2025-11-25 17:57:29.110311962 +0000 UTC m=+0.173448977 container init 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 12:57:29 np0005535469 podman[462373]: 2025-11-25 17:57:29.118502415 +0000 UTC m=+0.181639430 container start 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:57:29 np0005535469 podman[462373]: 2025-11-25 17:57:29.132120734 +0000 UTC m=+0.195257749 container attach 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:57:30 np0005535469 serene_mahavira[462390]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:57:30 np0005535469 serene_mahavira[462390]: --> relative data size: 1.0
Nov 25 12:57:30 np0005535469 serene_mahavira[462390]: --> All data devices are unavailable
Nov 25 12:57:30 np0005535469 systemd[1]: libpod-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope: Deactivated successfully.
Nov 25 12:57:30 np0005535469 conmon[462390]: conmon 7f9b3fafa5a190b709e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope/container/memory.events
Nov 25 12:57:30 np0005535469 podman[462373]: 2025-11-25 17:57:30.139305683 +0000 UTC m=+1.202442698 container died 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba-merged.mount: Deactivated successfully.
Nov 25 12:57:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4041: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:30 np0005535469 podman[462373]: 2025-11-25 17:57:30.199366443 +0000 UTC m=+1.262503468 container remove 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:57:30 np0005535469 systemd[1]: libpod-conmon-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope: Deactivated successfully.
Nov 25 12:57:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.853734738 +0000 UTC m=+0.040545110 container create c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 12:57:30 np0005535469 systemd[1]: Started libpod-conmon-c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618.scope.
Nov 25 12:57:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.836283405 +0000 UTC m=+0.023093807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.943035561 +0000 UTC m=+0.129845953 container init c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.951995314 +0000 UTC m=+0.138805686 container start c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.955561402 +0000 UTC m=+0.142371794 container attach c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 12:57:30 np0005535469 flamboyant_hypatia[462590]: 167 167
Nov 25 12:57:30 np0005535469 systemd[1]: libpod-c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618.scope: Deactivated successfully.
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.959219961 +0000 UTC m=+0.146030333 container died c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:57:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-54d9cd6f09fde9f6d6404aaaf0d46e7b7c816ec030db12bdd02e49c442e89e96-merged.mount: Deactivated successfully.
Nov 25 12:57:30 np0005535469 podman[462573]: 2025-11-25 17:57:30.993516981 +0000 UTC m=+0.180327353 container remove c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:57:31 np0005535469 systemd[1]: libpod-conmon-c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618.scope: Deactivated successfully.
Nov 25 12:57:31 np0005535469 podman[462612]: 2025-11-25 17:57:31.151905609 +0000 UTC m=+0.043530192 container create 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 12:57:31 np0005535469 systemd[1]: Started libpod-conmon-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope.
Nov 25 12:57:31 np0005535469 nova_compute[254092]: 2025-11-25 17:57:31.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:31 np0005535469 podman[462612]: 2025-11-25 17:57:31.134482606 +0000 UTC m=+0.026107209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:31 np0005535469 podman[462612]: 2025-11-25 17:57:31.241003107 +0000 UTC m=+0.132627710 container init 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:31 np0005535469 podman[462612]: 2025-11-25 17:57:31.249679672 +0000 UTC m=+0.141304255 container start 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:57:31 np0005535469 podman[462612]: 2025-11-25 17:57:31.253480265 +0000 UTC m=+0.145104868 container attach 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]: {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:    "0": [
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:        {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "devices": [
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "/dev/loop3"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            ],
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_name": "ceph_lv0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_size": "21470642176",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "name": "ceph_lv0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "tags": {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cluster_name": "ceph",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.crush_device_class": "",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.encrypted": "0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osd_id": "0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.type": "block",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.vdo": "0"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            },
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "type": "block",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "vg_name": "ceph_vg0"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:        }
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:    ],
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:    "1": [
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:        {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "devices": [
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "/dev/loop4"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            ],
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_name": "ceph_lv1",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_size": "21470642176",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "name": "ceph_lv1",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "tags": {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cluster_name": "ceph",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.crush_device_class": "",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.encrypted": "0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osd_id": "1",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.type": "block",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.vdo": "0"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            },
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "type": "block",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "vg_name": "ceph_vg1"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:        }
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:    ],
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:    "2": [
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:        {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "devices": [
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "/dev/loop5"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            ],
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_name": "ceph_lv2",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_size": "21470642176",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "name": "ceph_lv2",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "tags": {
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.cluster_name": "ceph",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.crush_device_class": "",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.encrypted": "0",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osd_id": "2",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.type": "block",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:                "ceph.vdo": "0"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            },
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "type": "block",
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:            "vg_name": "ceph_vg2"
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:        }
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]:    ]
Nov 25 12:57:32 np0005535469 stupefied_saha[462628]: }
Nov 25 12:57:32 np0005535469 systemd[1]: libpod-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope: Deactivated successfully.
Nov 25 12:57:32 np0005535469 conmon[462628]: conmon 0fea1f15c1e2d4416976 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope/container/memory.events
Nov 25 12:57:32 np0005535469 podman[462612]: 2025-11-25 17:57:32.064587685 +0000 UTC m=+0.956212268 container died 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:57:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3-merged.mount: Deactivated successfully.
Nov 25 12:57:32 np0005535469 podman[462612]: 2025-11-25 17:57:32.121047976 +0000 UTC m=+1.012672579 container remove 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:57:32 np0005535469 systemd[1]: libpod-conmon-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope: Deactivated successfully.
Nov 25 12:57:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4042: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.717007496 +0000 UTC m=+0.042464273 container create e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 12:57:32 np0005535469 systemd[1]: Started libpod-conmon-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope.
Nov 25 12:57:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.696563522 +0000 UTC m=+0.022020319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.795291661 +0000 UTC m=+0.120748458 container init e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.801770207 +0000 UTC m=+0.127226984 container start e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.804290626 +0000 UTC m=+0.129747583 container attach e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 12:57:32 np0005535469 musing_swanson[462804]: 167 167
Nov 25 12:57:32 np0005535469 systemd[1]: libpod-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope: Deactivated successfully.
Nov 25 12:57:32 np0005535469 conmon[462804]: conmon e88c24c7597abf36ceca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope/container/memory.events
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.808478049 +0000 UTC m=+0.133934826 container died e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 12:57:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0bb7ecc7213a6441faf053cbf5e873df1fe3e1e041cc05776b112a1fcf8f933b-merged.mount: Deactivated successfully.
Nov 25 12:57:32 np0005535469 podman[462788]: 2025-11-25 17:57:32.84094351 +0000 UTC m=+0.166400287 container remove e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:32 np0005535469 systemd[1]: libpod-conmon-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope: Deactivated successfully.
Nov 25 12:57:32 np0005535469 podman[462827]: 2025-11-25 17:57:32.98541531 +0000 UTC m=+0.040568111 container create 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 12:57:33 np0005535469 systemd[1]: Started libpod-conmon-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope.
Nov 25 12:57:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:57:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:57:33 np0005535469 podman[462827]: 2025-11-25 17:57:32.966791755 +0000 UTC m=+0.021944576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:57:33 np0005535469 podman[462827]: 2025-11-25 17:57:33.069747298 +0000 UTC m=+0.124900109 container init 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 12:57:33 np0005535469 podman[462827]: 2025-11-25 17:57:33.07498373 +0000 UTC m=+0.130136531 container start 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 12:57:33 np0005535469 podman[462827]: 2025-11-25 17:57:33.077806837 +0000 UTC m=+0.132959668 container attach 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:57:33 np0005535469 nova_compute[254092]: 2025-11-25 17:57:33.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:34 np0005535469 competent_lewin[462844]: {
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "osd_id": 1,
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "type": "bluestore"
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:    },
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "osd_id": 2,
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "type": "bluestore"
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:    },
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "osd_id": 0,
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:        "type": "bluestore"
Nov 25 12:57:34 np0005535469 competent_lewin[462844]:    }
Nov 25 12:57:34 np0005535469 competent_lewin[462844]: }
Nov 25 12:57:34 np0005535469 systemd[1]: libpod-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope: Deactivated successfully.
Nov 25 12:57:34 np0005535469 systemd[1]: libpod-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope: Consumed 1.012s CPU time.
Nov 25 12:57:34 np0005535469 podman[462827]: 2025-11-25 17:57:34.079362784 +0000 UTC m=+1.134515585 container died 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:57:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a-merged.mount: Deactivated successfully.
Nov 25 12:57:34 np0005535469 podman[462827]: 2025-11-25 17:57:34.138114628 +0000 UTC m=+1.193267439 container remove 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:57:34 np0005535469 systemd[1]: libpod-conmon-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope: Deactivated successfully.
Nov 25 12:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:57:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4043: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:57:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 721df93d-5682-4df7-a7a8-63bfda36596f does not exist
Nov 25 12:57:34 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 12e03706-c366-4f70-bd65-b391e5762f92 does not exist
Nov 25 12:57:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:57:35 np0005535469 nova_compute[254092]: 2025-11-25 17:57:35.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:57:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4044: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:36 np0005535469 nova_compute[254092]: 2025-11-25 17:57:36.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4045: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:38 np0005535469 nova_compute[254092]: 2025-11-25 17:57:38.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4046: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:57:40
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:57:40 np0005535469 podman[462940]: 2025-11-25 17:57:40.641779591 +0000 UTC m=+0.061540451 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 25 12:57:40 np0005535469 podman[462941]: 2025-11-25 17:57:40.663775808 +0000 UTC m=+0.083480087 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 12:57:40 np0005535469 podman[462942]: 2025-11-25 17:57:40.675834295 +0000 UTC m=+0.089948412 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:57:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:57:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:57:41 np0005535469 nova_compute[254092]: 2025-11-25 17:57:41.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4047: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:43 np0005535469 nova_compute[254092]: 2025-11-25 17:57:43.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4048: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4049: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:46 np0005535469 nova_compute[254092]: 2025-11-25 17:57:46.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4050: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:48 np0005535469 nova_compute[254092]: 2025-11-25 17:57:48.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4051: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:51 np0005535469 nova_compute[254092]: 2025-11-25 17:57:51.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4052: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:57:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:57:53 np0005535469 nova_compute[254092]: 2025-11-25 17:57:53.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4053: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:57:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559460647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:57:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559460647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:57:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:57:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4054: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:56 np0005535469 nova_compute[254092]: 2025-11-25 17:57:56.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:57:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4055: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:57:58 np0005535469 nova_compute[254092]: 2025-11-25 17:57:58.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4056: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:01 np0005535469 nova_compute[254092]: 2025-11-25 17:58:01.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4057: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:03 np0005535469 nova_compute[254092]: 2025-11-25 17:58:03.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4058: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4059: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:06 np0005535469 nova_compute[254092]: 2025-11-25 17:58:06.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4060: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:08 np0005535469 nova_compute[254092]: 2025-11-25 17:58:08.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:58:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4061: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:11 np0005535469 nova_compute[254092]: 2025-11-25 17:58:11.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:11 np0005535469 podman[463004]: 2025-11-25 17:58:11.638092826 +0000 UTC m=+0.059789613 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 25 12:58:11 np0005535469 podman[463005]: 2025-11-25 17:58:11.654291736 +0000 UTC m=+0.064718097 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 12:58:11 np0005535469 podman[463006]: 2025-11-25 17:58:11.667693869 +0000 UTC m=+0.082574601 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 12:58:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4062: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:58:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:58:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:58:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:58:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:58:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:58:13 np0005535469 nova_compute[254092]: 2025-11-25 17:58:13.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4063: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:15 np0005535469 nova_compute[254092]: 2025-11-25 17:58:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4064: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:16 np0005535469 nova_compute[254092]: 2025-11-25 17:58:16.273 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:16 np0005535469 nova_compute[254092]: 2025-11-25 17:58:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4065: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:18 np0005535469 nova_compute[254092]: 2025-11-25 17:58:18.727 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4066: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:20 np0005535469 nova_compute[254092]: 2025-11-25 17:58:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:21 np0005535469 nova_compute[254092]: 2025-11-25 17:58:21.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4067: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:22 np0005535469 nova_compute[254092]: 2025-11-25 17:58:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:22 np0005535469 nova_compute[254092]: 2025-11-25 17:58:22.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:58:22 np0005535469 nova_compute[254092]: 2025-11-25 17:58:22.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:58:22 np0005535469 nova_compute[254092]: 2025-11-25 17:58:22.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.551 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.551 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.552 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.552 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:23 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:58:23 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3518698529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:58:23 np0005535469 nova_compute[254092]: 2025-11-25 17:58:23.985 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.200 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:58:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4068: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.286 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.287 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.315 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:58:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:58:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719363109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.750 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.759 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.789 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.792 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 12:58:24 np0005535469 nova_compute[254092]: 2025-11-25 17:58:24.792 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:58:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4069: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:26 np0005535469 nova_compute[254092]: 2025-11-25 17:58:26.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:27 np0005535469 nova_compute[254092]: 2025-11-25 17:58:27.794 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4070: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:28 np0005535469 nova_compute[254092]: 2025-11-25 17:58:28.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:29 np0005535469 nova_compute[254092]: 2025-11-25 17:58:29.550 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:29 np0005535469 nova_compute[254092]: 2025-11-25 17:58:29.551 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 12:58:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4071: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:31 np0005535469 nova_compute[254092]: 2025-11-25 17:58:31.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4072: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:33 np0005535469 nova_compute[254092]: 2025-11-25 17:58:33.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4073: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:58:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2b4cff6d-4b63-436a-a749-0db7e53274f8 does not exist
Nov 25 12:58:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 899231f5-8cba-44ee-b567-db6d231c0ea3 does not exist
Nov 25 12:58:35 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 915e68b5-eeac-4357-981f-1ec968792294 does not exist
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.78347181 +0000 UTC m=+0.049685639 container create c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:58:35 np0005535469 systemd[1]: Started libpod-conmon-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope.
Nov 25 12:58:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.757876156 +0000 UTC m=+0.024090075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:58:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.888588663 +0000 UTC m=+0.154802552 container init c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.897498995 +0000 UTC m=+0.163712814 container start c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.900422824 +0000 UTC m=+0.166636743 container attach c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 12:58:35 np0005535469 stoic_robinson[463401]: 167 167
Nov 25 12:58:35 np0005535469 systemd[1]: libpod-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope: Deactivated successfully.
Nov 25 12:58:35 np0005535469 conmon[463401]: conmon c7e3f7b3c05629e64b7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope/container/memory.events
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.905243894 +0000 UTC m=+0.171457763 container died c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 12:58:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2001923bd13ee96e4aa38b8b68c6465852a6b7f31ce659e2d548e945c23e1791-merged.mount: Deactivated successfully.
Nov 25 12:58:35 np0005535469 podman[463384]: 2025-11-25 17:58:35.955772776 +0000 UTC m=+0.221986605 container remove c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 12:58:35 np0005535469 systemd[1]: libpod-conmon-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope: Deactivated successfully.
Nov 25 12:58:36 np0005535469 podman[463425]: 2025-11-25 17:58:36.171948602 +0000 UTC m=+0.060795091 container create 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:58:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4074: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:36 np0005535469 systemd[1]: Started libpod-conmon-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope.
Nov 25 12:58:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:58:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:36 np0005535469 podman[463425]: 2025-11-25 17:58:36.154350553 +0000 UTC m=+0.043197052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:58:36 np0005535469 podman[463425]: 2025-11-25 17:58:36.260949336 +0000 UTC m=+0.149795825 container init 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:58:36 np0005535469 podman[463425]: 2025-11-25 17:58:36.268307786 +0000 UTC m=+0.157154275 container start 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 12:58:36 np0005535469 podman[463425]: 2025-11-25 17:58:36.271114312 +0000 UTC m=+0.159960801 container attach 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:58:36 np0005535469 nova_compute[254092]: 2025-11-25 17:58:36.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:36 np0005535469 nova_compute[254092]: 2025-11-25 17:58:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:37 np0005535469 friendly_swirles[463442]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:58:37 np0005535469 friendly_swirles[463442]: --> relative data size: 1.0
Nov 25 12:58:37 np0005535469 friendly_swirles[463442]: --> All data devices are unavailable
Nov 25 12:58:37 np0005535469 systemd[1]: libpod-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope: Deactivated successfully.
Nov 25 12:58:37 np0005535469 systemd[1]: libpod-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope: Consumed 1.104s CPU time.
Nov 25 12:58:37 np0005535469 podman[463425]: 2025-11-25 17:58:37.423998435 +0000 UTC m=+1.312844964 container died 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 12:58:37 np0005535469 systemd[1]: var-lib-containers-storage-overlay-081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25-merged.mount: Deactivated successfully.
Nov 25 12:58:37 np0005535469 podman[463425]: 2025-11-25 17:58:37.50379076 +0000 UTC m=+1.392637259 container remove 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:58:37 np0005535469 systemd[1]: libpod-conmon-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope: Deactivated successfully.
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.196018102 +0000 UTC m=+0.056261188 container create 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 25 12:58:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4075: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:38 np0005535469 systemd[1]: Started libpod-conmon-4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3.scope.
Nov 25 12:58:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.168186036 +0000 UTC m=+0.028429162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.273468044 +0000 UTC m=+0.133711180 container init 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.282801276 +0000 UTC m=+0.143044362 container start 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.285833889 +0000 UTC m=+0.146076995 container attach 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 12:58:38 np0005535469 serene_neumann[463638]: 167 167
Nov 25 12:58:38 np0005535469 systemd[1]: libpod-4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3.scope: Deactivated successfully.
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.288854011 +0000 UTC m=+0.149097117 container died 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 12:58:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4c54f84bb7002f4e6014f04760bfa47d4c3b26c87c4be5b66c28b2681b84655e-merged.mount: Deactivated successfully.
Nov 25 12:58:38 np0005535469 podman[463622]: 2025-11-25 17:58:38.326063501 +0000 UTC m=+0.186306587 container remove 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:58:38 np0005535469 systemd[1]: libpod-conmon-4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3.scope: Deactivated successfully.
Nov 25 12:58:38 np0005535469 podman[463662]: 2025-11-25 17:58:38.516307403 +0000 UTC m=+0.040157521 container create c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 12:58:38 np0005535469 systemd[1]: Started libpod-conmon-c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff.scope.
Nov 25 12:58:38 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:58:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:38 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:38 np0005535469 podman[463662]: 2025-11-25 17:58:38.499514377 +0000 UTC m=+0.023364505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:58:38 np0005535469 podman[463662]: 2025-11-25 17:58:38.594983718 +0000 UTC m=+0.118833856 container init c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 12:58:38 np0005535469 podman[463662]: 2025-11-25 17:58:38.602324136 +0000 UTC m=+0.126174264 container start c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 12:58:38 np0005535469 podman[463662]: 2025-11-25 17:58:38.605352789 +0000 UTC m=+0.129202907 container attach c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:58:38 np0005535469 nova_compute[254092]: 2025-11-25 17:58:38.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]: {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:    "0": [
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:        {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "devices": [
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "/dev/loop3"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            ],
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_name": "ceph_lv0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_size": "21470642176",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "name": "ceph_lv0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "tags": {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cluster_name": "ceph",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.crush_device_class": "",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.encrypted": "0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osd_id": "0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.type": "block",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.vdo": "0"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            },
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "type": "block",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "vg_name": "ceph_vg0"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:        }
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:    ],
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:    "1": [
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:        {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "devices": [
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "/dev/loop4"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            ],
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_name": "ceph_lv1",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_size": "21470642176",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "name": "ceph_lv1",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "tags": {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cluster_name": "ceph",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.crush_device_class": "",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.encrypted": "0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osd_id": "1",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.type": "block",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.vdo": "0"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            },
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "type": "block",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "vg_name": "ceph_vg1"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:        }
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:    ],
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:    "2": [
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:        {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "devices": [
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "/dev/loop5"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            ],
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_name": "ceph_lv2",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_size": "21470642176",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "name": "ceph_lv2",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "tags": {
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.cluster_name": "ceph",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.crush_device_class": "",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.encrypted": "0",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osd_id": "2",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.type": "block",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:                "ceph.vdo": "0"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            },
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "type": "block",
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:            "vg_name": "ceph_vg2"
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:        }
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]:    ]
Nov 25 12:58:39 np0005535469 festive_hofstadter[463678]: }
Nov 25 12:58:39 np0005535469 systemd[1]: libpod-c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff.scope: Deactivated successfully.
Nov 25 12:58:39 np0005535469 podman[463662]: 2025-11-25 17:58:39.382771764 +0000 UTC m=+0.906621892 container died c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 25 12:58:39 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7-merged.mount: Deactivated successfully.
Nov 25 12:58:39 np0005535469 podman[463662]: 2025-11-25 17:58:39.449017241 +0000 UTC m=+0.972867389 container remove c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:58:39 np0005535469 systemd[1]: libpod-conmon-c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff.scope: Deactivated successfully.
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.097488697 +0000 UTC m=+0.057590364 container create 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:58:40 np0005535469 systemd[1]: Started libpod-conmon-52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf.scope.
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.073889487 +0000 UTC m=+0.033991244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:58:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.208628273 +0000 UTC m=+0.168730030 container init 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4076: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.217517954 +0000 UTC m=+0.177619621 container start 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.220915946 +0000 UTC m=+0.181017703 container attach 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 12:58:40 np0005535469 thirsty_meitner[463855]: 167 167
Nov 25 12:58:40 np0005535469 systemd[1]: libpod-52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf.scope: Deactivated successfully.
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.224512683 +0000 UTC m=+0.184614350 container died 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 12:58:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f0abbd84eb03d0666ea3a870cbb3e9638c51d6a62cc515ba92714814080df2ea-merged.mount: Deactivated successfully.
Nov 25 12:58:40 np0005535469 podman[463839]: 2025-11-25 17:58:40.262145465 +0000 UTC m=+0.222247132 container remove 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:58:40 np0005535469 systemd[1]: libpod-conmon-52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf.scope: Deactivated successfully.
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:58:40
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:58:40 np0005535469 podman[463880]: 2025-11-25 17:58:40.482872445 +0000 UTC m=+0.084664300 container create cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:58:40 np0005535469 nova_compute[254092]: 2025-11-25 17:58:40.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:58:40 np0005535469 podman[463880]: 2025-11-25 17:58:40.425469476 +0000 UTC m=+0.027261411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:58:40 np0005535469 systemd[1]: Started libpod-conmon-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope.
Nov 25 12:58:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:58:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:40 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:58:40 np0005535469 podman[463880]: 2025-11-25 17:58:40.573906494 +0000 UTC m=+0.175698369 container init cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:58:40 np0005535469 podman[463880]: 2025-11-25 17:58:40.584528552 +0000 UTC m=+0.186320437 container start cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 12:58:40 np0005535469 podman[463880]: 2025-11-25 17:58:40.589102496 +0000 UTC m=+0.190894391 container attach cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:58:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:58:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:58:41 np0005535469 nova_compute[254092]: 2025-11-25 17:58:41.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:41 np0005535469 competent_shirley[463896]: {
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "osd_id": 1,
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "type": "bluestore"
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:    },
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "osd_id": 2,
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "type": "bluestore"
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:    },
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "osd_id": 0,
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:        "type": "bluestore"
Nov 25 12:58:41 np0005535469 competent_shirley[463896]:    }
Nov 25 12:58:41 np0005535469 competent_shirley[463896]: }
Nov 25 12:58:41 np0005535469 systemd[1]: libpod-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope: Deactivated successfully.
Nov 25 12:58:41 np0005535469 podman[463880]: 2025-11-25 17:58:41.667079346 +0000 UTC m=+1.268871201 container died cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:58:41 np0005535469 systemd[1]: libpod-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope: Consumed 1.082s CPU time.
Nov 25 12:58:41 np0005535469 systemd[1]: var-lib-containers-storage-overlay-8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3-merged.mount: Deactivated successfully.
Nov 25 12:58:41 np0005535469 podman[463880]: 2025-11-25 17:58:41.778274053 +0000 UTC m=+1.380065898 container remove cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 12:58:41 np0005535469 systemd[1]: libpod-conmon-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope: Deactivated successfully.
Nov 25 12:58:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:58:41 np0005535469 podman[463932]: 2025-11-25 17:58:41.817617331 +0000 UTC m=+0.072759415 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 12:58:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:58:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:58:41 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:58:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e1a4889d-9700-402e-8eb6-37a7e0946881 does not exist
Nov 25 12:58:41 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a64333cc-9fce-4bb2-b2c7-c0affcaa51bf does not exist
Nov 25 12:58:41 np0005535469 podman[463940]: 2025-11-25 17:58:41.83747143 +0000 UTC m=+0.092851421 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 12:58:41 np0005535469 podman[463941]: 2025-11-25 17:58:41.840338457 +0000 UTC m=+0.093257351 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:58:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4077: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:58:42 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:58:43 np0005535469 nova_compute[254092]: 2025-11-25 17:58:43.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4078: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4079: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:46 np0005535469 nova_compute[254092]: 2025-11-25 17:58:46.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4080: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:48 np0005535469 nova_compute[254092]: 2025-11-25 17:58:48.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4081: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:51 np0005535469 nova_compute[254092]: 2025-11-25 17:58:51.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4082: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:58:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:58:53 np0005535469 nova_compute[254092]: 2025-11-25 17:58:53.837 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4083: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:58:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2738161178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:58:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:58:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2738161178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:58:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:58:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4084: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:56 np0005535469 nova_compute[254092]: 2025-11-25 17:58:56.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:58:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4085: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:58:58 np0005535469 nova_compute[254092]: 2025-11-25 17:58:58.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4086: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.848511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540848534, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1955, "num_deletes": 250, "total_data_size": 3309509, "memory_usage": 3364008, "flush_reason": "Manual Compaction"}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540863633, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 1897128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82058, "largest_seqno": 84012, "table_properties": {"data_size": 1890749, "index_size": 3260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16461, "raw_average_key_size": 20, "raw_value_size": 1876615, "raw_average_value_size": 2363, "num_data_blocks": 150, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093318, "oldest_key_time": 1764093318, "file_creation_time": 1764093540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 15211 microseconds, and 7414 cpu microseconds.
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.863716) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 1897128 bytes OK
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.863738) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.865180) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.865194) EVENT_LOG_v1 {"time_micros": 1764093540865189, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.865214) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 3301262, prev total WAL file size 3301262, number of live WAL files 2.
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.866211) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353131' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(1852KB)], [194(10MB)]
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540866236, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 12851139, "oldest_snapshot_seqno": -1}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 10019 keys, 10878473 bytes, temperature: kUnknown
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540931996, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10878473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10817903, "index_size": 34409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 262823, "raw_average_key_size": 26, "raw_value_size": 10645243, "raw_average_value_size": 1062, "num_data_blocks": 1328, "num_entries": 10019, "num_filter_entries": 10019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.932265) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10878473 bytes
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.933707) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.2 rd, 165.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.4 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 10430, records dropped: 411 output_compression: NoCompression
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.933726) EVENT_LOG_v1 {"time_micros": 1764093540933716, "job": 122, "event": "compaction_finished", "compaction_time_micros": 65843, "compaction_time_cpu_micros": 27992, "output_level": 6, "num_output_files": 1, "total_output_size": 10878473, "num_input_records": 10430, "num_output_records": 10019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540934198, "job": 122, "event": "table_file_deletion", "file_number": 196}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540936378, "job": 122, "event": "table_file_deletion", "file_number": 194}
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.866148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:00 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:01 np0005535469 nova_compute[254092]: 2025-11-25 17:59:01.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4087: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:03 np0005535469 nova_compute[254092]: 2025-11-25 17:59:03.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4088: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4089: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:06 np0005535469 nova_compute[254092]: 2025-11-25 17:59:06.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4090: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:08 np0005535469 nova_compute[254092]: 2025-11-25 17:59:08.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:59:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4091: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:11 np0005535469 nova_compute[254092]: 2025-11-25 17:59:11.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4092: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:12 np0005535469 podman[464049]: 2025-11-25 17:59:12.670713296 +0000 UTC m=+0.071244735 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 12:59:12 np0005535469 podman[464048]: 2025-11-25 17:59:12.676484412 +0000 UTC m=+0.079690354 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 12:59:12 np0005535469 podman[464050]: 2025-11-25 17:59:12.70771211 +0000 UTC m=+0.107323124 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 12:59:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:59:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:59:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:59:13.705 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:59:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 17:59:13.705 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:59:13 np0005535469 nova_compute[254092]: 2025-11-25 17:59:13.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4093: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.291907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554291943, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 347, "num_deletes": 251, "total_data_size": 206013, "memory_usage": 213720, "flush_reason": "Manual Compaction"}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554294849, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 204447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84013, "largest_seqno": 84359, "table_properties": {"data_size": 202256, "index_size": 354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5369, "raw_average_key_size": 18, "raw_value_size": 198035, "raw_average_value_size": 680, "num_data_blocks": 16, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093542, "oldest_key_time": 1764093542, "file_creation_time": 1764093554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 2984 microseconds, and 1100 cpu microseconds.
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.294887) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 204447 bytes OK
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.294906) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296069) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296085) EVENT_LOG_v1 {"time_micros": 1764093554296080, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296103) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 203669, prev total WAL file size 203669, number of live WAL files 2.
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296493) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(199KB)], [197(10MB)]
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554296535, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 11082920, "oldest_snapshot_seqno": -1}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 9801 keys, 9361557 bytes, temperature: kUnknown
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554367037, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 9361557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9303898, "index_size": 32074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 258973, "raw_average_key_size": 26, "raw_value_size": 9136497, "raw_average_value_size": 932, "num_data_blocks": 1221, "num_entries": 9801, "num_filter_entries": 9801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.367287) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 9361557 bytes
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.368874) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.0 rd, 132.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(100.0) write-amplify(45.8) OK, records in: 10310, records dropped: 509 output_compression: NoCompression
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.368894) EVENT_LOG_v1 {"time_micros": 1764093554368885, "job": 124, "event": "compaction_finished", "compaction_time_micros": 70581, "compaction_time_cpu_micros": 32398, "output_level": 6, "num_output_files": 1, "total_output_size": 9361557, "num_input_records": 10310, "num_output_records": 9801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554369043, "job": 124, "event": "table_file_deletion", "file_number": 199}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554370840, "job": 124, "event": "table_file_deletion", "file_number": 197}
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:14 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 12:59:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4094: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:16 np0005535469 nova_compute[254092]: 2025-11-25 17:59:16.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:16 np0005535469 nova_compute[254092]: 2025-11-25 17:59:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4095: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:18 np0005535469 nova_compute[254092]: 2025-11-25 17:59:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:18 np0005535469 nova_compute[254092]: 2025-11-25 17:59:18.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4096: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:20 np0005535469 nova_compute[254092]: 2025-11-25 17:59:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:21 np0005535469 nova_compute[254092]: 2025-11-25 17:59:21.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4097: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:23 np0005535469 nova_compute[254092]: 2025-11-25 17:59:23.493 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:23 np0005535469 nova_compute[254092]: 2025-11-25 17:59:23.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4098: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.516 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.549 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.549 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 12:59:24 np0005535469 nova_compute[254092]: 2025-11-25 17:59:24.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 12:59:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:59:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672255322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.040 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.265 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.267 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3624MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.268 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.269 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.346 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.347 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.372 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 12:59:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 12:59:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251580964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.801 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.808 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.823 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.825 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 12:59:25 np0005535469 nova_compute[254092]: 2025-11-25 17:59:25.826 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 12:59:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4099: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:26 np0005535469 nova_compute[254092]: 2025-11-25 17:59:26.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4100: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:28 np0005535469 nova_compute[254092]: 2025-11-25 17:59:28.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:29 np0005535469 nova_compute[254092]: 2025-11-25 17:59:29.813 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:59:29 np0005535469 nova_compute[254092]: 2025-11-25 17:59:29.815 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:59:29 np0005535469 nova_compute[254092]: 2025-11-25 17:59:29.815 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 12:59:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4101: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:31 np0005535469 nova_compute[254092]: 2025-11-25 17:59:31.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4102: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:33 np0005535469 nova_compute[254092]: 2025-11-25 17:59:33.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4103: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:34 np0005535469 nova_compute[254092]: 2025-11-25 17:59:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:59:34 np0005535469 nova_compute[254092]: 2025-11-25 17:59:34.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 12:59:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4104: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:36 np0005535469 nova_compute[254092]: 2025-11-25 17:59:36.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:37 np0005535469 nova_compute[254092]: 2025-11-25 17:59:37.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 12:59:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4105: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:38 np0005535469 nova_compute[254092]: 2025-11-25 17:59:38.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4106: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:59:40
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', '.mgr']
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:59:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 12:59:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 12:59:41 np0005535469 nova_compute[254092]: 2025-11-25 17:59:41.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4107: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:59:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 043169e6-03c9-41f2-8f3b-3fdb5a0c52f3 does not exist
Nov 25 12:59:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2f06f259-f02e-43d9-a607-fc7827b5a53d does not exist
Nov 25 12:59:42 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2eeaaed8-b6e5-4062-838b-4534866f5909 does not exist
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 12:59:42 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 12:59:42 np0005535469 podman[464311]: 2025-11-25 17:59:42.972499071 +0000 UTC m=+0.076143457 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 12:59:42 np0005535469 podman[464310]: 2025-11-25 17:59:42.990619563 +0000 UTC m=+0.093397916 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 12:59:43 np0005535469 podman[464312]: 2025-11-25 17:59:43.006312898 +0000 UTC m=+0.108171486 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Nov 25 12:59:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 12:59:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 12:59:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:59:43 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.49138522 +0000 UTC m=+0.038622918 container create 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:59:43 np0005535469 systemd[1]: Started libpod-conmon-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope.
Nov 25 12:59:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.473901526 +0000 UTC m=+0.021139244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.586590214 +0000 UTC m=+0.133827952 container init 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.593918473 +0000 UTC m=+0.141156171 container start 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.597298864 +0000 UTC m=+0.144536602 container attach 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 12:59:43 np0005535469 condescending_euler[464502]: 167 167
Nov 25 12:59:43 np0005535469 systemd[1]: libpod-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope: Deactivated successfully.
Nov 25 12:59:43 np0005535469 conmon[464502]: conmon 5c9c71aff816ae3d6ef2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope/container/memory.events
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.602855885 +0000 UTC m=+0.150093583 container died 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:59:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0edf2a1332a5a9c336c55154110dc8b1f7ad1001ce2644f196070f9f19f259c6-merged.mount: Deactivated successfully.
Nov 25 12:59:43 np0005535469 podman[464486]: 2025-11-25 17:59:43.647534628 +0000 UTC m=+0.194772326 container remove 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:59:43 np0005535469 systemd[1]: libpod-conmon-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope: Deactivated successfully.
Nov 25 12:59:43 np0005535469 podman[464526]: 2025-11-25 17:59:43.805878375 +0000 UTC m=+0.044450008 container create 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:59:43 np0005535469 systemd[1]: Started libpod-conmon-629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e.scope.
Nov 25 12:59:43 np0005535469 nova_compute[254092]: 2025-11-25 17:59:43.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 12:59:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:59:43 np0005535469 podman[464526]: 2025-11-25 17:59:43.78580665 +0000 UTC m=+0.024378303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:59:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:43 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:43 np0005535469 podman[464526]: 2025-11-25 17:59:43.89380518 +0000 UTC m=+0.132376843 container init 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 12:59:43 np0005535469 podman[464526]: 2025-11-25 17:59:43.901501359 +0000 UTC m=+0.140072992 container start 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 12:59:43 np0005535469 podman[464526]: 2025-11-25 17:59:43.904388047 +0000 UTC m=+0.142959680 container attach 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:59:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4108: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:44 np0005535469 gracious_noether[464542]: --> passed data devices: 0 physical, 3 LVM
Nov 25 12:59:44 np0005535469 gracious_noether[464542]: --> relative data size: 1.0
Nov 25 12:59:44 np0005535469 gracious_noether[464542]: --> All data devices are unavailable
Nov 25 12:59:44 np0005535469 systemd[1]: libpod-629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e.scope: Deactivated successfully.
Nov 25 12:59:44 np0005535469 podman[464526]: 2025-11-25 17:59:44.926948184 +0000 UTC m=+1.165519827 container died 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:59:44 np0005535469 systemd[1]: var-lib-containers-storage-overlay-096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf-merged.mount: Deactivated successfully.
Nov 25 12:59:44 np0005535469 podman[464526]: 2025-11-25 17:59:44.982525392 +0000 UTC m=+1.221097025 container remove 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 12:59:44 np0005535469 systemd[1]: libpod-conmon-629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e.scope: Deactivated successfully.
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.575704987 +0000 UTC m=+0.042672139 container create 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 12:59:45 np0005535469 systemd[1]: Started libpod-conmon-8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d.scope.
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.555826558 +0000 UTC m=+0.022793770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:59:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.665886115 +0000 UTC m=+0.132853277 container init 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.672881064 +0000 UTC m=+0.139848226 container start 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.676000769 +0000 UTC m=+0.142967931 container attach 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:59:45 np0005535469 mystifying_mendeleev[464741]: 167 167
Nov 25 12:59:45 np0005535469 systemd[1]: libpod-8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d.scope: Deactivated successfully.
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.678143227 +0000 UTC m=+0.145110389 container died 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 12:59:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1d89c354e42c66b347609f34995d65435dfd84c455e89ad083d775cb0ee8c9aa-merged.mount: Deactivated successfully.
Nov 25 12:59:45 np0005535469 podman[464724]: 2025-11-25 17:59:45.718389999 +0000 UTC m=+0.185357161 container remove 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 12:59:45 np0005535469 systemd[1]: libpod-conmon-8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d.scope: Deactivated successfully.
Nov 25 12:59:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:45 np0005535469 podman[464765]: 2025-11-25 17:59:45.874290739 +0000 UTC m=+0.039368699 container create 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 12:59:45 np0005535469 systemd[1]: Started libpod-conmon-22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d.scope.
Nov 25 12:59:45 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:59:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:45 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:45 np0005535469 podman[464765]: 2025-11-25 17:59:45.952418679 +0000 UTC m=+0.117496659 container init 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:59:45 np0005535469 podman[464765]: 2025-11-25 17:59:45.858091069 +0000 UTC m=+0.023169039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:59:45 np0005535469 podman[464765]: 2025-11-25 17:59:45.959984525 +0000 UTC m=+0.125062485 container start 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:59:45 np0005535469 podman[464765]: 2025-11-25 17:59:45.962686268 +0000 UTC m=+0.127764228 container attach 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 12:59:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4109: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:46 np0005535469 nova_compute[254092]: 2025-11-25 17:59:46.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]: {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:    "0": [
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:        {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "devices": [
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "/dev/loop3"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            ],
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_name": "ceph_lv0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_size": "21470642176",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "name": "ceph_lv0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "tags": {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cluster_name": "ceph",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.crush_device_class": "",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.encrypted": "0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osd_id": "0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.type": "block",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.vdo": "0"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            },
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "type": "block",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "vg_name": "ceph_vg0"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:        }
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:    ],
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:    "1": [
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:        {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "devices": [
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "/dev/loop4"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            ],
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_name": "ceph_lv1",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_size": "21470642176",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "name": "ceph_lv1",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "tags": {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cluster_name": "ceph",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.crush_device_class": "",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.encrypted": "0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osd_id": "1",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.type": "block",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.vdo": "0"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            },
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "type": "block",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "vg_name": "ceph_vg1"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:        }
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:    ],
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:    "2": [
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:        {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "devices": [
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "/dev/loop5"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            ],
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_name": "ceph_lv2",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_size": "21470642176",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "name": "ceph_lv2",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "tags": {
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cephx_lockbox_secret": "",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.cluster_name": "ceph",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.crush_device_class": "",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.encrypted": "0",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osd_id": "2",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.type": "block",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:                "ceph.vdo": "0"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            },
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "type": "block",
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:            "vg_name": "ceph_vg2"
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:        }
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]:    ]
Nov 25 12:59:46 np0005535469 dreamy_stonebraker[464782]: }
Nov 25 12:59:46 np0005535469 systemd[1]: libpod-22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d.scope: Deactivated successfully.
Nov 25 12:59:46 np0005535469 podman[464765]: 2025-11-25 17:59:46.720692486 +0000 UTC m=+0.885770446 container died 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:59:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f-merged.mount: Deactivated successfully.
Nov 25 12:59:46 np0005535469 podman[464765]: 2025-11-25 17:59:46.772355708 +0000 UTC m=+0.937433668 container remove 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 12:59:46 np0005535469 systemd[1]: libpod-conmon-22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d.scope: Deactivated successfully.
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.440580159 +0000 UTC m=+0.047412497 container create aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:59:47 np0005535469 systemd[1]: Started libpod-conmon-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope.
Nov 25 12:59:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.421575253 +0000 UTC m=+0.028407571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.528194356 +0000 UTC m=+0.135026654 container init aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.535799553 +0000 UTC m=+0.142631881 container start aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.539476282 +0000 UTC m=+0.146308600 container attach aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 12:59:47 np0005535469 nice_goodall[464961]: 167 167
Nov 25 12:59:47 np0005535469 systemd[1]: libpod-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope: Deactivated successfully.
Nov 25 12:59:47 np0005535469 conmon[464961]: conmon aa6f9312714aad220207 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope/container/memory.events
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.544937411 +0000 UTC m=+0.151769729 container died aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 12:59:47 np0005535469 systemd[1]: var-lib-containers-storage-overlay-882c959bb73b8af3ac7f47a872ec394082dcca84288800df7918bff07f85ece1-merged.mount: Deactivated successfully.
Nov 25 12:59:47 np0005535469 podman[464945]: 2025-11-25 17:59:47.589077159 +0000 UTC m=+0.195909457 container remove aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 12:59:47 np0005535469 systemd[1]: libpod-conmon-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope: Deactivated successfully.
Nov 25 12:59:47 np0005535469 podman[464983]: 2025-11-25 17:59:47.818044862 +0000 UTC m=+0.057901713 container create 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 12:59:47 np0005535469 systemd[1]: Started libpod-conmon-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope.
Nov 25 12:59:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 12:59:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 12:59:47 np0005535469 podman[464983]: 2025-11-25 17:59:47.799665913 +0000 UTC m=+0.039522824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 12:59:47 np0005535469 podman[464983]: 2025-11-25 17:59:47.898400192 +0000 UTC m=+0.138257073 container init 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 12:59:47 np0005535469 podman[464983]: 2025-11-25 17:59:47.904777335 +0000 UTC m=+0.144634186 container start 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 12:59:47 np0005535469 podman[464983]: 2025-11-25 17:59:47.907957521 +0000 UTC m=+0.147814382 container attach 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 12:59:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4110: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:48 np0005535469 nova_compute[254092]: 2025-11-25 17:59:48.875 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:48 np0005535469 cool_margulis[464999]: {
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "osd_id": 1,
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "type": "bluestore"
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:    },
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "osd_id": 2,
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "type": "bluestore"
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:    },
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "osd_id": 0,
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:        "type": "bluestore"
Nov 25 12:59:48 np0005535469 cool_margulis[464999]:    }
Nov 25 12:59:48 np0005535469 cool_margulis[464999]: }
Nov 25 12:59:49 np0005535469 systemd[1]: libpod-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope: Deactivated successfully.
Nov 25 12:59:49 np0005535469 systemd[1]: libpod-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope: Consumed 1.108s CPU time.
Nov 25 12:59:49 np0005535469 conmon[464999]: conmon 0866a3c73a9eeadeb902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope/container/memory.events
Nov 25 12:59:49 np0005535469 podman[464983]: 2025-11-25 17:59:49.009334266 +0000 UTC m=+1.249191137 container died 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 12:59:49 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9-merged.mount: Deactivated successfully.
Nov 25 12:59:49 np0005535469 podman[464983]: 2025-11-25 17:59:49.076224771 +0000 UTC m=+1.316081632 container remove 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 12:59:49 np0005535469 systemd[1]: libpod-conmon-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope: Deactivated successfully.
Nov 25 12:59:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 12:59:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:59:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 12:59:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:59:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e2d9804f-719f-4bf6-92da-03a7660c6e43 does not exist
Nov 25 12:59:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4dc6530f-9d34-4b55-ab6f-b073f8be3cd7 does not exist
Nov 25 12:59:49 np0005535469 nova_compute[254092]: 2025-11-25 17:59:49.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:59:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 12:59:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4111: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:50 np0005535469 nova_compute[254092]: 2025-11-25 17:59:50.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 12:59:50 np0005535469 nova_compute[254092]: 2025-11-25 17:59:50.508 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 12:59:50 np0005535469 nova_compute[254092]: 2025-11-25 17:59:50.537 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 12:59:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:51 np0005535469 nova_compute[254092]: 2025-11-25 17:59:51.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4112: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 12:59:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 12:59:53 np0005535469 nova_compute[254092]: 2025-11-25 17:59:53.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4113: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 12:59:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2940417034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 12:59:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 12:59:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2940417034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 12:59:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 12:59:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4114: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:56 np0005535469 nova_compute[254092]: 2025-11-25 17:59:56.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 12:59:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4115: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 12:59:58 np0005535469 nova_compute[254092]: 2025-11-25 17:59:58.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4116: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:01 np0005535469 nova_compute[254092]: 2025-11-25 18:00:01.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4117: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:03 np0005535469 nova_compute[254092]: 2025-11-25 18:00:03.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4118: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4119: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:06 np0005535469 nova_compute[254092]: 2025-11-25 18:00:06.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4120: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:08 np0005535469 nova_compute[254092]: 2025-11-25 18:00:08.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:00:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4121: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:11 np0005535469 nova_compute[254092]: 2025-11-25 18:00:11.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4122: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:13 np0005535469 podman[465095]: 2025-11-25 18:00:13.685572095 +0000 UTC m=+0.088782150 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:00:13 np0005535469 podman[465094]: 2025-11-25 18:00:13.694312063 +0000 UTC m=+0.107607682 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 13:00:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:00:13.705 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:00:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:00:13.706 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:00:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:00:13.706 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:00:13 np0005535469 podman[465096]: 2025-11-25 18:00:13.769088412 +0000 UTC m=+0.165667307 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 13:00:13 np0005535469 nova_compute[254092]: 2025-11-25 18:00:13.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4123: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:15 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4124: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:16 np0005535469 nova_compute[254092]: 2025-11-25 18:00:16.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4125: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:18 np0005535469 nova_compute[254092]: 2025-11-25 18:00:18.526 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:18 np0005535469 nova_compute[254092]: 2025-11-25 18:00:18.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:19 np0005535469 nova_compute[254092]: 2025-11-25 18:00:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4126: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:20 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:21 np0005535469 nova_compute[254092]: 2025-11-25 18:00:21.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4127: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:22 np0005535469 nova_compute[254092]: 2025-11-25 18:00:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:23 np0005535469 nova_compute[254092]: 2025-11-25 18:00:23.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:23 np0005535469 nova_compute[254092]: 2025-11-25 18:00:23.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4128: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4129: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.523 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.524 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.549 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:00:26 np0005535469 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:00:27 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:00:27 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024088845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.563 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.732 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.734 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3616MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.734 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.734 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.795 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.796 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:00:27 np0005535469 nova_compute[254092]: 2025-11-25 18:00:27.812 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:00:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:00:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080625504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:00:28 np0005535469 nova_compute[254092]: 2025-11-25 18:00:28.232 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:00:28 np0005535469 nova_compute[254092]: 2025-11-25 18:00:28.239 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:00:28 np0005535469 nova_compute[254092]: 2025-11-25 18:00:28.254 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:00:28 np0005535469 nova_compute[254092]: 2025-11-25 18:00:28.255 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:00:28 np0005535469 nova_compute[254092]: 2025-11-25 18:00:28.255 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:00:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4130: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:28 np0005535469 nova_compute[254092]: 2025-11-25 18:00:28.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:30 np0005535469 nova_compute[254092]: 2025-11-25 18:00:30.227 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4131: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:31 np0005535469 nova_compute[254092]: 2025-11-25 18:00:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:31 np0005535469 nova_compute[254092]: 2025-11-25 18:00:31.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:00:31 np0005535469 nova_compute[254092]: 2025-11-25 18:00:31.499 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4132: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:34 np0005535469 nova_compute[254092]: 2025-11-25 18:00:34.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4133: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4134: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:36 np0005535469 nova_compute[254092]: 2025-11-25 18:00:36.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4135: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:39 np0005535469 nova_compute[254092]: 2025-11-25 18:00:39.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:39 np0005535469 nova_compute[254092]: 2025-11-25 18:00:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4136: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:00:40
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:00:40 np0005535469 nova_compute[254092]: 2025-11-25 18:00:40.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.911517) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093640911556, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 255, "total_data_size": 1270155, "memory_usage": 1292968, "flush_reason": "Manual Compaction"}
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:00:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093640959176, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 1258100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84360, "largest_seqno": 85289, "table_properties": {"data_size": 1253480, "index_size": 2207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9852, "raw_average_key_size": 19, "raw_value_size": 1244236, "raw_average_value_size": 2415, "num_data_blocks": 99, "num_entries": 515, "num_filter_entries": 515, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093554, "oldest_key_time": 1764093554, "file_creation_time": 1764093640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 47700 microseconds, and 4298 cpu microseconds.
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.959216) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 1258100 bytes OK
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.959234) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962472) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962485) EVENT_LOG_v1 {"time_micros": 1764093640962481, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962502) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 1265670, prev total WAL file size 1265670, number of live WAL files 2.
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962970) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373639' seq:72057594037927935, type:22 .. '6C6F676D0034303230' seq:0, type:0; will stop at (end)
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(1228KB)], [200(9142KB)]
Nov 25 13:00:40 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093640963026, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 10619657, "oldest_snapshot_seqno": -1}
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9794 keys, 10520083 bytes, temperature: kUnknown
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093641129769, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 10520083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10460620, "index_size": 33878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 259715, "raw_average_key_size": 26, "raw_value_size": 10291483, "raw_average_value_size": 1050, "num_data_blocks": 1297, "num_entries": 9794, "num_filter_entries": 9794, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.130183) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 10520083 bytes
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.134997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.6 rd, 63.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.9 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(16.8) write-amplify(8.4) OK, records in: 10316, records dropped: 522 output_compression: NoCompression
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.135018) EVENT_LOG_v1 {"time_micros": 1764093641135010, "job": 126, "event": "compaction_finished", "compaction_time_micros": 166970, "compaction_time_cpu_micros": 26357, "output_level": 6, "num_output_files": 1, "total_output_size": 10520083, "num_input_records": 10316, "num_output_records": 9794, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093641135322, "job": 126, "event": "table_file_deletion", "file_number": 202}
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093641137066, "job": 126, "event": "table_file_deletion", "file_number": 200}
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:00:41 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:00:41 np0005535469 nova_compute[254092]: 2025-11-25 18:00:41.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4137: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:44 np0005535469 nova_compute[254092]: 2025-11-25 18:00:44.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4138: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:44 np0005535469 podman[465206]: 2025-11-25 18:00:44.64017533 +0000 UTC m=+0.049554756 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 13:00:44 np0005535469 podman[465205]: 2025-11-25 18:00:44.651379104 +0000 UTC m=+0.063965797 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 13:00:44 np0005535469 podman[465207]: 2025-11-25 18:00:44.698333627 +0000 UTC m=+0.105778170 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:00:45 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4139: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:46 np0005535469 nova_compute[254092]: 2025-11-25 18:00:46.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4140: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:49 np0005535469 nova_compute[254092]: 2025-11-25 18:00:49.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:50 np0005535469 podman[465440]: 2025-11-25 18:00:50.037225484 +0000 UTC m=+0.088141172 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:00:50 np0005535469 podman[465440]: 2025-11-25 18:00:50.13435603 +0000 UTC m=+0.185271708 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 13:00:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4141: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:51 np0005535469 nova_compute[254092]: 2025-11-25 18:00:51.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:00:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:00:52 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4142: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:00:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:00:52 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:00:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ffa0e4cb-95fe-4786-9d56-dcc87ba90dd9 does not exist
Nov 25 13:00:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c441c53c-4bb7-48eb-98f2-922b1e73c2c0 does not exist
Nov 25 13:00:53 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 5e2a8684-05f1-4f01-a52b-720ee8726403 does not exist
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:00:53 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:54.032322468 +0000 UTC m=+0.088088321 container create 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:53.977921022 +0000 UTC m=+0.033686855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:00:54 np0005535469 nova_compute[254092]: 2025-11-25 18:00:54.104 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:54 np0005535469 systemd[1]: Started libpod-conmon-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope.
Nov 25 13:00:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:00:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4143: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:54.422166945 +0000 UTC m=+0.477932848 container init 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:54.433119243 +0000 UTC m=+0.488885056 container start 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:00:54 np0005535469 systemd[1]: libpod-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope: Deactivated successfully.
Nov 25 13:00:54 np0005535469 silly_wilson[465882]: 167 167
Nov 25 13:00:54 np0005535469 conmon[465882]: conmon 8d2323bca6d74cf08328 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope/container/memory.events
Nov 25 13:00:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:00:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:00:54 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:54.480875259 +0000 UTC m=+0.536641072 container attach 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:54.481410263 +0000 UTC m=+0.537176076 container died 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 13:00:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-da949b1ec941125a0c97dd7e23205ffca60ddb25ed9da9729bfbb563e5862428-merged.mount: Deactivated successfully.
Nov 25 13:00:54 np0005535469 podman[465865]: 2025-11-25 18:00:54.59037394 +0000 UTC m=+0.646139753 container remove 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 13:00:54 np0005535469 systemd[1]: libpod-conmon-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope: Deactivated successfully.
Nov 25 13:00:54 np0005535469 podman[465906]: 2025-11-25 18:00:54.795724283 +0000 UTC m=+0.038847086 container create 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 13:00:54 np0005535469 systemd[1]: Started libpod-conmon-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope.
Nov 25 13:00:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:00:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:54 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:54 np0005535469 podman[465906]: 2025-11-25 18:00:54.779250375 +0000 UTC m=+0.022373198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:00:54 np0005535469 podman[465906]: 2025-11-25 18:00:54.88188008 +0000 UTC m=+0.125002913 container init 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:00:54 np0005535469 podman[465906]: 2025-11-25 18:00:54.895609202 +0000 UTC m=+0.138732005 container start 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 13:00:54 np0005535469 podman[465906]: 2025-11-25 18:00:54.899528889 +0000 UTC m=+0.142651692 container attach 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:00:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:00:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973916735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:00:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:00:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973916735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:00:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:00:55 np0005535469 kind_wilbur[465922]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:00:55 np0005535469 kind_wilbur[465922]: --> relative data size: 1.0
Nov 25 13:00:55 np0005535469 kind_wilbur[465922]: --> All data devices are unavailable
Nov 25 13:00:55 np0005535469 systemd[1]: libpod-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope: Deactivated successfully.
Nov 25 13:00:55 np0005535469 systemd[1]: libpod-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope: Consumed 1.021s CPU time.
Nov 25 13:00:55 np0005535469 podman[465906]: 2025-11-25 18:00:55.967086466 +0000 UTC m=+1.210209269 container died 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:00:56 np0005535469 systemd[1]: var-lib-containers-storage-overlay-663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc-merged.mount: Deactivated successfully.
Nov 25 13:00:56 np0005535469 podman[465906]: 2025-11-25 18:00:56.035897903 +0000 UTC m=+1.279020706 container remove 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 13:00:56 np0005535469 systemd[1]: libpod-conmon-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope: Deactivated successfully.
Nov 25 13:00:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4144: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:56 np0005535469 nova_compute[254092]: 2025-11-25 18:00:56.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:56 np0005535469 podman[466103]: 2025-11-25 18:00:56.731913619 +0000 UTC m=+0.021697670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:00:56 np0005535469 podman[466103]: 2025-11-25 18:00:56.88708573 +0000 UTC m=+0.176869811 container create 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 13:00:57 np0005535469 systemd[1]: Started libpod-conmon-4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f.scope.
Nov 25 13:00:57 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:00:57 np0005535469 podman[466103]: 2025-11-25 18:00:57.237008654 +0000 UTC m=+0.526792705 container init 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:00:57 np0005535469 podman[466103]: 2025-11-25 18:00:57.249971496 +0000 UTC m=+0.539755567 container start 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:00:57 np0005535469 naughty_banach[466120]: 167 167
Nov 25 13:00:57 np0005535469 systemd[1]: libpod-4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f.scope: Deactivated successfully.
Nov 25 13:00:57 np0005535469 podman[466103]: 2025-11-25 18:00:57.60096222 +0000 UTC m=+0.890746291 container attach 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 13:00:57 np0005535469 podman[466103]: 2025-11-25 18:00:57.601752471 +0000 UTC m=+0.891536512 container died 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:00:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e12758bbfd5fe5331cc3a74a7eb68890d142137527505a0438cffa9ca099ef88-merged.mount: Deactivated successfully.
Nov 25 13:00:57 np0005535469 podman[466103]: 2025-11-25 18:00:57.878366277 +0000 UTC m=+1.168150308 container remove 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 13:00:57 np0005535469 systemd[1]: libpod-conmon-4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f.scope: Deactivated successfully.
Nov 25 13:00:58 np0005535469 podman[466144]: 2025-11-25 18:00:58.051673669 +0000 UTC m=+0.025444181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:00:58 np0005535469 podman[466144]: 2025-11-25 18:00:58.16813621 +0000 UTC m=+0.141906652 container create 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 13:00:58 np0005535469 systemd[1]: Started libpod-conmon-0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36.scope.
Nov 25 13:00:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4145: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:00:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:00:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:00:58 np0005535469 podman[466144]: 2025-11-25 18:00:58.350288322 +0000 UTC m=+0.324058794 container init 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:00:58 np0005535469 podman[466144]: 2025-11-25 18:00:58.365223337 +0000 UTC m=+0.338993779 container start 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 13:00:58 np0005535469 podman[466144]: 2025-11-25 18:00:58.449225937 +0000 UTC m=+0.422996339 container attach 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 13:00:59 np0005535469 nova_compute[254092]: 2025-11-25 18:00:59.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:00:59 np0005535469 objective_williamson[466161]: {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:    "0": [
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:        {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "devices": [
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "/dev/loop3"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            ],
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_name": "ceph_lv0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_size": "21470642176",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "name": "ceph_lv0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "tags": {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cluster_name": "ceph",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.crush_device_class": "",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.encrypted": "0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osd_id": "0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.type": "block",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.vdo": "0"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            },
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "type": "block",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "vg_name": "ceph_vg0"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:        }
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:    ],
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:    "1": [
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:        {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "devices": [
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "/dev/loop4"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            ],
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_name": "ceph_lv1",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_size": "21470642176",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "name": "ceph_lv1",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "tags": {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cluster_name": "ceph",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.crush_device_class": "",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.encrypted": "0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osd_id": "1",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.type": "block",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.vdo": "0"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            },
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "type": "block",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "vg_name": "ceph_vg1"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:        }
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:    ],
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:    "2": [
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:        {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "devices": [
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "/dev/loop5"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            ],
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_name": "ceph_lv2",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_size": "21470642176",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "name": "ceph_lv2",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "tags": {
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.cluster_name": "ceph",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.crush_device_class": "",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.encrypted": "0",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osd_id": "2",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.type": "block",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:                "ceph.vdo": "0"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            },
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "type": "block",
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:            "vg_name": "ceph_vg2"
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:        }
Nov 25 13:00:59 np0005535469 objective_williamson[466161]:    ]
Nov 25 13:00:59 np0005535469 objective_williamson[466161]: }
Nov 25 13:00:59 np0005535469 systemd[1]: libpod-0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36.scope: Deactivated successfully.
Nov 25 13:00:59 np0005535469 podman[466144]: 2025-11-25 18:00:59.234952546 +0000 UTC m=+1.208723058 container died 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:00:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af-merged.mount: Deactivated successfully.
Nov 25 13:00:59 np0005535469 podman[466144]: 2025-11-25 18:00:59.354289855 +0000 UTC m=+1.328060257 container remove 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:00:59 np0005535469 systemd[1]: libpod-conmon-0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36.scope: Deactivated successfully.
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.0370209 +0000 UTC m=+0.039993927 container create fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:01:00 np0005535469 systemd[1]: Started libpod-conmon-fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd.scope.
Nov 25 13:01:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.111293075 +0000 UTC m=+0.114266112 container init fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.016576495 +0000 UTC m=+0.019549562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.123362582 +0000 UTC m=+0.126335629 container start fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:01:00 np0005535469 intelligent_khorana[466342]: 167 167
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.129730726 +0000 UTC m=+0.132703813 container attach fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:01:00 np0005535469 systemd[1]: libpod-fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd.scope: Deactivated successfully.
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.131575225 +0000 UTC m=+0.134548272 container died fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:01:00 np0005535469 systemd[1]: var-lib-containers-storage-overlay-cc63579565e7e4cff7554d7cbbbfab95b4a8dd5d8be9422c763b2ecfa66e94c3-merged.mount: Deactivated successfully.
Nov 25 13:01:00 np0005535469 podman[466325]: 2025-11-25 18:01:00.187587305 +0000 UTC m=+0.190560342 container remove fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:01:00 np0005535469 systemd[1]: libpod-conmon-fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd.scope: Deactivated successfully.
Nov 25 13:01:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4146: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:00 np0005535469 podman[466365]: 2025-11-25 18:01:00.4155044 +0000 UTC m=+0.079333484 container create 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 13:01:00 np0005535469 podman[466365]: 2025-11-25 18:01:00.357042174 +0000 UTC m=+0.020871268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:01:00 np0005535469 systemd[1]: Started libpod-conmon-6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37.scope.
Nov 25 13:01:00 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:01:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:01:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:01:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:01:00 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:01:00 np0005535469 podman[466365]: 2025-11-25 18:01:00.59461024 +0000 UTC m=+0.258439314 container init 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:01:00 np0005535469 podman[466365]: 2025-11-25 18:01:00.602594857 +0000 UTC m=+0.266423931 container start 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:01:00 np0005535469 podman[466365]: 2025-11-25 18:01:00.631359507 +0000 UTC m=+0.295188581 container attach 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 13:01:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:01 np0005535469 quirky_turing[466382]: {
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "osd_id": 1,
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "type": "bluestore"
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:    },
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "osd_id": 2,
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "type": "bluestore"
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:    },
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "osd_id": 0,
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:        "type": "bluestore"
Nov 25 13:01:01 np0005535469 quirky_turing[466382]:    }
Nov 25 13:01:01 np0005535469 quirky_turing[466382]: }
Nov 25 13:01:01 np0005535469 systemd[1]: libpod-6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37.scope: Deactivated successfully.
Nov 25 13:01:01 np0005535469 podman[466365]: 2025-11-25 18:01:01.546465877 +0000 UTC m=+1.210294991 container died 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 13:01:01 np0005535469 nova_compute[254092]: 2025-11-25 18:01:01.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:01 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3-merged.mount: Deactivated successfully.
Nov 25 13:01:01 np0005535469 podman[466365]: 2025-11-25 18:01:01.676505486 +0000 UTC m=+1.340334560 container remove 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:01:01 np0005535469 systemd[1]: libpod-conmon-6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37.scope: Deactivated successfully.
Nov 25 13:01:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:01:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:01:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:01:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:01:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f1552ad7-3bc2-45ee-9475-0e308dbf295c does not exist
Nov 25 13:01:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev dc8cd95b-fbd5-45e9-9dcc-5589bc99b19c does not exist
Nov 25 13:01:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:01:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:01:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4147: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:04 np0005535469 nova_compute[254092]: 2025-11-25 18:01:04.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4148: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4149: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:06 np0005535469 nova_compute[254092]: 2025-11-25 18:01:06.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4150: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:09 np0005535469 nova_compute[254092]: 2025-11-25 18:01:09.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:01:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4151: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:11 np0005535469 nova_compute[254092]: 2025-11-25 18:01:11.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4152: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:01:13.706 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:01:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:01:13.707 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:01:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:01:13.707 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:01:14 np0005535469 nova_compute[254092]: 2025-11-25 18:01:14.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4153: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:15 np0005535469 podman[466488]: 2025-11-25 18:01:15.659626108 +0000 UTC m=+0.067639777 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:01:15 np0005535469 podman[466489]: 2025-11-25 18:01:15.677952965 +0000 UTC m=+0.087530255 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 13:01:15 np0005535469 podman[466490]: 2025-11-25 18:01:15.683573698 +0000 UTC m=+0.093491718 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:01:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4154: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:16 np0005535469 nova_compute[254092]: 2025-11-25 18:01:16.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4155: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:18 np0005535469 nova_compute[254092]: 2025-11-25 18:01:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:19 np0005535469 nova_compute[254092]: 2025-11-25 18:01:19.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:19 np0005535469 nova_compute[254092]: 2025-11-25 18:01:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4156: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:21 np0005535469 nova_compute[254092]: 2025-11-25 18:01:21.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4157: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:23 np0005535469 nova_compute[254092]: 2025-11-25 18:01:23.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:23 np0005535469 nova_compute[254092]: 2025-11-25 18:01:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:24 np0005535469 nova_compute[254092]: 2025-11-25 18:01:24.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4158: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4159: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:26 np0005535469 nova_compute[254092]: 2025-11-25 18:01:26.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:27 np0005535469 nova_compute[254092]: 2025-11-25 18:01:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:27 np0005535469 nova_compute[254092]: 2025-11-25 18:01:27.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:01:27 np0005535469 nova_compute[254092]: 2025-11-25 18:01:27.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:01:27 np0005535469 nova_compute[254092]: 2025-11-25 18:01:27.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:01:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4160: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.538 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:01:28 np0005535469 nova_compute[254092]: 2025-11-25 18:01:28.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:01:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:01:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128121881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.016 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.209 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.210 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.210 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.211 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.373 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.373 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.505 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.655 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.655 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.682 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.714 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 13:01:29 np0005535469 nova_compute[254092]: 2025-11-25 18:01:29.733 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:01:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4161: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:01:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093183224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:01:30 np0005535469 nova_compute[254092]: 2025-11-25 18:01:30.370 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:01:30 np0005535469 nova_compute[254092]: 2025-11-25 18:01:30.375 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:01:30 np0005535469 nova_compute[254092]: 2025-11-25 18:01:30.393 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:01:30 np0005535469 nova_compute[254092]: 2025-11-25 18:01:30.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:01:30 np0005535469 nova_compute[254092]: 2025-11-25 18:01:30.395 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:01:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:31 np0005535469 nova_compute[254092]: 2025-11-25 18:01:31.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4162: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:34 np0005535469 nova_compute[254092]: 2025-11-25 18:01:34.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4163: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:35 np0005535469 nova_compute[254092]: 2025-11-25 18:01:35.392 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:35 np0005535469 nova_compute[254092]: 2025-11-25 18:01:35.393 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:01:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4164: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:36 np0005535469 nova_compute[254092]: 2025-11-25 18:01:36.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:01:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 18K writes, 85K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1308 writes, 5944 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 8.54 MB, 0.01 MB/s#012Interval WAL: 1308 writes, 1308 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.6      2.54              0.37        63    0.040       0      0       0.0       0.0#012  L6      1/0   10.03 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3    129.2    110.3      4.90              1.69        62    0.079    462K    33K       0.0       0.0#012 Sum      1/0   10.03 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3     85.2     86.5      7.44              2.06       125    0.059    462K    33K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   8.8     98.7    100.5      0.56              0.21        10    0.056     51K   2479       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    129.2    110.3      4.90              1.69        62    0.079    462K    33K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.3      2.49              0.37        62    0.040       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.1 total, 600.0 interval#012Flush(GB): cumulative 0.100, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.63 GB write, 0.08 MB/s write, 0.62 GB read, 0.08 MB/s read, 7.4 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 72.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000964 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4638,69.18 MB,22.7566%) FilterBlock(126,1.23 MB,0.405537%) IndexBlock(126,1.97 MB,0.647148%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 13:01:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4165: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:39 np0005535469 nova_compute[254092]: 2025-11-25 18:01:39.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:39 np0005535469 nova_compute[254092]: 2025-11-25 18:01:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4166: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:01:40
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:01:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:01:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:41 np0005535469 nova_compute[254092]: 2025-11-25 18:01:41.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4167: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:44 np0005535469 nova_compute[254092]: 2025-11-25 18:01:44.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4168: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4169: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:46 np0005535469 podman[466596]: 2025-11-25 18:01:46.66552657 +0000 UTC m=+0.077491423 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:01:46 np0005535469 podman[466595]: 2025-11-25 18:01:46.701579998 +0000 UTC m=+0.115303709 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 13:01:46 np0005535469 podman[466597]: 2025-11-25 18:01:46.766027137 +0000 UTC m=+0.163205579 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:01:46 np0005535469 nova_compute[254092]: 2025-11-25 18:01:46.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4170: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:49 np0005535469 nova_compute[254092]: 2025-11-25 18:01:49.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4171: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:51 np0005535469 nova_compute[254092]: 2025-11-25 18:01:51.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4172: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:01:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:01:54 np0005535469 nova_compute[254092]: 2025-11-25 18:01:54.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4173: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:01:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971686985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:01:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:01:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971686985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:01:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:01:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4174: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:56 np0005535469 nova_compute[254092]: 2025-11-25 18:01:56.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:01:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4175: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:01:59 np0005535469 nova_compute[254092]: 2025-11-25 18:01:59.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4176: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:01 np0005535469 nova_compute[254092]: 2025-11-25 18:02:01.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4177: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:02:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 517bbcb8-3996-49fc-81d4-2fef00f1c9a5 does not exist
Nov 25 13:02:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3debf604-0c90-4223-9a08-cf5d93a2dc63 does not exist
Nov 25 13:02:02 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c11892a3-94a3-4a00-95f1-c2aed5aacefc does not exist
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:02:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.57745082 +0000 UTC m=+0.066634118 container create 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 25 13:02:03 np0005535469 systemd[1]: Started libpod-conmon-78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4.scope.
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.546045628 +0000 UTC m=+0.035228966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:02:03 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.684846055 +0000 UTC m=+0.174029403 container init 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.699457421 +0000 UTC m=+0.188640679 container start 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.703762768 +0000 UTC m=+0.192946196 container attach 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 13:02:03 np0005535469 interesting_heisenberg[466943]: 167 167
Nov 25 13:02:03 np0005535469 systemd[1]: libpod-78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4.scope: Deactivated successfully.
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.710723216 +0000 UTC m=+0.199906494 container died 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:02:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-040e8612f55c13e0e9a8e0e230790ce012a0312476b56294bb08c34a22039de2-merged.mount: Deactivated successfully.
Nov 25 13:02:03 np0005535469 podman[466927]: 2025-11-25 18:02:03.761902985 +0000 UTC m=+0.251086273 container remove 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 13:02:03 np0005535469 systemd[1]: libpod-conmon-78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4.scope: Deactivated successfully.
Nov 25 13:02:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:02:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:02:03 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:02:03 np0005535469 podman[466968]: 2025-11-25 18:02:03.962129618 +0000 UTC m=+0.049080643 container create 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:02:04 np0005535469 systemd[1]: Started libpod-conmon-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope.
Nov 25 13:02:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:02:04 np0005535469 podman[466968]: 2025-11-25 18:02:03.940373778 +0000 UTC m=+0.027324813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:02:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:04 np0005535469 podman[466968]: 2025-11-25 18:02:04.046171259 +0000 UTC m=+0.133122274 container init 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 13:02:04 np0005535469 podman[466968]: 2025-11-25 18:02:04.053306322 +0000 UTC m=+0.140257317 container start 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:02:04 np0005535469 podman[466968]: 2025-11-25 18:02:04.057139817 +0000 UTC m=+0.144090812 container attach 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:02:04 np0005535469 nova_compute[254092]: 2025-11-25 18:02:04.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4178: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:05 np0005535469 suspicious_wing[466984]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:02:05 np0005535469 suspicious_wing[466984]: --> relative data size: 1.0
Nov 25 13:02:05 np0005535469 suspicious_wing[466984]: --> All data devices are unavailable
Nov 25 13:02:05 np0005535469 systemd[1]: libpod-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope: Deactivated successfully.
Nov 25 13:02:05 np0005535469 podman[466968]: 2025-11-25 18:02:05.295097568 +0000 UTC m=+1.382048583 container died 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 13:02:05 np0005535469 systemd[1]: libpod-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope: Consumed 1.175s CPU time.
Nov 25 13:02:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c-merged.mount: Deactivated successfully.
Nov 25 13:02:05 np0005535469 podman[466968]: 2025-11-25 18:02:05.391503433 +0000 UTC m=+1.478454438 container remove 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 13:02:05 np0005535469 systemd[1]: libpod-conmon-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope: Deactivated successfully.
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.03155224 +0000 UTC m=+0.035852313 container create 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 13:02:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:06 np0005535469 systemd[1]: Started libpod-conmon-81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c.scope.
Nov 25 13:02:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.016920543 +0000 UTC m=+0.021220636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.116149086 +0000 UTC m=+0.120449179 container init 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.124127873 +0000 UTC m=+0.128427946 container start 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.127855574 +0000 UTC m=+0.132155717 container attach 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:02:06 np0005535469 objective_bouman[467183]: 167 167
Nov 25 13:02:06 np0005535469 systemd[1]: libpod-81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c.scope: Deactivated successfully.
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.129556129 +0000 UTC m=+0.133856202 container died 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:02:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1f5894d33a7dc684ace6732b1170470b77f6ba2dab502da704ea0688dce05c61-merged.mount: Deactivated successfully.
Nov 25 13:02:06 np0005535469 podman[467166]: 2025-11-25 18:02:06.170739797 +0000 UTC m=+0.175039880 container remove 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:02:06 np0005535469 systemd[1]: libpod-conmon-81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c.scope: Deactivated successfully.
Nov 25 13:02:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4179: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:06 np0005535469 podman[467208]: 2025-11-25 18:02:06.408748835 +0000 UTC m=+0.061808938 container create 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 13:02:06 np0005535469 systemd[1]: Started libpod-conmon-0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628.scope.
Nov 25 13:02:06 np0005535469 podman[467208]: 2025-11-25 18:02:06.382873133 +0000 UTC m=+0.035933336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:02:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:02:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:06 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:06 np0005535469 podman[467208]: 2025-11-25 18:02:06.520492167 +0000 UTC m=+0.173552370 container init 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:02:06 np0005535469 podman[467208]: 2025-11-25 18:02:06.529707747 +0000 UTC m=+0.182767850 container start 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 13:02:06 np0005535469 podman[467208]: 2025-11-25 18:02:06.53532878 +0000 UTC m=+0.188388973 container attach 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 13:02:06 np0005535469 nova_compute[254092]: 2025-11-25 18:02:06.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]: {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:    "0": [
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:        {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "devices": [
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "/dev/loop3"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            ],
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_name": "ceph_lv0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_size": "21470642176",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "name": "ceph_lv0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "tags": {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cluster_name": "ceph",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.crush_device_class": "",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.encrypted": "0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osd_id": "0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.type": "block",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.vdo": "0"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            },
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "type": "block",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "vg_name": "ceph_vg0"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:        }
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:    ],
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:    "1": [
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:        {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "devices": [
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "/dev/loop4"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            ],
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_name": "ceph_lv1",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_size": "21470642176",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "name": "ceph_lv1",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "tags": {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cluster_name": "ceph",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.crush_device_class": "",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.encrypted": "0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osd_id": "1",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.type": "block",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.vdo": "0"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            },
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "type": "block",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "vg_name": "ceph_vg1"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:        }
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:    ],
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:    "2": [
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:        {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "devices": [
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "/dev/loop5"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            ],
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_name": "ceph_lv2",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_size": "21470642176",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "name": "ceph_lv2",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "tags": {
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.cluster_name": "ceph",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.crush_device_class": "",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.encrypted": "0",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osd_id": "2",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.type": "block",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:                "ceph.vdo": "0"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            },
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "type": "block",
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:            "vg_name": "ceph_vg2"
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:        }
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]:    ]
Nov 25 13:02:07 np0005535469 laughing_johnson[467225]: }
Nov 25 13:02:07 np0005535469 systemd[1]: libpod-0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628.scope: Deactivated successfully.
Nov 25 13:02:07 np0005535469 podman[467208]: 2025-11-25 18:02:07.373279457 +0000 UTC m=+1.026339560 container died 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 13:02:07 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae-merged.mount: Deactivated successfully.
Nov 25 13:02:07 np0005535469 podman[467208]: 2025-11-25 18:02:07.54957311 +0000 UTC m=+1.202633253 container remove 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:02:07 np0005535469 systemd[1]: libpod-conmon-0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628.scope: Deactivated successfully.
Nov 25 13:02:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4180: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.368365588 +0000 UTC m=+0.062495107 container create c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:02:08 np0005535469 systemd[1]: Started libpod-conmon-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope.
Nov 25 13:02:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.348740946 +0000 UTC m=+0.042870505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.45208761 +0000 UTC m=+0.146217149 container init c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.464074155 +0000 UTC m=+0.158203704 container start c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.468613818 +0000 UTC m=+0.162743357 container attach c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 13:02:08 np0005535469 systemd[1]: libpod-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope: Deactivated successfully.
Nov 25 13:02:08 np0005535469 great_moser[467404]: 167 167
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.4705532 +0000 UTC m=+0.164682739 container died c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:02:08 np0005535469 conmon[467404]: conmon c03a93ffbbb91c48bcae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope/container/memory.events
Nov 25 13:02:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ea033297e667fa507de95a807fab2176a8a2b116d5c50bc46528c57bbaf0f24b-merged.mount: Deactivated successfully.
Nov 25 13:02:08 np0005535469 podman[467388]: 2025-11-25 18:02:08.52321635 +0000 UTC m=+0.217345859 container remove c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:02:08 np0005535469 systemd[1]: libpod-conmon-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope: Deactivated successfully.
Nov 25 13:02:08 np0005535469 podman[467427]: 2025-11-25 18:02:08.744216086 +0000 UTC m=+0.058738394 container create 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:02:08 np0005535469 systemd[1]: Started libpod-conmon-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope.
Nov 25 13:02:08 np0005535469 podman[467427]: 2025-11-25 18:02:08.725106978 +0000 UTC m=+0.039629266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:02:08 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:02:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:08 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:02:08 np0005535469 podman[467427]: 2025-11-25 18:02:08.847880609 +0000 UTC m=+0.162402907 container init 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 13:02:08 np0005535469 podman[467427]: 2025-11-25 18:02:08.855879817 +0000 UTC m=+0.170402085 container start 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:02:08 np0005535469 podman[467427]: 2025-11-25 18:02:08.863982896 +0000 UTC m=+0.178505204 container attach 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:02:09 np0005535469 nova_compute[254092]: 2025-11-25 18:02:09.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]: {
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "osd_id": 1,
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "type": "bluestore"
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:    },
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "osd_id": 2,
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "type": "bluestore"
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:    },
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "osd_id": 0,
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:        "type": "bluestore"
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]:    }
Nov 25 13:02:09 np0005535469 unruffled_nash[467443]: }
Nov 25 13:02:10 np0005535469 systemd[1]: libpod-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope: Deactivated successfully.
Nov 25 13:02:10 np0005535469 podman[467427]: 2025-11-25 18:02:10.019943572 +0000 UTC m=+1.334465850 container died 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 13:02:10 np0005535469 systemd[1]: libpod-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope: Consumed 1.170s CPU time.
Nov 25 13:02:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d-merged.mount: Deactivated successfully.
Nov 25 13:02:10 np0005535469 podman[467427]: 2025-11-25 18:02:10.079845398 +0000 UTC m=+1.394367676 container remove 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 25 13:02:10 np0005535469 systemd[1]: libpod-conmon-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope: Deactivated successfully.
Nov 25 13:02:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:02:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:02:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:02:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 781a5a1b-9a50-4021-b937-c551150f0a7a does not exist
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e37e3d93-b9cd-4232-a47b-aac9e10190aa does not exist
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:02:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4181: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:02:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:02:11 np0005535469 nova_compute[254092]: 2025-11-25 18:02:11.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4182: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:02:13.708 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:02:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:02:13.708 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:02:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:02:13.708 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:02:14 np0005535469 nova_compute[254092]: 2025-11-25 18:02:14.231 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4183: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4184: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:16 np0005535469 nova_compute[254092]: 2025-11-25 18:02:16.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:17 np0005535469 podman[467539]: 2025-11-25 18:02:17.657846919 +0000 UTC m=+0.068117140 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:02:17 np0005535469 podman[467538]: 2025-11-25 18:02:17.670908433 +0000 UTC m=+0.091500183 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 25 13:02:17 np0005535469 podman[467540]: 2025-11-25 18:02:17.685284373 +0000 UTC m=+0.101730741 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 13:02:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4185: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:19 np0005535469 nova_compute[254092]: 2025-11-25 18:02:19.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4186: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:20 np0005535469 nova_compute[254092]: 2025-11-25 18:02:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:21 np0005535469 nova_compute[254092]: 2025-11-25 18:02:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:21 np0005535469 nova_compute[254092]: 2025-11-25 18:02:21.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4187: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:23 np0005535469 nova_compute[254092]: 2025-11-25 18:02:23.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:24 np0005535469 nova_compute[254092]: 2025-11-25 18:02:24.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4188: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:25 np0005535469 nova_compute[254092]: 2025-11-25 18:02:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4189: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:26 np0005535469 nova_compute[254092]: 2025-11-25 18:02:26.837 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4190: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.542 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.542 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:02:29 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:02:29 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247302969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:02:29 np0005535469 nova_compute[254092]: 2025-11-25 18:02:29.995 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.166 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.167 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.167 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.167 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.228 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.229 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:02:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4191: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:02:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150636084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.667 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.672 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.684 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.686 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:02:30 np0005535469 nova_compute[254092]: 2025-11-25 18:02:30.686 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:02:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:31 np0005535469 nova_compute[254092]: 2025-11-25 18:02:31.665 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:31 np0005535469 nova_compute[254092]: 2025-11-25 18:02:31.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4192: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:34 np0005535469 nova_compute[254092]: 2025-11-25 18:02:34.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4193: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:35 np0005535469 nova_compute[254092]: 2025-11-25 18:02:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:35 np0005535469 nova_compute[254092]: 2025-11-25 18:02:35.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:02:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4194: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:36 np0005535469 nova_compute[254092]: 2025-11-25 18:02:36.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4195: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:39 np0005535469 nova_compute[254092]: 2025-11-25 18:02:39.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4196: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:02:40
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:02:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:02:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:41 np0005535469 nova_compute[254092]: 2025-11-25 18:02:41.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:41 np0005535469 nova_compute[254092]: 2025-11-25 18:02:41.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4197: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:44 np0005535469 nova_compute[254092]: 2025-11-25 18:02:44.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4198: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:44 np0005535469 nova_compute[254092]: 2025-11-25 18:02:44.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:02:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4199: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:46 np0005535469 nova_compute[254092]: 2025-11-25 18:02:46.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4200: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:48 np0005535469 podman[467648]: 2025-11-25 18:02:48.672494537 +0000 UTC m=+0.080829294 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 13:02:48 np0005535469 podman[467647]: 2025-11-25 18:02:48.677807731 +0000 UTC m=+0.085384638 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 13:02:48 np0005535469 podman[467649]: 2025-11-25 18:02:48.715476073 +0000 UTC m=+0.113581272 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 13:02:49 np0005535469 nova_compute[254092]: 2025-11-25 18:02:49.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4201: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:51 np0005535469 nova_compute[254092]: 2025-11-25 18:02:51.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4202: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:02:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:02:54 np0005535469 nova_compute[254092]: 2025-11-25 18:02:54.317 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4203: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:02:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721422047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:02:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:02:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721422047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:02:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:02:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4204: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:56 np0005535469 nova_compute[254092]: 2025-11-25 18:02:56.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:02:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4205: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:02:59 np0005535469 nova_compute[254092]: 2025-11-25 18:02:59.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4206: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:01 np0005535469 nova_compute[254092]: 2025-11-25 18:03:01.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4207: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.735487) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782735518, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1368, "num_deletes": 251, "total_data_size": 2152011, "memory_usage": 2186096, "flush_reason": "Manual Compaction"}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782753166, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 2109738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85290, "largest_seqno": 86657, "table_properties": {"data_size": 2103241, "index_size": 3696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13400, "raw_average_key_size": 19, "raw_value_size": 2090326, "raw_average_value_size": 3096, "num_data_blocks": 166, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093641, "oldest_key_time": 1764093641, "file_creation_time": 1764093782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 17867 microseconds, and 10371 cpu microseconds.
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.753345) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 2109738 bytes OK
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.753440) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.754876) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.754899) EVENT_LOG_v1 {"time_micros": 1764093782754892, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.754922) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 2145928, prev total WAL file size 2145928, number of live WAL files 2.
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.756834) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(2060KB)], [203(10MB)]
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782756883, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 12629821, "oldest_snapshot_seqno": -1}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 9955 keys, 10907978 bytes, temperature: kUnknown
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782842918, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 10907978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10847048, "index_size": 34961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 263688, "raw_average_key_size": 26, "raw_value_size": 10674634, "raw_average_value_size": 1072, "num_data_blocks": 1338, "num_entries": 9955, "num_filter_entries": 9955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.843301) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 10907978 bytes
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.845236) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 126.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.2) OK, records in: 10469, records dropped: 514 output_compression: NoCompression
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.845266) EVENT_LOG_v1 {"time_micros": 1764093782845252, "job": 128, "event": "compaction_finished", "compaction_time_micros": 86125, "compaction_time_cpu_micros": 49666, "output_level": 6, "num_output_files": 1, "total_output_size": 10907978, "num_input_records": 10469, "num_output_records": 9955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782846286, "job": 128, "event": "table_file_deletion", "file_number": 205}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782849822, "job": 128, "event": "table_file_deletion", "file_number": 203}
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.756711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:03:02 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:03:04 np0005535469 nova_compute[254092]: 2025-11-25 18:03:04.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4208: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4209: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:06 np0005535469 nova_compute[254092]: 2025-11-25 18:03:06.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4210: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:09 np0005535469 nova_compute[254092]: 2025-11-25 18:03:09.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:03:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4211: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:03:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 651ca80a-56ef-4eba-844b-829f02288cc4 does not exist
Nov 25 13:03:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e89301fb-ad9a-4eaf-840f-b45b3fff0f49 does not exist
Nov 25 13:03:11 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ac5c2c7e-c51d-4639-9d5a-b8e46e5edd85 does not exist
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:03:11 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:03:11 np0005535469 podman[467980]: 2025-11-25 18:03:11.866789808 +0000 UTC m=+0.039886363 container create 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:03:11 np0005535469 nova_compute[254092]: 2025-11-25 18:03:11.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:11 np0005535469 systemd[1]: Started libpod-conmon-1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6.scope.
Nov 25 13:03:11 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:03:11 np0005535469 podman[467980]: 2025-11-25 18:03:11.848368219 +0000 UTC m=+0.021464764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:03:11 np0005535469 podman[467980]: 2025-11-25 18:03:11.953861062 +0000 UTC m=+0.126957607 container init 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:03:11 np0005535469 podman[467980]: 2025-11-25 18:03:11.960884882 +0000 UTC m=+0.133981417 container start 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 13:03:11 np0005535469 podman[467980]: 2025-11-25 18:03:11.96414369 +0000 UTC m=+0.137240245 container attach 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 13:03:11 np0005535469 bold_brahmagupta[467996]: 167 167
Nov 25 13:03:11 np0005535469 podman[467980]: 2025-11-25 18:03:11.968443157 +0000 UTC m=+0.141539692 container died 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:03:11 np0005535469 systemd[1]: libpod-1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6.scope: Deactivated successfully.
Nov 25 13:03:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-252093b1dd2f67f6d4a95ec2fdc32acf2234ff029248c57ad345f042c7741901-merged.mount: Deactivated successfully.
Nov 25 13:03:12 np0005535469 podman[467980]: 2025-11-25 18:03:12.006249313 +0000 UTC m=+0.179345838 container remove 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 13:03:12 np0005535469 systemd[1]: libpod-conmon-1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6.scope: Deactivated successfully.
Nov 25 13:03:12 np0005535469 podman[468022]: 2025-11-25 18:03:12.174847167 +0000 UTC m=+0.046026389 container create cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 13:03:12 np0005535469 systemd[1]: Started libpod-conmon-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope.
Nov 25 13:03:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:03:12 np0005535469 podman[468022]: 2025-11-25 18:03:12.156529211 +0000 UTC m=+0.027708423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:03:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:12 np0005535469 podman[468022]: 2025-11-25 18:03:12.272691232 +0000 UTC m=+0.143870444 container init cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:03:12 np0005535469 podman[468022]: 2025-11-25 18:03:12.281365018 +0000 UTC m=+0.152544220 container start cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 13:03:12 np0005535469 podman[468022]: 2025-11-25 18:03:12.28472496 +0000 UTC m=+0.155904182 container attach cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:03:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4212: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:13 np0005535469 gallant_kalam[468039]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:03:13 np0005535469 gallant_kalam[468039]: --> relative data size: 1.0
Nov 25 13:03:13 np0005535469 gallant_kalam[468039]: --> All data devices are unavailable
Nov 25 13:03:13 np0005535469 systemd[1]: libpod-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope: Deactivated successfully.
Nov 25 13:03:13 np0005535469 systemd[1]: libpod-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope: Consumed 1.029s CPU time.
Nov 25 13:03:13 np0005535469 podman[468022]: 2025-11-25 18:03:13.356395718 +0000 UTC m=+1.227574950 container died cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:03:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec-merged.mount: Deactivated successfully.
Nov 25 13:03:13 np0005535469 podman[468022]: 2025-11-25 18:03:13.438986339 +0000 UTC m=+1.310165581 container remove cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:03:13 np0005535469 systemd[1]: libpod-conmon-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope: Deactivated successfully.
Nov 25 13:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:03:13.709 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:03:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:03:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:03:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.303197499 +0000 UTC m=+0.062270021 container create a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 13:03:14 np0005535469 systemd[1]: Started libpod-conmon-a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542.scope.
Nov 25 13:03:14 np0005535469 nova_compute[254092]: 2025-11-25 18:03:14.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4213: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.375415998 +0000 UTC m=+0.134488520 container init a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.282329363 +0000 UTC m=+0.041401935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.382611893 +0000 UTC m=+0.141684435 container start a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.385992676 +0000 UTC m=+0.145065228 container attach a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:03:14 np0005535469 beautiful_ardinghelli[468238]: 167 167
Nov 25 13:03:14 np0005535469 systemd[1]: libpod-a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542.scope: Deactivated successfully.
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.389501311 +0000 UTC m=+0.148573843 container died a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:03:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dbe82724a27541c7e8cdfad59aa08c0129962a5069f97814df5553623323f0ad-merged.mount: Deactivated successfully.
Nov 25 13:03:14 np0005535469 podman[468221]: 2025-11-25 18:03:14.428441567 +0000 UTC m=+0.187514089 container remove a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 13:03:14 np0005535469 systemd[1]: libpod-conmon-a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542.scope: Deactivated successfully.
Nov 25 13:03:14 np0005535469 podman[468264]: 2025-11-25 18:03:14.603743143 +0000 UTC m=+0.040351015 container create f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 13:03:14 np0005535469 systemd[1]: Started libpod-conmon-f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35.scope.
Nov 25 13:03:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:03:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:14 np0005535469 podman[468264]: 2025-11-25 18:03:14.588087469 +0000 UTC m=+0.024695351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:03:14 np0005535469 podman[468264]: 2025-11-25 18:03:14.694700882 +0000 UTC m=+0.131308764 container init f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 13:03:14 np0005535469 podman[468264]: 2025-11-25 18:03:14.701027293 +0000 UTC m=+0.137635165 container start f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 13:03:14 np0005535469 podman[468264]: 2025-11-25 18:03:14.703616234 +0000 UTC m=+0.140224156 container attach f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:03:15 np0005535469 bold_shannon[468281]: {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:    "0": [
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:        {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "devices": [
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "/dev/loop3"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            ],
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_name": "ceph_lv0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_size": "21470642176",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "name": "ceph_lv0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "tags": {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cluster_name": "ceph",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.crush_device_class": "",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.encrypted": "0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osd_id": "0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.type": "block",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.vdo": "0"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            },
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "type": "block",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "vg_name": "ceph_vg0"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:        }
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:    ],
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:    "1": [
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:        {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "devices": [
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "/dev/loop4"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            ],
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_name": "ceph_lv1",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_size": "21470642176",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "name": "ceph_lv1",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "tags": {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cluster_name": "ceph",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.crush_device_class": "",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.encrypted": "0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osd_id": "1",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.type": "block",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.vdo": "0"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            },
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "type": "block",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "vg_name": "ceph_vg1"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:        }
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:    ],
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:    "2": [
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:        {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "devices": [
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "/dev/loop5"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            ],
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_name": "ceph_lv2",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_size": "21470642176",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "name": "ceph_lv2",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "tags": {
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.cluster_name": "ceph",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.crush_device_class": "",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.encrypted": "0",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osd_id": "2",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.type": "block",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:                "ceph.vdo": "0"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            },
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "type": "block",
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:            "vg_name": "ceph_vg2"
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:        }
Nov 25 13:03:15 np0005535469 bold_shannon[468281]:    ]
Nov 25 13:03:15 np0005535469 bold_shannon[468281]: }
Nov 25 13:03:15 np0005535469 systemd[1]: libpod-f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35.scope: Deactivated successfully.
Nov 25 13:03:15 np0005535469 podman[468264]: 2025-11-25 18:03:15.490856705 +0000 UTC m=+0.927464587 container died f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:03:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d-merged.mount: Deactivated successfully.
Nov 25 13:03:15 np0005535469 podman[468264]: 2025-11-25 18:03:15.560735341 +0000 UTC m=+0.997343223 container remove f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:03:15 np0005535469 systemd[1]: libpod-conmon-f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35.scope: Deactivated successfully.
Nov 25 13:03:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.285056395 +0000 UTC m=+0.055035184 container create 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 13:03:16 np0005535469 systemd[1]: Started libpod-conmon-5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db.scope.
Nov 25 13:03:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:03:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4214: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.260801767 +0000 UTC m=+0.030780596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.365909258 +0000 UTC m=+0.135888107 container init 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.377499304 +0000 UTC m=+0.147478093 container start 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.381592214 +0000 UTC m=+0.151571073 container attach 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 13:03:16 np0005535469 stoic_mahavira[468457]: 167 167
Nov 25 13:03:16 np0005535469 systemd[1]: libpod-5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db.scope: Deactivated successfully.
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.38287668 +0000 UTC m=+0.152855459 container died 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 13:03:16 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1e0aa0154967610bd61d527e30ae15f4c48efcf70768452a316925635f76ea3d-merged.mount: Deactivated successfully.
Nov 25 13:03:16 np0005535469 podman[468441]: 2025-11-25 18:03:16.424297823 +0000 UTC m=+0.194276592 container remove 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 13:03:16 np0005535469 systemd[1]: libpod-conmon-5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db.scope: Deactivated successfully.
Nov 25 13:03:16 np0005535469 podman[468480]: 2025-11-25 18:03:16.613296982 +0000 UTC m=+0.048767235 container create 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:03:16 np0005535469 systemd[1]: Started libpod-conmon-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope.
Nov 25 13:03:16 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:03:16 np0005535469 podman[468480]: 2025-11-25 18:03:16.588293863 +0000 UTC m=+0.023764226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:03:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:16 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:03:16 np0005535469 podman[468480]: 2025-11-25 18:03:16.699899172 +0000 UTC m=+0.135369475 container init 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:03:16 np0005535469 podman[468480]: 2025-11-25 18:03:16.707172478 +0000 UTC m=+0.142642741 container start 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:03:16 np0005535469 podman[468480]: 2025-11-25 18:03:16.710681624 +0000 UTC m=+0.146151887 container attach 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:03:16 np0005535469 nova_compute[254092]: 2025-11-25 18:03:16.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]: {
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "osd_id": 1,
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "type": "bluestore"
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:    },
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "osd_id": 2,
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "type": "bluestore"
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:    },
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "osd_id": 0,
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:        "type": "bluestore"
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]:    }
Nov 25 13:03:17 np0005535469 vigorous_ellis[468496]: }
Nov 25 13:03:17 np0005535469 systemd[1]: libpod-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope: Deactivated successfully.
Nov 25 13:03:17 np0005535469 systemd[1]: libpod-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope: Consumed 1.026s CPU time.
Nov 25 13:03:17 np0005535469 conmon[468496]: conmon 88d17dc24aa82937a3ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope/container/memory.events
Nov 25 13:03:17 np0005535469 podman[468480]: 2025-11-25 18:03:17.72585371 +0000 UTC m=+1.161323953 container died 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 13:03:17 np0005535469 systemd[1]: var-lib-containers-storage-overlay-41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6-merged.mount: Deactivated successfully.
Nov 25 13:03:17 np0005535469 podman[468480]: 2025-11-25 18:03:17.786881976 +0000 UTC m=+1.222352229 container remove 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 13:03:17 np0005535469 systemd[1]: libpod-conmon-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope: Deactivated successfully.
Nov 25 13:03:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:03:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:03:17 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:03:17 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:03:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1ec295e1-33cb-4d55-bca3-c92a46241b5d does not exist
Nov 25 13:03:17 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d5e2b15e-666e-4ff4-8b98-336365dcd4ed does not exist
Nov 25 13:03:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:03:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:03:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4215: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:19 np0005535469 nova_compute[254092]: 2025-11-25 18:03:19.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:19 np0005535469 podman[468593]: 2025-11-25 18:03:19.656042994 +0000 UTC m=+0.076794054 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:03:19 np0005535469 podman[468594]: 2025-11-25 18:03:19.657422281 +0000 UTC m=+0.067800450 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 13:03:19 np0005535469 podman[468595]: 2025-11-25 18:03:19.681344991 +0000 UTC m=+0.099731018 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 13:03:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4216: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:20 np0005535469 nova_compute[254092]: 2025-11-25 18:03:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:21 np0005535469 nova_compute[254092]: 2025-11-25 18:03:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:21 np0005535469 nova_compute[254092]: 2025-11-25 18:03:21.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4217: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:24 np0005535469 nova_compute[254092]: 2025-11-25 18:03:24.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4218: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:25 np0005535469 nova_compute[254092]: 2025-11-25 18:03:25.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4219: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:26 np0005535469 nova_compute[254092]: 2025-11-25 18:03:26.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:27 np0005535469 nova_compute[254092]: 2025-11-25 18:03:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4220: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:29 np0005535469 nova_compute[254092]: 2025-11-25 18:03:29.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4221: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.552 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.553 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.554 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:03:30 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:03:30 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197200820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:03:30 np0005535469 nova_compute[254092]: 2025-11-25 18:03:30.997 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:03:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.128 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.129 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.129 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.129 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.202 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.231 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:03:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:03:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884788986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.658 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.664 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.681 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.683 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.683 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:03:31 np0005535469 nova_compute[254092]: 2025-11-25 18:03:31.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4222: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:32 np0005535469 nova_compute[254092]: 2025-11-25 18:03:32.660 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:34 np0005535469 nova_compute[254092]: 2025-11-25 18:03:34.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4223: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4224: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:36 np0005535469 nova_compute[254092]: 2025-11-25 18:03:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:36 np0005535469 nova_compute[254092]: 2025-11-25 18:03:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:03:36 np0005535469 nova_compute[254092]: 2025-11-25 18:03:36.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4225: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:39 np0005535469 nova_compute[254092]: 2025-11-25 18:03:39.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:03:40
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'vms']
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4226: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:03:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:03:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:03:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:42 np0005535469 nova_compute[254092]: 2025-11-25 18:03:42.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4227: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:42 np0005535469 nova_compute[254092]: 2025-11-25 18:03:42.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:03:44 np0005535469 nova_compute[254092]: 2025-11-25 18:03:44.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4228: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4229: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:47 np0005535469 nova_compute[254092]: 2025-11-25 18:03:47.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4230: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:49 np0005535469 nova_compute[254092]: 2025-11-25 18:03:49.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:03:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7801.2 total, 600.0 interval#012Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.79 writes per sync, written: 0.18 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 220 writes, 330 keys, 220 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 220 writes, 110 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:03:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4231: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:50 np0005535469 podman[468700]: 2025-11-25 18:03:50.693257423 +0000 UTC m=+0.091197905 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 13:03:50 np0005535469 podman[468699]: 2025-11-25 18:03:50.69461422 +0000 UTC m=+0.100359694 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:03:50 np0005535469 podman[468701]: 2025-11-25 18:03:50.734109502 +0000 UTC m=+0.125820315 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 13:03:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:52 np0005535469 nova_compute[254092]: 2025-11-25 18:03:52.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4232: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:03:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:03:54 np0005535469 nova_compute[254092]: 2025-11-25 18:03:54.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4233: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:03:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531718264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:03:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:03:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531718264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:03:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:03:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4234: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:57 np0005535469 nova_compute[254092]: 2025-11-25 18:03:57.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:03:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4235: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:03:59 np0005535469 nova_compute[254092]: 2025-11-25 18:03:59.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4236: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:02 np0005535469 nova_compute[254092]: 2025-11-25 18:04:02.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4237: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4238: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:04 np0005535469 nova_compute[254092]: 2025-11-25 18:04:04.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4239: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:07 np0005535469 nova_compute[254092]: 2025-11-25 18:04:07.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:04:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7801.5 total, 600.0 interval#012Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 216 writes, 324 keys, 216 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 216 writes, 108 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:04:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4240: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:09 np0005535469 nova_compute[254092]: 2025-11-25 18:04:09.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:04:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4241: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:12 np0005535469 nova_compute[254092]: 2025-11-25 18:04:12.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4242: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:04:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:04:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:04:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:04:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:04:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:04:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4243: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:14 np0005535469 nova_compute[254092]: 2025-11-25 18:04:14.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4244: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:17 np0005535469 nova_compute[254092]: 2025-11-25 18:04:17.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4245: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:04:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a11f2b8f-0d81-4c28-9d17-245a5ee62d6c does not exist
Nov 25 13:04:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 921b8258-52fa-480d-b278-c17ba9aab9b3 does not exist
Nov 25 13:04:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 61eeedc0-7110-4414-a6de-76ba25fd6eac does not exist
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:04:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:04:19 np0005535469 nova_compute[254092]: 2025-11-25 18:04:19.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.566919575 +0000 UTC m=+0.057026878 container create d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 13:04:19 np0005535469 systemd[1]: Started libpod-conmon-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope.
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.543601723 +0000 UTC m=+0.033709006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:04:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.676470888 +0000 UTC m=+0.166578241 container init d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.6857274 +0000 UTC m=+0.175834703 container start d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.689991395 +0000 UTC m=+0.180098668 container attach d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 13:04:19 np0005535469 great_roentgen[469046]: 167 167
Nov 25 13:04:19 np0005535469 systemd[1]: libpod-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope: Deactivated successfully.
Nov 25 13:04:19 np0005535469 conmon[469046]: conmon d820853bea1fd4ed1ec9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope/container/memory.events
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.698477925 +0000 UTC m=+0.188585228 container died d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 13:04:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:04:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:04:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:04:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0a9a702558cef9d4da448bfc26aab5bb675ca5fd9ddeb3a99a026fb98d30a4c9-merged.mount: Deactivated successfully.
Nov 25 13:04:19 np0005535469 podman[469030]: 2025-11-25 18:04:19.752026368 +0000 UTC m=+0.242133651 container remove d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:04:19 np0005535469 systemd[1]: libpod-conmon-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope: Deactivated successfully.
Nov 25 13:04:19 np0005535469 podman[469070]: 2025-11-25 18:04:19.964793931 +0000 UTC m=+0.067247365 container create de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:04:20 np0005535469 systemd[1]: Started libpod-conmon-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope.
Nov 25 13:04:20 np0005535469 podman[469070]: 2025-11-25 18:04:19.942858036 +0000 UTC m=+0.045311510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:04:20 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:04:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:20 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:20 np0005535469 podman[469070]: 2025-11-25 18:04:20.086755171 +0000 UTC m=+0.189208615 container init de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:04:20 np0005535469 podman[469070]: 2025-11-25 18:04:20.097093161 +0000 UTC m=+0.199546585 container start de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:04:20 np0005535469 podman[469070]: 2025-11-25 18:04:20.100881294 +0000 UTC m=+0.203334758 container attach de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:04:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4246: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:20 np0005535469 nova_compute[254092]: 2025-11-25 18:04:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:21 np0005535469 practical_euler[469087]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:04:21 np0005535469 practical_euler[469087]: --> relative data size: 1.0
Nov 25 13:04:21 np0005535469 practical_euler[469087]: --> All data devices are unavailable
Nov 25 13:04:21 np0005535469 systemd[1]: libpod-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope: Deactivated successfully.
Nov 25 13:04:21 np0005535469 systemd[1]: libpod-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope: Consumed 1.196s CPU time.
Nov 25 13:04:21 np0005535469 podman[469116]: 2025-11-25 18:04:21.400799397 +0000 UTC m=+0.032996237 container died de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 13:04:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9-merged.mount: Deactivated successfully.
Nov 25 13:04:21 np0005535469 podman[469116]: 2025-11-25 18:04:21.789708989 +0000 UTC m=+0.421905799 container remove de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:04:21 np0005535469 systemd[1]: libpod-conmon-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope: Deactivated successfully.
Nov 25 13:04:21 np0005535469 podman[469124]: 2025-11-25 18:04:21.866755099 +0000 UTC m=+0.474228938 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:04:21 np0005535469 podman[469117]: 2025-11-25 18:04:21.90619919 +0000 UTC m=+0.519420925 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 13:04:22 np0005535469 podman[469127]: 2025-11-25 18:04:22.0064369 +0000 UTC m=+0.604145994 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 13:04:22 np0005535469 nova_compute[254092]: 2025-11-25 18:04:22.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4247: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:22 np0005535469 nova_compute[254092]: 2025-11-25 18:04:22.500 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.559403954 +0000 UTC m=+0.026772357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.680009585 +0000 UTC m=+0.147377958 container create 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:04:22 np0005535469 systemd[1]: Started libpod-conmon-49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50.scope.
Nov 25 13:04:22 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.818552246 +0000 UTC m=+0.285920629 container init 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.827764096 +0000 UTC m=+0.295132459 container start 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.831489666 +0000 UTC m=+0.298858029 container attach 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 13:04:22 np0005535469 hopeful_perlman[469357]: 167 167
Nov 25 13:04:22 np0005535469 systemd[1]: libpod-49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50.scope: Deactivated successfully.
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.836922914 +0000 UTC m=+0.304291277 container died 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 13:04:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-66ac467caedf2c76abe744e634a0df367b747d121c3d38cddf3f1670d3cdff87-merged.mount: Deactivated successfully.
Nov 25 13:04:22 np0005535469 podman[469340]: 2025-11-25 18:04:22.874826933 +0000 UTC m=+0.342195296 container remove 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:04:22 np0005535469 systemd[1]: libpod-conmon-49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50.scope: Deactivated successfully.
Nov 25 13:04:23 np0005535469 podman[469381]: 2025-11-25 18:04:23.090169436 +0000 UTC m=+0.060866963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:04:23 np0005535469 podman[469381]: 2025-11-25 18:04:23.280122371 +0000 UTC m=+0.250819848 container create aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:04:23 np0005535469 systemd[1]: Started libpod-conmon-aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af.scope.
Nov 25 13:04:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:04:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:23 np0005535469 podman[469381]: 2025-11-25 18:04:23.521912041 +0000 UTC m=+0.492609538 container init aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:04:23 np0005535469 podman[469381]: 2025-11-25 18:04:23.537080313 +0000 UTC m=+0.507777790 container start aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:04:23 np0005535469 podman[469381]: 2025-11-25 18:04:23.63980052 +0000 UTC m=+0.610498007 container attach aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]: {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:    "0": [
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:        {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "devices": [
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "/dev/loop3"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            ],
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_name": "ceph_lv0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_size": "21470642176",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "name": "ceph_lv0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "tags": {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cluster_name": "ceph",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.crush_device_class": "",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.encrypted": "0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osd_id": "0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.type": "block",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.vdo": "0"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            },
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "type": "block",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "vg_name": "ceph_vg0"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:        }
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:    ],
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:    "1": [
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:        {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "devices": [
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "/dev/loop4"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            ],
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_name": "ceph_lv1",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_size": "21470642176",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "name": "ceph_lv1",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "tags": {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cluster_name": "ceph",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.crush_device_class": "",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.encrypted": "0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osd_id": "1",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.type": "block",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.vdo": "0"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            },
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "type": "block",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "vg_name": "ceph_vg1"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:        }
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:    ],
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:    "2": [
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:        {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "devices": [
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "/dev/loop5"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            ],
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_name": "ceph_lv2",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_size": "21470642176",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "name": "ceph_lv2",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "tags": {
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.cluster_name": "ceph",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.crush_device_class": "",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.encrypted": "0",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osd_id": "2",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.type": "block",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:                "ceph.vdo": "0"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            },
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "type": "block",
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:            "vg_name": "ceph_vg2"
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:        }
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]:    ]
Nov 25 13:04:24 np0005535469 condescending_driscoll[469398]: }
Nov 25 13:04:24 np0005535469 systemd[1]: libpod-aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af.scope: Deactivated successfully.
Nov 25 13:04:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4248: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:24 np0005535469 podman[469407]: 2025-11-25 18:04:24.388454934 +0000 UTC m=+0.025786701 container died aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 13:04:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa-merged.mount: Deactivated successfully.
Nov 25 13:04:24 np0005535469 podman[469407]: 2025-11-25 18:04:24.44579365 +0000 UTC m=+0.083125347 container remove aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 13:04:24 np0005535469 systemd[1]: libpod-conmon-aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af.scope: Deactivated successfully.
Nov 25 13:04:24 np0005535469 nova_compute[254092]: 2025-11-25 18:04:24.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.233101623 +0000 UTC m=+0.058658102 container create 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:04:25 np0005535469 systemd[1]: Started libpod-conmon-4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c.scope.
Nov 25 13:04:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.215426033 +0000 UTC m=+0.040982502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.319199579 +0000 UTC m=+0.144756038 container init 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.329789266 +0000 UTC m=+0.155345695 container start 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.332615733 +0000 UTC m=+0.158172212 container attach 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:04:25 np0005535469 elastic_kare[469581]: 167 167
Nov 25 13:04:25 np0005535469 systemd[1]: libpod-4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c.scope: Deactivated successfully.
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.335987534 +0000 UTC m=+0.161544013 container died 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 13:04:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-55b17501fbf72ac47780c5b8474875682b22587f245f9959ea309b19f6d3de92-merged.mount: Deactivated successfully.
Nov 25 13:04:25 np0005535469 podman[469564]: 2025-11-25 18:04:25.379860515 +0000 UTC m=+0.205416954 container remove 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 13:04:25 np0005535469 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 13:04:25 np0005535469 systemd[1]: libpod-conmon-4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c.scope: Deactivated successfully.
Nov 25 13:04:25 np0005535469 podman[469605]: 2025-11-25 18:04:25.585144575 +0000 UTC m=+0.062155987 container create 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:04:25 np0005535469 systemd[1]: Started libpod-conmon-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope.
Nov 25 13:04:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:04:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:25 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:04:25 np0005535469 podman[469605]: 2025-11-25 18:04:25.564395492 +0000 UTC m=+0.041406904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:04:25 np0005535469 podman[469605]: 2025-11-25 18:04:25.673460412 +0000 UTC m=+0.150471924 container init 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:04:25 np0005535469 podman[469605]: 2025-11-25 18:04:25.681398427 +0000 UTC m=+0.158409879 container start 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:04:25 np0005535469 podman[469605]: 2025-11-25 18:04:25.685439907 +0000 UTC m=+0.162451319 container attach 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:04:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4249: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:26 np0005535469 clever_morse[469621]: {
Nov 25 13:04:26 np0005535469 clever_morse[469621]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "osd_id": 1,
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "type": "bluestore"
Nov 25 13:04:26 np0005535469 clever_morse[469621]:    },
Nov 25 13:04:26 np0005535469 clever_morse[469621]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "osd_id": 2,
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "type": "bluestore"
Nov 25 13:04:26 np0005535469 clever_morse[469621]:    },
Nov 25 13:04:26 np0005535469 clever_morse[469621]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "osd_id": 0,
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:04:26 np0005535469 clever_morse[469621]:        "type": "bluestore"
Nov 25 13:04:26 np0005535469 clever_morse[469621]:    }
Nov 25 13:04:26 np0005535469 clever_morse[469621]: }
Nov 25 13:04:26 np0005535469 systemd[1]: libpod-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope: Deactivated successfully.
Nov 25 13:04:26 np0005535469 systemd[1]: libpod-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope: Consumed 1.199s CPU time.
Nov 25 13:04:26 np0005535469 podman[469605]: 2025-11-25 18:04:26.877500393 +0000 UTC m=+1.354511815 container died 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 13:04:26 np0005535469 systemd[1]: var-lib-containers-storage-overlay-93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021-merged.mount: Deactivated successfully.
Nov 25 13:04:26 np0005535469 podman[469605]: 2025-11-25 18:04:26.934784087 +0000 UTC m=+1.411795489 container remove 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:04:26 np0005535469 systemd[1]: libpod-conmon-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope: Deactivated successfully.
Nov 25 13:04:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:04:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:04:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:04:26 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:04:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 30e845d9-3870-43f9-ab6c-5db9693adddf does not exist
Nov 25 13:04:26 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2cb5da74-21fc-46b0-a652-aa5ac53be2cd does not exist
Nov 25 13:04:27 np0005535469 nova_compute[254092]: 2025-11-25 18:04:27.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:27 np0005535469 nova_compute[254092]: 2025-11-25 18:04:27.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:27 np0005535469 nova_compute[254092]: 2025-11-25 18:04:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:27 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:04:27 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:04:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4250: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:29 np0005535469 nova_compute[254092]: 2025-11-25 18:04:29.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4251: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:30 np0005535469 nova_compute[254092]: 2025-11-25 18:04:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:30 np0005535469 nova_compute[254092]: 2025-11-25 18:04:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:04:30 np0005535469 nova_compute[254092]: 2025-11-25 18:04:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:04:30 np0005535469 nova_compute[254092]: 2025-11-25 18:04:30.521 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:04:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.540 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.541 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:04:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:04:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151221612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:04:31 np0005535469 nova_compute[254092]: 2025-11-25 18:04:31.994 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.139 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.140 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3575MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.140 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.140 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.203 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.203 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.220 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:04:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4252: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:04:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520834223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.635 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.641 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.659 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.660 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:04:32 np0005535469 nova_compute[254092]: 2025-11-25 18:04:32.661 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:04:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4253: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:34 np0005535469 nova_compute[254092]: 2025-11-25 18:04:34.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:34 np0005535469 nova_compute[254092]: 2025-11-25 18:04:34.661 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:35 np0005535469 nova_compute[254092]: 2025-11-25 18:04:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:35 np0005535469 nova_compute[254092]: 2025-11-25 18:04:35.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 13:04:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4254: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:37 np0005535469 nova_compute[254092]: 2025-11-25 18:04:37.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:37 np0005535469 nova_compute[254092]: 2025-11-25 18:04:37.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:37 np0005535469 nova_compute[254092]: 2025-11-25 18:04:37.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:04:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4255: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:39 np0005535469 nova_compute[254092]: 2025-11-25 18:04:39.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:04:40
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4256: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:04:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:04:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:42 np0005535469 nova_compute[254092]: 2025-11-25 18:04:42.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4257: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:43 np0005535469 nova_compute[254092]: 2025-11-25 18:04:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4258: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:44 np0005535469 nova_compute[254092]: 2025-11-25 18:04:44.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4259: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:46 np0005535469 nova_compute[254092]: 2025-11-25 18:04:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:47 np0005535469 nova_compute[254092]: 2025-11-25 18:04:47.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4260: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:49 np0005535469 nova_compute[254092]: 2025-11-25 18:04:49.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4261: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:52 np0005535469 nova_compute[254092]: 2025-11-25 18:04:52.230 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4262: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:04:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:04:52 np0005535469 podman[469759]: 2025-11-25 18:04:52.690924306 +0000 UTC m=+0.087163016 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 13:04:52 np0005535469 podman[469758]: 2025-11-25 18:04:52.705398968 +0000 UTC m=+0.102066550 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 13:04:52 np0005535469 podman[469760]: 2025-11-25 18:04:52.744317834 +0000 UTC m=+0.133758409 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 13:04:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4263: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:54 np0005535469 nova_compute[254092]: 2025-11-25 18:04:54.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:04:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342693728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:04:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:04:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342693728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:04:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:04:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4264: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:56 np0005535469 nova_compute[254092]: 2025-11-25 18:04:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:04:56 np0005535469 nova_compute[254092]: 2025-11-25 18:04:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 13:04:56 np0005535469 nova_compute[254092]: 2025-11-25 18:04:56.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 13:04:57 np0005535469 nova_compute[254092]: 2025-11-25 18:04:57.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:04:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4265: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:04:59 np0005535469 nova_compute[254092]: 2025-11-25 18:04:59.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4266: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:02 np0005535469 nova_compute[254092]: 2025-11-25 18:05:02.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4267: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:03 np0005535469 nova_compute[254092]: 2025-11-25 18:05:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4268: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:04 np0005535469 nova_compute[254092]: 2025-11-25 18:05:04.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4269: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:07 np0005535469 nova_compute[254092]: 2025-11-25 18:05:07.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4270: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:09 np0005535469 nova_compute[254092]: 2025-11-25 18:05:09.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:05:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4271: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:12 np0005535469 nova_compute[254092]: 2025-11-25 18:05:12.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4272: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:05:13.712 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:05:13.713 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:05:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:05:13.713 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:05:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4273: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:14 np0005535469 nova_compute[254092]: 2025-11-25 18:05:14.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4274: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:17 np0005535469 nova_compute[254092]: 2025-11-25 18:05:17.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4275: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:19 np0005535469 nova_compute[254092]: 2025-11-25 18:05:19.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4276: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:21 np0005535469 nova_compute[254092]: 2025-11-25 18:05:21.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:22 np0005535469 nova_compute[254092]: 2025-11-25 18:05:22.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4277: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:23 np0005535469 podman[469827]: 2025-11-25 18:05:23.676370238 +0000 UTC m=+0.081940624 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 13:05:23 np0005535469 podman[469826]: 2025-11-25 18:05:23.684343595 +0000 UTC m=+0.089916871 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 13:05:23 np0005535469 podman[469828]: 2025-11-25 18:05:23.710982687 +0000 UTC m=+0.114668332 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 13:05:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4278: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:24 np0005535469 nova_compute[254092]: 2025-11-25 18:05:24.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:24 np0005535469 nova_compute[254092]: 2025-11-25 18:05:24.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4279: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:27 np0005535469 nova_compute[254092]: 2025-11-25 18:05:27.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:05:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 48204fc0-ff82-4230-8d6a-588023046200 does not exist
Nov 25 13:05:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2e1a5f77-b5f1-4cee-869e-9ce2962f16b2 does not exist
Nov 25 13:05:28 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b84f05aa-26cb-430e-9c43-5b656b2b0817 does not exist
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:05:28 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:05:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4280: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:28 np0005535469 nova_compute[254092]: 2025-11-25 18:05:28.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:28 np0005535469 podman[470162]: 2025-11-25 18:05:28.74858245 +0000 UTC m=+0.023822578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:05:29 np0005535469 podman[470162]: 2025-11-25 18:05:29.003149637 +0000 UTC m=+0.278389735 container create 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 13:05:29 np0005535469 systemd[1]: Started libpod-conmon-633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9.scope.
Nov 25 13:05:29 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:05:29 np0005535469 nova_compute[254092]: 2025-11-25 18:05:29.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:29 np0005535469 podman[470162]: 2025-11-25 18:05:29.510165724 +0000 UTC m=+0.785405852 container init 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 13:05:29 np0005535469 podman[470162]: 2025-11-25 18:05:29.518407268 +0000 UTC m=+0.793647376 container start 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 13:05:29 np0005535469 quizzical_khorana[470178]: 167 167
Nov 25 13:05:29 np0005535469 systemd[1]: libpod-633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9.scope: Deactivated successfully.
Nov 25 13:05:29 np0005535469 podman[470162]: 2025-11-25 18:05:29.806124995 +0000 UTC m=+1.081365143 container attach 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:05:29 np0005535469 podman[470162]: 2025-11-25 18:05:29.807847932 +0000 UTC m=+1.083088060 container died 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 13:05:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:05:29 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:05:29 np0005535469 nova_compute[254092]: 2025-11-25 18:05:29.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3b1d3184f67005f6cad05e2c5780b12361da75c6983d1f4b0cb966d56b749fdd-merged.mount: Deactivated successfully.
Nov 25 13:05:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4281: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:30 np0005535469 podman[470162]: 2025-11-25 18:05:30.776920176 +0000 UTC m=+2.052160314 container remove 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:05:30 np0005535469 systemd[1]: libpod-conmon-633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9.scope: Deactivated successfully.
Nov 25 13:05:31 np0005535469 podman[470204]: 2025-11-25 18:05:30.99414669 +0000 UTC m=+0.031974199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:05:31 np0005535469 podman[470204]: 2025-11-25 18:05:31.122974685 +0000 UTC m=+0.160802144 container create 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:05:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:31 np0005535469 systemd[1]: Started libpod-conmon-08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe.scope.
Nov 25 13:05:31 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:05:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:31 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.559 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.560 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.560 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.560 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:05:31 np0005535469 nova_compute[254092]: 2025-11-25 18:05:31.561 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:05:31 np0005535469 podman[470204]: 2025-11-25 18:05:31.561295409 +0000 UTC m=+0.599122848 container init 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 13:05:31 np0005535469 podman[470204]: 2025-11-25 18:05:31.567861717 +0000 UTC m=+0.605689136 container start 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:05:31 np0005535469 podman[470204]: 2025-11-25 18:05:31.700307421 +0000 UTC m=+0.738134870 container attach 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:05:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:05:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472023277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.014 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.175 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.176 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.273 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4282: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:32 np0005535469 zen_brattain[470221]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:05:32 np0005535469 zen_brattain[470221]: --> relative data size: 1.0
Nov 25 13:05:32 np0005535469 zen_brattain[470221]: --> All data devices are unavailable
Nov 25 13:05:32 np0005535469 systemd[1]: libpod-08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe.scope: Deactivated successfully.
Nov 25 13:05:32 np0005535469 podman[470204]: 2025-11-25 18:05:32.607707362 +0000 UTC m=+1.645534781 container died 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 25 13:05:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db-merged.mount: Deactivated successfully.
Nov 25 13:05:32 np0005535469 podman[470204]: 2025-11-25 18:05:32.668027889 +0000 UTC m=+1.705855308 container remove 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 13:05:32 np0005535469 systemd[1]: libpod-conmon-08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe.scope: Deactivated successfully.
Nov 25 13:05:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:05:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1869174271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.739 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.745 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.757 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.759 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:05:32 np0005535469 nova_compute[254092]: 2025-11-25 18:05:32.760 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.269836429 +0000 UTC m=+0.043849251 container create 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 13:05:33 np0005535469 systemd[1]: Started libpod-conmon-5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10.scope.
Nov 25 13:05:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.248826358 +0000 UTC m=+0.022839280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.357805945 +0000 UTC m=+0.131818767 container init 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.364050224 +0000 UTC m=+0.138063046 container start 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.36719177 +0000 UTC m=+0.141204592 container attach 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 13:05:33 np0005535469 bold_clarke[470462]: 167 167
Nov 25 13:05:33 np0005535469 systemd[1]: libpod-5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10.scope: Deactivated successfully.
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.36902391 +0000 UTC m=+0.143036762 container died 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 13:05:33 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e48c688e9f74013a24ea66527c90e25c698d42cddb59da6faab0a4bfd7245060-merged.mount: Deactivated successfully.
Nov 25 13:05:33 np0005535469 podman[470446]: 2025-11-25 18:05:33.416501928 +0000 UTC m=+0.190514780 container remove 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 13:05:33 np0005535469 systemd[1]: libpod-conmon-5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10.scope: Deactivated successfully.
Nov 25 13:05:33 np0005535469 podman[470485]: 2025-11-25 18:05:33.571426311 +0000 UTC m=+0.043419648 container create dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 13:05:33 np0005535469 systemd[1]: Started libpod-conmon-dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2.scope.
Nov 25 13:05:33 np0005535469 podman[470485]: 2025-11-25 18:05:33.550192396 +0000 UTC m=+0.022185723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:05:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:05:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:33 np0005535469 podman[470485]: 2025-11-25 18:05:33.726110778 +0000 UTC m=+0.198104085 container init dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:05:33 np0005535469 podman[470485]: 2025-11-25 18:05:33.738177216 +0000 UTC m=+0.210170513 container start dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 13:05:33 np0005535469 podman[470485]: 2025-11-25 18:05:33.754277543 +0000 UTC m=+0.226270840 container attach dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 13:05:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4283: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:34 np0005535469 nice_ride[470502]: {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:    "0": [
Nov 25 13:05:34 np0005535469 nice_ride[470502]:        {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "devices": [
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "/dev/loop3"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            ],
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_name": "ceph_lv0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_size": "21470642176",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "name": "ceph_lv0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "tags": {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cluster_name": "ceph",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.crush_device_class": "",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.encrypted": "0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osd_id": "0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.type": "block",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.vdo": "0"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            },
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "type": "block",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "vg_name": "ceph_vg0"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:        }
Nov 25 13:05:34 np0005535469 nice_ride[470502]:    ],
Nov 25 13:05:34 np0005535469 nice_ride[470502]:    "1": [
Nov 25 13:05:34 np0005535469 nice_ride[470502]:        {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "devices": [
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "/dev/loop4"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            ],
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_name": "ceph_lv1",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_size": "21470642176",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "name": "ceph_lv1",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "tags": {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cluster_name": "ceph",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.crush_device_class": "",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.encrypted": "0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osd_id": "1",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.type": "block",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.vdo": "0"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            },
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "type": "block",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "vg_name": "ceph_vg1"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:        }
Nov 25 13:05:34 np0005535469 nice_ride[470502]:    ],
Nov 25 13:05:34 np0005535469 nice_ride[470502]:    "2": [
Nov 25 13:05:34 np0005535469 nice_ride[470502]:        {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "devices": [
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "/dev/loop5"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            ],
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_name": "ceph_lv2",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_size": "21470642176",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "name": "ceph_lv2",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "tags": {
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.cluster_name": "ceph",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.crush_device_class": "",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.encrypted": "0",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osd_id": "2",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.type": "block",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:                "ceph.vdo": "0"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            },
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "type": "block",
Nov 25 13:05:34 np0005535469 nice_ride[470502]:            "vg_name": "ceph_vg2"
Nov 25 13:05:34 np0005535469 nice_ride[470502]:        }
Nov 25 13:05:34 np0005535469 nice_ride[470502]:    ]
Nov 25 13:05:34 np0005535469 nice_ride[470502]: }
Nov 25 13:05:34 np0005535469 systemd[1]: libpod-dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2.scope: Deactivated successfully.
Nov 25 13:05:34 np0005535469 podman[470485]: 2025-11-25 18:05:34.543255621 +0000 UTC m=+1.015248998 container died dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 13:05:34 np0005535469 nova_compute[254092]: 2025-11-25 18:05:34.723 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 13:05:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0-merged.mount: Deactivated successfully.
Nov 25 13:05:34 np0005535469 nova_compute[254092]: 2025-11-25 18:05:34.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:05:34 np0005535469 podman[470485]: 2025-11-25 18:05:34.862071342 +0000 UTC m=+1.334064639 container remove dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:05:34 np0005535469 systemd[1]: libpod-conmon-dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2.scope: Deactivated successfully.
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.506143559 +0000 UTC m=+0.062979170 container create a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:05:35 np0005535469 systemd[1]: Started libpod-conmon-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope.
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.477718078 +0000 UTC m=+0.034553709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:05:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.606037839 +0000 UTC m=+0.162873540 container init a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.613404039 +0000 UTC m=+0.170239690 container start a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.617597303 +0000 UTC m=+0.174432944 container attach a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:05:35 np0005535469 elastic_chaum[470682]: 167 167
Nov 25 13:05:35 np0005535469 systemd[1]: libpod-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope: Deactivated successfully.
Nov 25 13:05:35 np0005535469 conmon[470682]: conmon a0ba8e955daba448130f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope/container/memory.events
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.622506926 +0000 UTC m=+0.179342537 container died a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:05:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7662d7b0b3797a57be45d01f30bdc3164ef69b4f09398d50295ed1d5466bf55b-merged.mount: Deactivated successfully.
Nov 25 13:05:35 np0005535469 podman[470666]: 2025-11-25 18:05:35.860099843 +0000 UTC m=+0.416935464 container remove a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:05:35 np0005535469 systemd[1]: libpod-conmon-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope: Deactivated successfully.
Nov 25 13:05:36 np0005535469 podman[470707]: 2025-11-25 18:05:36.025196553 +0000 UTC m=+0.047738616 container create 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:05:36 np0005535469 systemd[1]: Started libpod-conmon-2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec.scope.
Nov 25 13:05:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:05:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:36 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:05:36 np0005535469 podman[470707]: 2025-11-25 18:05:36.002168398 +0000 UTC m=+0.024710551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:05:36 np0005535469 podman[470707]: 2025-11-25 18:05:36.098008418 +0000 UTC m=+0.120550511 container init 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 13:05:36 np0005535469 podman[470707]: 2025-11-25 18:05:36.104732262 +0000 UTC m=+0.127274325 container start 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 13:05:36 np0005535469 podman[470707]: 2025-11-25 18:05:36.107678681 +0000 UTC m=+0.130220774 container attach 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:05:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4284: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]: {
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "osd_id": 1,
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "type": "bluestore"
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:    },
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "osd_id": 2,
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "type": "bluestore"
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:    },
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "osd_id": 0,
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:        "type": "bluestore"
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]:    }
Nov 25 13:05:37 np0005535469 pedantic_archimedes[470724]: }
Nov 25 13:05:37 np0005535469 systemd[1]: libpod-2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec.scope: Deactivated successfully.
Nov 25 13:05:37 np0005535469 podman[470707]: 2025-11-25 18:05:37.09827965 +0000 UTC m=+1.120821713 container died 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:05:37 np0005535469 nova_compute[254092]: 2025-11-25 18:05:37.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c-merged.mount: Deactivated successfully.
Nov 25 13:05:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4285: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:38 np0005535469 nova_compute[254092]: 2025-11-25 18:05:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:38 np0005535469 nova_compute[254092]: 2025-11-25 18:05:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:05:38 np0005535469 podman[470707]: 2025-11-25 18:05:38.605537229 +0000 UTC m=+2.628079292 container remove 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:05:38 np0005535469 systemd[1]: libpod-conmon-2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec.scope: Deactivated successfully.
Nov 25 13:05:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:05:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:05:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:05:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:05:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3d8530e5-ee00-4604-bd7d-165ef41befb0 does not exist
Nov 25 13:05:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7686b04a-10ec-4a8b-bcc5-bdf131cdef65 does not exist
Nov 25 13:05:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:05:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:05:39 np0005535469 nova_compute[254092]: 2025-11-25 18:05:39.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:05:40
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta']
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4286: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:05:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:05:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:42 np0005535469 nova_compute[254092]: 2025-11-25 18:05:42.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4287: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:43 np0005535469 nova_compute[254092]: 2025-11-25 18:05:43.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:05:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4288: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:44 np0005535469 nova_compute[254092]: 2025-11-25 18:05:44.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4289: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:47 np0005535469 nova_compute[254092]: 2025-11-25 18:05:47.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4290: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:49 np0005535469 nova_compute[254092]: 2025-11-25 18:05:49.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4291: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:52 np0005535469 nova_compute[254092]: 2025-11-25 18:05:52.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4292: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:05:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:05:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4293: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:54 np0005535469 podman[470822]: 2025-11-25 18:05:54.649687352 +0000 UTC m=+0.063785971 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 13:05:54 np0005535469 podman[470821]: 2025-11-25 18:05:54.651361448 +0000 UTC m=+0.068307014 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:05:54 np0005535469 podman[470823]: 2025-11-25 18:05:54.686341048 +0000 UTC m=+0.091970127 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 13:05:54 np0005535469 nova_compute[254092]: 2025-11-25 18:05:54.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:05:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2034984102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:05:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:05:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2034984102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:05:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:05:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4294: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:57 np0005535469 nova_compute[254092]: 2025-11-25 18:05:57.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:05:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4295: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:05:59 np0005535469 nova_compute[254092]: 2025-11-25 18:05:59.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4296: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:02 np0005535469 nova_compute[254092]: 2025-11-25 18:06:02.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4297: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4298: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:04 np0005535469 nova_compute[254092]: 2025-11-25 18:06:04.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:04 np0005535469 nova_compute[254092]: 2025-11-25 18:06:04.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4299: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 13:06:07 np0005535469 nova_compute[254092]: 2025-11-25 18:06:07.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4300: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 13:06:09 np0005535469 nova_compute[254092]: 2025-11-25 18:06:09.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:06:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4301: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 13:06:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:12 np0005535469 nova_compute[254092]: 2025-11-25 18:06:12.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4302: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 13:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:06:13.713 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:06:13.714 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:06:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:06:13.714 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:06:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4303: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 13:06:14 np0005535469 nova_compute[254092]: 2025-11-25 18:06:14.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4304: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 13:06:17 np0005535469 nova_compute[254092]: 2025-11-25 18:06:17.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4305: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 25 13:06:19 np0005535469 nova_compute[254092]: 2025-11-25 18:06:19.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4306: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 25 13:06:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4307: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 13:06:22 np0005535469 nova_compute[254092]: 2025-11-25 18:06:22.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:23 np0005535469 nova_compute[254092]: 2025-11-25 18:06:23.516 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4308: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:24 np0005535469 nova_compute[254092]: 2025-11-25 18:06:24.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:24 np0005535469 nova_compute[254092]: 2025-11-25 18:06:24.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:25 np0005535469 podman[470887]: 2025-11-25 18:06:25.673546461 +0000 UTC m=+0.081941394 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 13:06:25 np0005535469 podman[470886]: 2025-11-25 18:06:25.680003146 +0000 UTC m=+0.095085921 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 13:06:25 np0005535469 podman[470888]: 2025-11-25 18:06:25.69006469 +0000 UTC m=+0.093957751 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 13:06:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4309: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:27 np0005535469 nova_compute[254092]: 2025-11-25 18:06:27.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4310: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:29 np0005535469 nova_compute[254092]: 2025-11-25 18:06:29.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4311: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:30 np0005535469 nova_compute[254092]: 2025-11-25 18:06:30.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:31 np0005535469 nova_compute[254092]: 2025-11-25 18:06:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4312: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:32 np0005535469 nova_compute[254092]: 2025-11-25 18:06:32.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:06:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:06:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937580322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:06:33 np0005535469 nova_compute[254092]: 2025-11-25 18:06:33.966 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.148 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.149 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:06:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4313: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.439 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.525 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.638 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.638 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.652 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.669 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.689 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:06:34 np0005535469 nova_compute[254092]: 2025-11-25 18:06:34.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:06:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295506503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:06:35 np0005535469 nova_compute[254092]: 2025-11-25 18:06:35.162 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:06:35 np0005535469 nova_compute[254092]: 2025-11-25 18:06:35.167 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:06:35 np0005535469 nova_compute[254092]: 2025-11-25 18:06:35.181 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:06:35 np0005535469 nova_compute[254092]: 2025-11-25 18:06:35.183 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:06:35 np0005535469 nova_compute[254092]: 2025-11-25 18:06:35.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:06:36 np0005535469 nova_compute[254092]: 2025-11-25 18:06:36.165 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4314: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:37 np0005535469 nova_compute[254092]: 2025-11-25 18:06:37.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4315: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:38 np0005535469 nova_compute[254092]: 2025-11-25 18:06:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:38 np0005535469 nova_compute[254092]: 2025-11-25 18:06:38.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:06:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f454e515-bd6f-4b07-b53c-0f04ae3a2697 does not exist
Nov 25 13:06:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b6a12a9c-7783-41c3-b6d1-d6d03497f321 does not exist
Nov 25 13:06:39 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 7e160866-51b7-42cc-bd12-929a20f91bcc does not exist
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:06:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:06:39 np0005535469 nova_compute[254092]: 2025-11-25 18:06:39.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.223430581 +0000 UTC m=+0.021534246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:06:40
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.409808517 +0000 UTC m=+0.207912192 container create c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:06:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:06:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:06:40 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4316: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:40 np0005535469 systemd[1]: Started libpod-conmon-c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5.scope.
Nov 25 13:06:40 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.536426493 +0000 UTC m=+0.334530158 container init c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.544500572 +0000 UTC m=+0.342604217 container start c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:06:40 np0005535469 sweet_herschel[471281]: 167 167
Nov 25 13:06:40 np0005535469 systemd[1]: libpod-c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5.scope: Deactivated successfully.
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.574367863 +0000 UTC m=+0.372471538 container attach c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.575837513 +0000 UTC m=+0.373941168 container died c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:06:40 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f28ee7151e7418d3f8eabe72e35873505166ffd1c910326ad7504a5a0abd30ae-merged.mount: Deactivated successfully.
Nov 25 13:06:40 np0005535469 podman[471265]: 2025-11-25 18:06:40.818860217 +0000 UTC m=+0.616963862 container remove c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:06:40 np0005535469 systemd[1]: libpod-conmon-c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5.scope: Deactivated successfully.
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:06:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:06:41 np0005535469 podman[471303]: 2025-11-25 18:06:41.027691963 +0000 UTC m=+0.102736029 container create 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 13:06:41 np0005535469 podman[471303]: 2025-11-25 18:06:40.947448636 +0000 UTC m=+0.022492732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:06:41 np0005535469 systemd[1]: Started libpod-conmon-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope.
Nov 25 13:06:41 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:06:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:41 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:41 np0005535469 podman[471303]: 2025-11-25 18:06:41.220320731 +0000 UTC m=+0.295364817 container init 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 13:06:41 np0005535469 podman[471303]: 2025-11-25 18:06:41.226314563 +0000 UTC m=+0.301358629 container start 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:06:41 np0005535469 podman[471303]: 2025-11-25 18:06:41.250923341 +0000 UTC m=+0.325967427 container attach 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:06:42 np0005535469 pedantic_bhaskara[471320]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:06:42 np0005535469 pedantic_bhaskara[471320]: --> relative data size: 1.0
Nov 25 13:06:42 np0005535469 pedantic_bhaskara[471320]: --> All data devices are unavailable
Nov 25 13:06:42 np0005535469 systemd[1]: libpod-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope: Deactivated successfully.
Nov 25 13:06:42 np0005535469 systemd[1]: libpod-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope: Consumed 1.006s CPU time.
Nov 25 13:06:42 np0005535469 podman[471303]: 2025-11-25 18:06:42.290997293 +0000 UTC m=+1.366041369 container died 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 13:06:42 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8-merged.mount: Deactivated successfully.
Nov 25 13:06:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4317: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:42 np0005535469 nova_compute[254092]: 2025-11-25 18:06:42.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:42 np0005535469 podman[471303]: 2025-11-25 18:06:42.759907646 +0000 UTC m=+1.834951712 container remove 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:06:42 np0005535469 systemd[1]: libpod-conmon-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope: Deactivated successfully.
Nov 25 13:06:43 np0005535469 podman[471502]: 2025-11-25 18:06:43.528075928 +0000 UTC m=+0.106240644 container create 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:06:43 np0005535469 podman[471502]: 2025-11-25 18:06:43.460148435 +0000 UTC m=+0.038313191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:06:43 np0005535469 systemd[1]: Started libpod-conmon-01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c.scope.
Nov 25 13:06:43 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:06:43 np0005535469 podman[471502]: 2025-11-25 18:06:43.765340645 +0000 UTC m=+0.343505361 container init 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 13:06:43 np0005535469 podman[471502]: 2025-11-25 18:06:43.780708403 +0000 UTC m=+0.358873099 container start 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 13:06:43 np0005535469 podman[471502]: 2025-11-25 18:06:43.788801642 +0000 UTC m=+0.366966348 container attach 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 13:06:43 np0005535469 thirsty_shaw[471519]: 167 167
Nov 25 13:06:43 np0005535469 systemd[1]: libpod-01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c.scope: Deactivated successfully.
Nov 25 13:06:43 np0005535469 podman[471502]: 2025-11-25 18:06:43.79093257 +0000 UTC m=+0.369097306 container died 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:06:43 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1a2d84a715a990e0b0a073408daaf9893970e086097a03e285a79b79bb7a6179-merged.mount: Deactivated successfully.
Nov 25 13:06:44 np0005535469 podman[471502]: 2025-11-25 18:06:44.019624066 +0000 UTC m=+0.597788772 container remove 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 13:06:44 np0005535469 systemd[1]: libpod-conmon-01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c.scope: Deactivated successfully.
Nov 25 13:06:44 np0005535469 podman[471545]: 2025-11-25 18:06:44.224421523 +0000 UTC m=+0.061234803 container create 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 13:06:44 np0005535469 systemd[1]: Started libpod-conmon-2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1.scope.
Nov 25 13:06:44 np0005535469 podman[471545]: 2025-11-25 18:06:44.189168346 +0000 UTC m=+0.025981646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:06:44 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:06:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:44 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:44 np0005535469 podman[471545]: 2025-11-25 18:06:44.330966953 +0000 UTC m=+0.167780253 container init 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 13:06:44 np0005535469 podman[471545]: 2025-11-25 18:06:44.33969861 +0000 UTC m=+0.176511860 container start 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 13:06:44 np0005535469 podman[471545]: 2025-11-25 18:06:44.426228878 +0000 UTC m=+0.263042148 container attach 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 13:06:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4318: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:44 np0005535469 nova_compute[254092]: 2025-11-25 18:06:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:44 np0005535469 nova_compute[254092]: 2025-11-25 18:06:44.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]: {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:    "0": [
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:        {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "devices": [
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "/dev/loop3"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            ],
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_name": "ceph_lv0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_size": "21470642176",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "name": "ceph_lv0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "tags": {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cluster_name": "ceph",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.crush_device_class": "",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.encrypted": "0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osd_id": "0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.type": "block",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.vdo": "0"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            },
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "type": "block",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "vg_name": "ceph_vg0"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:        }
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:    ],
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:    "1": [
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:        {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "devices": [
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "/dev/loop4"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            ],
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_name": "ceph_lv1",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_size": "21470642176",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "name": "ceph_lv1",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "tags": {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cluster_name": "ceph",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.crush_device_class": "",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.encrypted": "0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osd_id": "1",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.type": "block",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.vdo": "0"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            },
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "type": "block",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "vg_name": "ceph_vg1"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:        }
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:    ],
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:    "2": [
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:        {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "devices": [
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "/dev/loop5"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            ],
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_name": "ceph_lv2",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_size": "21470642176",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "name": "ceph_lv2",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "tags": {
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.cluster_name": "ceph",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.crush_device_class": "",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.encrypted": "0",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osd_id": "2",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.type": "block",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:                "ceph.vdo": "0"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            },
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "type": "block",
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:            "vg_name": "ceph_vg2"
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:        }
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]:    ]
Nov 25 13:06:45 np0005535469 distracted_heisenberg[471562]: }
Nov 25 13:06:45 np0005535469 systemd[1]: libpod-2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1.scope: Deactivated successfully.
Nov 25 13:06:45 np0005535469 podman[471545]: 2025-11-25 18:06:45.12614111 +0000 UTC m=+0.962954360 container died 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:06:45 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e-merged.mount: Deactivated successfully.
Nov 25 13:06:45 np0005535469 podman[471545]: 2025-11-25 18:06:45.697422011 +0000 UTC m=+1.534235261 container remove 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 13:06:45 np0005535469 systemd[1]: libpod-conmon-2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1.scope: Deactivated successfully.
Nov 25 13:06:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4319: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:46 np0005535469 podman[471724]: 2025-11-25 18:06:46.453367933 +0000 UTC m=+0.120608584 container create 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 13:06:46 np0005535469 podman[471724]: 2025-11-25 18:06:46.362347453 +0000 UTC m=+0.029588134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:06:46 np0005535469 nova_compute[254092]: 2025-11-25 18:06:46.493 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:06:46 np0005535469 systemd[1]: Started libpod-conmon-244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43.scope.
Nov 25 13:06:46 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:06:46 np0005535469 podman[471724]: 2025-11-25 18:06:46.759010695 +0000 UTC m=+0.426251366 container init 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:06:46 np0005535469 podman[471724]: 2025-11-25 18:06:46.767015883 +0000 UTC m=+0.434256534 container start 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:06:46 np0005535469 zealous_kepler[471741]: 167 167
Nov 25 13:06:46 np0005535469 systemd[1]: libpod-244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43.scope: Deactivated successfully.
Nov 25 13:06:46 np0005535469 podman[471724]: 2025-11-25 18:06:46.877258454 +0000 UTC m=+0.544499135 container attach 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:06:46 np0005535469 podman[471724]: 2025-11-25 18:06:46.878034665 +0000 UTC m=+0.545275326 container died 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 13:06:46 np0005535469 systemd[1]: var-lib-containers-storage-overlay-63d5a71ef4aea83e3fab189396a4383b485f86e0224a0a30cd9a4dcd6871dace-merged.mount: Deactivated successfully.
Nov 25 13:06:47 np0005535469 podman[471724]: 2025-11-25 18:06:47.09528479 +0000 UTC m=+0.762525441 container remove 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:06:47 np0005535469 systemd[1]: libpod-conmon-244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43.scope: Deactivated successfully.
Nov 25 13:06:47 np0005535469 podman[471766]: 2025-11-25 18:06:47.277818153 +0000 UTC m=+0.052292300 container create d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 13:06:47 np0005535469 systemd[1]: Started libpod-conmon-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope.
Nov 25 13:06:47 np0005535469 podman[471766]: 2025-11-25 18:06:47.25597478 +0000 UTC m=+0.030448937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:06:47 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:06:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:47 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:06:47 np0005535469 podman[471766]: 2025-11-25 18:06:47.423406563 +0000 UTC m=+0.197880720 container init d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:06:47 np0005535469 podman[471766]: 2025-11-25 18:06:47.430075854 +0000 UTC m=+0.204549991 container start d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 13:06:47 np0005535469 podman[471766]: 2025-11-25 18:06:47.452984305 +0000 UTC m=+0.227458442 container attach d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 13:06:47 np0005535469 nova_compute[254092]: 2025-11-25 18:06:47.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4320: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]: {
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "osd_id": 1,
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "type": "bluestore"
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:    },
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "osd_id": 2,
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "type": "bluestore"
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:    },
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "osd_id": 0,
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:        "type": "bluestore"
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]:    }
Nov 25 13:06:48 np0005535469 elastic_goldwasser[471782]: }
Nov 25 13:06:48 np0005535469 systemd[1]: libpod-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope: Deactivated successfully.
Nov 25 13:06:48 np0005535469 systemd[1]: libpod-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope: Consumed 1.077s CPU time.
Nov 25 13:06:48 np0005535469 conmon[471782]: conmon d8069a693005846f4ccb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope/container/memory.events
Nov 25 13:06:48 np0005535469 podman[471766]: 2025-11-25 18:06:48.506113501 +0000 UTC m=+1.280587638 container died d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 13:06:48 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705-merged.mount: Deactivated successfully.
Nov 25 13:06:49 np0005535469 podman[471766]: 2025-11-25 18:06:49.229503761 +0000 UTC m=+2.003977898 container remove d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:06:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:06:49 np0005535469 systemd[1]: libpod-conmon-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope: Deactivated successfully.
Nov 25 13:06:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:06:49 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:06:49 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:06:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e765ea6c-41af-44eb-af27-35efc63dff60 does not exist
Nov 25 13:06:49 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 62e94597-6654-44c6-bd80-af1b88c6e18a does not exist
Nov 25 13:06:49 np0005535469 nova_compute[254092]: 2025-11-25 18:06:49.902 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4321: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:06:50 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:06:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4322: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:06:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:06:52 np0005535469 nova_compute[254092]: 2025-11-25 18:06:52.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4323: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:54 np0005535469 nova_compute[254092]: 2025-11-25 18:06:54.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:06:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972582466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:06:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:06:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972582466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:06:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:06:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4324: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:56 np0005535469 podman[471877]: 2025-11-25 18:06:56.677498426 +0000 UTC m=+0.096242592 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:06:56 np0005535469 podman[471879]: 2025-11-25 18:06:56.683852519 +0000 UTC m=+0.101092215 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 13:06:56 np0005535469 podman[471878]: 2025-11-25 18:06:56.686362446 +0000 UTC m=+0.097423394 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.088846) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017088959, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3495026, "memory_usage": 3556944, "flush_reason": "Manual Compaction"}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017127893, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 3429161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86658, "largest_seqno": 88713, "table_properties": {"data_size": 3419654, "index_size": 6064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18742, "raw_average_key_size": 20, "raw_value_size": 3400926, "raw_average_value_size": 3649, "num_data_blocks": 269, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093783, "oldest_key_time": 1764093783, "file_creation_time": 1764094017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 39231 microseconds, and 9222 cpu microseconds.
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.128091) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 3429161 bytes OK
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.128183) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.138256) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.138285) EVENT_LOG_v1 {"time_micros": 1764094017138276, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.138319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 3486407, prev total WAL file size 3486407, number of live WAL files 2.
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.140717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(3348KB)], [206(10MB)]
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017140766, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 14337139, "oldest_snapshot_seqno": -1}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 10373 keys, 12558627 bytes, temperature: kUnknown
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017389737, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 12558627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12493204, "index_size": 38366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273003, "raw_average_key_size": 26, "raw_value_size": 12311777, "raw_average_value_size": 1186, "num_data_blocks": 1479, "num_entries": 10373, "num_filter_entries": 10373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.390076) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12558627 bytes
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.447261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.6 rd, 50.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.4 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.8) write-amplify(3.7) OK, records in: 10887, records dropped: 514 output_compression: NoCompression
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.447296) EVENT_LOG_v1 {"time_micros": 1764094017447284, "job": 130, "event": "compaction_finished", "compaction_time_micros": 249076, "compaction_time_cpu_micros": 43180, "output_level": 6, "num_output_files": 1, "total_output_size": 12558627, "num_input_records": 10887, "num_output_records": 10373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017448126, "job": 130, "event": "table_file_deletion", "file_number": 208}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017450509, "job": 130, "event": "table_file_deletion", "file_number": 206}
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.140554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:06:57 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:06:57 np0005535469 nova_compute[254092]: 2025-11-25 18:06:57.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:06:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4325: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:06:59 np0005535469 nova_compute[254092]: 2025-11-25 18:06:59.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4326: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4327: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:02 np0005535469 nova_compute[254092]: 2025-11-25 18:07:02.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4328: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:04 np0005535469 nova_compute[254092]: 2025-11-25 18:07:04.911 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4329: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:07 np0005535469 nova_compute[254092]: 2025-11-25 18:07:07.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4330: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:09 np0005535469 nova_compute[254092]: 2025-11-25 18:07:09.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:07:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4331: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4332: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:12 np0005535469 nova_compute[254092]: 2025-11-25 18:07:12.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:07:13.715 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:07:13.715 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:07:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:07:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:07:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4333: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:14 np0005535469 nova_compute[254092]: 2025-11-25 18:07:14.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4334: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:17 np0005535469 nova_compute[254092]: 2025-11-25 18:07:17.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4335: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:19 np0005535469 nova_compute[254092]: 2025-11-25 18:07:19.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4336: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.188548) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041188625, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 423, "num_deletes": 250, "total_data_size": 346672, "memory_usage": 353968, "flush_reason": "Manual Compaction"}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041231226, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 264495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88714, "largest_seqno": 89136, "table_properties": {"data_size": 262105, "index_size": 489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6307, "raw_average_key_size": 20, "raw_value_size": 257411, "raw_average_value_size": 825, "num_data_blocks": 22, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094018, "oldest_key_time": 1764094018, "file_creation_time": 1764094041, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 42813 microseconds, and 3130 cpu microseconds.
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.231321) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 264495 bytes OK
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.231398) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300326) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300372) EVENT_LOG_v1 {"time_micros": 1764094041300362, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 344061, prev total WAL file size 344061, number of live WAL files 2.
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373631' seq:72057594037927935, type:22 .. '6D6772737461740034303132' seq:0, type:0; will stop at (end)
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(258KB)], [209(11MB)]
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041300964, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 12823122, "oldest_snapshot_seqno": -1}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 10184 keys, 9614052 bytes, temperature: kUnknown
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041520063, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 9614052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9554506, "index_size": 33001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 269245, "raw_average_key_size": 26, "raw_value_size": 9380898, "raw_average_value_size": 921, "num_data_blocks": 1255, "num_entries": 10184, "num_filter_entries": 10184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094041, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.520413) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 9614052 bytes
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.525834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.5 rd, 43.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.0 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(84.8) write-amplify(36.3) OK, records in: 10685, records dropped: 501 output_compression: NoCompression
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.525872) EVENT_LOG_v1 {"time_micros": 1764094041525854, "job": 132, "event": "compaction_finished", "compaction_time_micros": 219201, "compaction_time_cpu_micros": 24900, "output_level": 6, "num_output_files": 1, "total_output_size": 9614052, "num_input_records": 10685, "num_output_records": 10184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041526162, "job": 132, "event": "table_file_deletion", "file_number": 211}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041530831, "job": 132, "event": "table_file_deletion", "file_number": 209}
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:07:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:07:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4337: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:22 np0005535469 nova_compute[254092]: 2025-11-25 18:07:22.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4338: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:24 np0005535469 nova_compute[254092]: 2025-11-25 18:07:24.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:24 np0005535469 nova_compute[254092]: 2025-11-25 18:07:24.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:25 np0005535469 nova_compute[254092]: 2025-11-25 18:07:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4339: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:27 np0005535469 nova_compute[254092]: 2025-11-25 18:07:27.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:27 np0005535469 podman[471940]: 2025-11-25 18:07:27.647920275 +0000 UTC m=+0.062563319 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 13:07:27 np0005535469 podman[471941]: 2025-11-25 18:07:27.652694594 +0000 UTC m=+0.059392923 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 13:07:27 np0005535469 podman[471942]: 2025-11-25 18:07:27.685065633 +0000 UTC m=+0.089443268 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 13:07:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4340: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:29 np0005535469 nova_compute[254092]: 2025-11-25 18:07:29.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4341: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:31 np0005535469 nova_compute[254092]: 2025-11-25 18:07:31.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4342: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:32 np0005535469 nova_compute[254092]: 2025-11-25 18:07:32.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:07:33 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:07:33 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501295649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:07:33 np0005535469 nova_compute[254092]: 2025-11-25 18:07:33.960 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.109 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.111 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.111 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.111 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.189 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.190 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.206 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:07:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4343: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:07:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004929326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.709 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.728 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.729 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.730 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:07:34 np0005535469 nova_compute[254092]: 2025-11-25 18:07:34.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4344: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:36 np0005535469 nova_compute[254092]: 2025-11-25 18:07:36.730 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:36 np0005535469 nova_compute[254092]: 2025-11-25 18:07:36.731 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:07:36 np0005535469 nova_compute[254092]: 2025-11-25 18:07:36.731 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:07:36 np0005535469 nova_compute[254092]: 2025-11-25 18:07:36.744 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:07:36 np0005535469 nova_compute[254092]: 2025-11-25 18:07:36.745 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:37 np0005535469 nova_compute[254092]: 2025-11-25 18:07:37.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4345: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:39 np0005535469 nova_compute[254092]: 2025-11-25 18:07:39.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:07:40
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4346: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:40 np0005535469 nova_compute[254092]: 2025-11-25 18:07:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:07:40 np0005535469 nova_compute[254092]: 2025-11-25 18:07:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:07:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:07:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4347: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:42 np0005535469 nova_compute[254092]: 2025-11-25 18:07:42.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:07:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4348: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:45 np0005535469 nova_compute[254092]: 2025-11-25 18:07:45.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:07:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4349: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:46 np0005535469 nova_compute[254092]: 2025-11-25 18:07:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 13:07:47 np0005535469 nova_compute[254092]: 2025-11-25 18:07:47.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:07:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4350: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:50 np0005535469 nova_compute[254092]: 2025-11-25 18:07:50.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:07:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4351: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:07:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 561dd13c-5b87-4bb0-bd43-086fd34afe14 does not exist
Nov 25 13:07:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 661eebb7-9565-4bed-a345-daa46bb01a30 does not exist
Nov 25 13:07:50 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fed85de5-8287-49e6-a8b1-d7a3e97c8538 does not exist
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:07:50 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:07:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:07:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:07:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:07:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:07:51 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:07:51 np0005535469 podman[472434]: 2025-11-25 18:07:51.579208238 +0000 UTC m=+0.063360880 container create fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:07:51 np0005535469 systemd[1]: Started libpod-conmon-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope.
Nov 25 13:07:51 np0005535469 podman[472434]: 2025-11-25 18:07:51.542539773 +0000 UTC m=+0.026692465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:07:51 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:07:51 np0005535469 podman[472434]: 2025-11-25 18:07:51.71971219 +0000 UTC m=+0.203864862 container init fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 13:07:51 np0005535469 podman[472434]: 2025-11-25 18:07:51.728731206 +0000 UTC m=+0.212883858 container start fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 13:07:51 np0005535469 brave_brattain[472450]: 167 167
Nov 25 13:07:51 np0005535469 systemd[1]: libpod-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope: Deactivated successfully.
Nov 25 13:07:51 np0005535469 conmon[472450]: conmon fc0909373b59f524249d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope/container/memory.events
Nov 25 13:07:51 np0005535469 podman[472434]: 2025-11-25 18:07:51.752919661 +0000 UTC m=+0.237072333 container attach fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 13:07:51 np0005535469 podman[472434]: 2025-11-25 18:07:51.754231097 +0000 UTC m=+0.238383779 container died fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 13:07:51 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4ebebd122ee90e1f02440da3f4e7810e8fc23f1656f79e390a07e61102ffd57c-merged.mount: Deactivated successfully.
Nov 25 13:07:52 np0005535469 podman[472434]: 2025-11-25 18:07:52.015467475 +0000 UTC m=+0.499620157 container remove fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:07:52 np0005535469 systemd[1]: libpod-conmon-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope: Deactivated successfully.
Nov 25 13:07:52 np0005535469 podman[472475]: 2025-11-25 18:07:52.244900101 +0000 UTC m=+0.064817120 container create 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 13:07:52 np0005535469 systemd[1]: Started libpod-conmon-104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b.scope.
Nov 25 13:07:52 np0005535469 podman[472475]: 2025-11-25 18:07:52.209588333 +0000 UTC m=+0.029505432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:07:52 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:07:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:52 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:52 np0005535469 podman[472475]: 2025-11-25 18:07:52.367432396 +0000 UTC m=+0.187349425 container init 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:07:52 np0005535469 podman[472475]: 2025-11-25 18:07:52.37753077 +0000 UTC m=+0.197447779 container start 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 13:07:52 np0005535469 podman[472475]: 2025-11-25 18:07:52.388305202 +0000 UTC m=+0.208222241 container attach 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4352: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:07:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:07:52 np0005535469 nova_compute[254092]: 2025-11-25 18:07:52.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:07:53 np0005535469 quirky_greider[472491]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:07:53 np0005535469 quirky_greider[472491]: --> relative data size: 1.0
Nov 25 13:07:53 np0005535469 quirky_greider[472491]: --> All data devices are unavailable
Nov 25 13:07:53 np0005535469 systemd[1]: libpod-104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b.scope: Deactivated successfully.
Nov 25 13:07:53 np0005535469 podman[472475]: 2025-11-25 18:07:53.375557471 +0000 UTC m=+1.195474490 container died 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:07:53 np0005535469 systemd[1]: var-lib-containers-storage-overlay-680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade-merged.mount: Deactivated successfully.
Nov 25 13:07:53 np0005535469 podman[472475]: 2025-11-25 18:07:53.548514093 +0000 UTC m=+1.368431102 container remove 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:07:53 np0005535469 systemd[1]: libpod-conmon-104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b.scope: Deactivated successfully.
Nov 25 13:07:54 np0005535469 podman[472676]: 2025-11-25 18:07:54.225796231 +0000 UTC m=+0.090659060 container create 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 13:07:54 np0005535469 podman[472676]: 2025-11-25 18:07:54.155938196 +0000 UTC m=+0.020801035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:07:54 np0005535469 systemd[1]: Started libpod-conmon-7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76.scope.
Nov 25 13:07:54 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:07:54 np0005535469 podman[472676]: 2025-11-25 18:07:54.33372885 +0000 UTC m=+0.198591689 container init 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 13:07:54 np0005535469 podman[472676]: 2025-11-25 18:07:54.343048813 +0000 UTC m=+0.207911642 container start 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:07:54 np0005535469 sweet_sanderson[472693]: 167 167
Nov 25 13:07:54 np0005535469 systemd[1]: libpod-7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76.scope: Deactivated successfully.
Nov 25 13:07:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4353: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:54 np0005535469 podman[472676]: 2025-11-25 18:07:54.515103451 +0000 UTC m=+0.379966300 container attach 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 13:07:54 np0005535469 podman[472676]: 2025-11-25 18:07:54.516072358 +0000 UTC m=+0.380935177 container died 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 13:07:54 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7b0012523a69f04dadc2ca53afbf9adcddfe9c54c6b159bc620bcdb1dd6736e4-merged.mount: Deactivated successfully.
Nov 25 13:07:55 np0005535469 nova_compute[254092]: 2025-11-25 18:07:55.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:07:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105009306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:07:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:07:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105009306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:07:55 np0005535469 podman[472676]: 2025-11-25 18:07:55.480587459 +0000 UTC m=+1.345450278 container remove 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 13:07:55 np0005535469 systemd[1]: libpod-conmon-7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76.scope: Deactivated successfully.
Nov 25 13:07:55 np0005535469 podman[472715]: 2025-11-25 18:07:55.669482344 +0000 UTC m=+0.079350014 container create 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 13:07:55 np0005535469 podman[472715]: 2025-11-25 18:07:55.617788992 +0000 UTC m=+0.027656672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:07:55 np0005535469 systemd[1]: Started libpod-conmon-60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8.scope.
Nov 25 13:07:55 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:07:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:55 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:56 np0005535469 podman[472715]: 2025-11-25 18:07:56.002089859 +0000 UTC m=+0.411957519 container init 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 13:07:56 np0005535469 podman[472715]: 2025-11-25 18:07:56.008464712 +0000 UTC m=+0.418332362 container start 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:07:56 np0005535469 podman[472715]: 2025-11-25 18:07:56.173999314 +0000 UTC m=+0.583866974 container attach 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 13:07:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:07:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4354: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]: {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:    "0": [
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:        {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "devices": [
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "/dev/loop3"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            ],
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_name": "ceph_lv0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_size": "21470642176",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "name": "ceph_lv0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "tags": {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cluster_name": "ceph",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.crush_device_class": "",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.encrypted": "0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osd_id": "0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.type": "block",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.vdo": "0"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            },
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "type": "block",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "vg_name": "ceph_vg0"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:        }
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:    ],
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:    "1": [
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:        {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "devices": [
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "/dev/loop4"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            ],
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_name": "ceph_lv1",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_size": "21470642176",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "name": "ceph_lv1",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "tags": {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cluster_name": "ceph",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.crush_device_class": "",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.encrypted": "0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osd_id": "1",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.type": "block",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.vdo": "0"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            },
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "type": "block",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "vg_name": "ceph_vg1"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:        }
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:    ],
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:    "2": [
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:        {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "devices": [
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "/dev/loop5"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            ],
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_name": "ceph_lv2",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_size": "21470642176",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "name": "ceph_lv2",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "tags": {
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.cluster_name": "ceph",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.crush_device_class": "",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.encrypted": "0",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osd_id": "2",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.type": "block",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:                "ceph.vdo": "0"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            },
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "type": "block",
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:            "vg_name": "ceph_vg2"
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:        }
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]:    ]
Nov 25 13:07:56 np0005535469 zen_heyrovsky[472732]: }
Nov 25 13:07:56 np0005535469 systemd[1]: libpod-60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8.scope: Deactivated successfully.
Nov 25 13:07:56 np0005535469 podman[472715]: 2025-11-25 18:07:56.773063138 +0000 UTC m=+1.182930798 container died 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:07:57 np0005535469 systemd[1]: var-lib-containers-storage-overlay-98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4-merged.mount: Deactivated successfully.
Nov 25 13:07:57 np0005535469 podman[472715]: 2025-11-25 18:07:57.584707322 +0000 UTC m=+1.994574992 container remove 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:07:57 np0005535469 systemd[1]: libpod-conmon-60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8.scope: Deactivated successfully.
Nov 25 13:07:57 np0005535469 nova_compute[254092]: 2025-11-25 18:07:57.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:07:57 np0005535469 podman[472778]: 2025-11-25 18:07:57.757397188 +0000 UTC m=+0.060943095 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 13:07:57 np0005535469 podman[472777]: 2025-11-25 18:07:57.759650469 +0000 UTC m=+0.063471063 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:07:57 np0005535469 podman[472835]: 2025-11-25 18:07:57.855447598 +0000 UTC m=+0.078427559 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.204388025 +0000 UTC m=+0.052590087 container create 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:07:58 np0005535469 systemd[1]: Started libpod-conmon-82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491.scope.
Nov 25 13:07:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.172205092 +0000 UTC m=+0.020407174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.303771922 +0000 UTC m=+0.151974034 container init 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.317033382 +0000 UTC m=+0.165235444 container start 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:07:58 np0005535469 laughing_poitras[472972]: 167 167
Nov 25 13:07:58 np0005535469 systemd[1]: libpod-82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491.scope: Deactivated successfully.
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.349135483 +0000 UTC m=+0.197337645 container attach 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.351123097 +0000 UTC m=+0.199325249 container died 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 13:07:58 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3e7c53a124dc0239d3a5ba7af44edbf34f18792f0a36b90e535f12f75b2af47b-merged.mount: Deactivated successfully.
Nov 25 13:07:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4355: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:07:58 np0005535469 podman[472956]: 2025-11-25 18:07:58.507790788 +0000 UTC m=+0.355992860 container remove 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 13:07:58 np0005535469 systemd[1]: libpod-conmon-82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491.scope: Deactivated successfully.
Nov 25 13:07:58 np0005535469 podman[472998]: 2025-11-25 18:07:58.712280337 +0000 UTC m=+0.060742940 container create c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:07:58 np0005535469 systemd[1]: Started libpod-conmon-c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb.scope.
Nov 25 13:07:58 np0005535469 podman[472998]: 2025-11-25 18:07:58.674883681 +0000 UTC m=+0.023346304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:07:58 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:58 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:07:58 np0005535469 podman[472998]: 2025-11-25 18:07:58.863194941 +0000 UTC m=+0.211657644 container init c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:07:58 np0005535469 podman[472998]: 2025-11-25 18:07:58.871694593 +0000 UTC m=+0.220157216 container start c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 13:07:58 np0005535469 podman[472998]: 2025-11-25 18:07:58.939685787 +0000 UTC m=+0.288148450 container attach c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]: {
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "osd_id": 1,
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "type": "bluestore"
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:    },
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "osd_id": 2,
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "type": "bluestore"
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:    },
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "osd_id": 0,
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:        "type": "bluestore"
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]:    }
Nov 25 13:07:59 np0005535469 flamboyant_archimedes[473014]: }
Nov 25 13:07:59 np0005535469 systemd[1]: libpod-c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb.scope: Deactivated successfully.
Nov 25 13:07:59 np0005535469 podman[472998]: 2025-11-25 18:07:59.826550241 +0000 UTC m=+1.175012844 container died c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 13:07:59 np0005535469 systemd[1]: var-lib-containers-storage-overlay-9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5-merged.mount: Deactivated successfully.
Nov 25 13:08:00 np0005535469 nova_compute[254092]: 2025-11-25 18:08:00.025 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:00 np0005535469 podman[472998]: 2025-11-25 18:08:00.154436918 +0000 UTC m=+1.502899521 container remove c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 13:08:00 np0005535469 systemd[1]: libpod-conmon-c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb.scope: Deactivated successfully.
Nov 25 13:08:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:08:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:08:00 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:08:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4356: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:00 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:08:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev a73b9244-b586-4bc6-aac4-c584999d5f05 does not exist
Nov 25 13:08:00 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev d35225c1-eca7-44d5-9190-4ca8857e0ed1 does not exist
Nov 25 13:08:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:08:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:08:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4357: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:02 np0005535469 nova_compute[254092]: 2025-11-25 18:08:02.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4358: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:05 np0005535469 nova_compute[254092]: 2025-11-25 18:08:05.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4359: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:07 np0005535469 nova_compute[254092]: 2025-11-25 18:08:07.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4360: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:10 np0005535469 nova_compute[254092]: 2025-11-25 18:08:10.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:08:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4361: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4362: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:12 np0005535469 nova_compute[254092]: 2025-11-25 18:08:12.712 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:08:13.715 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:08:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:08:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:08:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:08:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4363: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:15 np0005535469 nova_compute[254092]: 2025-11-25 18:08:15.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4364: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:17 np0005535469 nova_compute[254092]: 2025-11-25 18:08:17.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4365: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:20 np0005535469 nova_compute[254092]: 2025-11-25 18:08:20.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4366: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4367: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:22 np0005535469 nova_compute[254092]: 2025-11-25 18:08:22.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4368: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:24 np0005535469 nova_compute[254092]: 2025-11-25 18:08:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:25 np0005535469 nova_compute[254092]: 2025-11-25 18:08:25.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4369: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:27 np0005535469 nova_compute[254092]: 2025-11-25 18:08:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:27 np0005535469 nova_compute[254092]: 2025-11-25 18:08:27.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4370: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:28 np0005535469 podman[473114]: 2025-11-25 18:08:28.66426702 +0000 UTC m=+0.071974083 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 13:08:28 np0005535469 podman[473113]: 2025-11-25 18:08:28.675260448 +0000 UTC m=+0.081701217 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 13:08:28 np0005535469 podman[473115]: 2025-11-25 18:08:28.710369421 +0000 UTC m=+0.116923433 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:08:30 np0005535469 nova_compute[254092]: 2025-11-25 18:08:30.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4371: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4372: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:32 np0005535469 nova_compute[254092]: 2025-11-25 18:08:32.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:33 np0005535469 nova_compute[254092]: 2025-11-25 18:08:33.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4373: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:34 np0005535469 nova_compute[254092]: 2025-11-25 18:08:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:34 np0005535469 nova_compute[254092]: 2025-11-25 18:08:34.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:08:34 np0005535469 nova_compute[254092]: 2025-11-25 18:08:34.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:08:34 np0005535469 nova_compute[254092]: 2025-11-25 18:08:34.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:08:34 np0005535469 nova_compute[254092]: 2025-11-25 18:08:34.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:08:34 np0005535469 nova_compute[254092]: 2025-11-25 18:08:34.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:08:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111203199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.112 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.290 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.291 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.292 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.342 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.342 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.357 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:08:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:08:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697577250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.791 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.797 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.813 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.815 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:08:35 np0005535469 nova_compute[254092]: 2025-11-25 18:08:35.815 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.297222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116297244, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 256, "total_data_size": 1149571, "memory_usage": 1169328, "flush_reason": "Manual Compaction"}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116305893, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 1128054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89137, "largest_seqno": 89993, "table_properties": {"data_size": 1123712, "index_size": 1993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9342, "raw_average_key_size": 19, "raw_value_size": 1115049, "raw_average_value_size": 2275, "num_data_blocks": 89, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094042, "oldest_key_time": 1764094042, "file_creation_time": 1764094116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 8700 microseconds, and 2949 cpu microseconds.
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.305921) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 1128054 bytes OK
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.305934) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309239) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309252) EVENT_LOG_v1 {"time_micros": 1764094116309249, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 1145332, prev total WAL file size 1145332, number of live WAL files 2.
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303139' seq:72057594037927935, type:22 .. '6C6F676D0034323731' seq:0, type:0; will stop at (end)
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(1101KB)], [212(9388KB)]
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116309693, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 10742106, "oldest_snapshot_seqno": -1}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 10150 keys, 10640519 bytes, temperature: kUnknown
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116380081, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 10640519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10579462, "index_size": 34576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 269443, "raw_average_key_size": 26, "raw_value_size": 10404669, "raw_average_value_size": 1025, "num_data_blocks": 1322, "num_entries": 10150, "num_filter_entries": 10150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.380333) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 10640519 bytes
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.382618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.4 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.2 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(19.0) write-amplify(9.4) OK, records in: 10674, records dropped: 524 output_compression: NoCompression
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.382683) EVENT_LOG_v1 {"time_micros": 1764094116382669, "job": 134, "event": "compaction_finished", "compaction_time_micros": 70481, "compaction_time_cpu_micros": 24685, "output_level": 6, "num_output_files": 1, "total_output_size": 10640519, "num_input_records": 10674, "num_output_records": 10150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116383175, "job": 134, "event": "table_file_deletion", "file_number": 214}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116386610, "job": 134, "event": "table_file_deletion", "file_number": 212}
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:08:36 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:08:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4374: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:36 np0005535469 nova_compute[254092]: 2025-11-25 18:08:36.816 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:36 np0005535469 nova_compute[254092]: 2025-11-25 18:08:36.816 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:08:36 np0005535469 nova_compute[254092]: 2025-11-25 18:08:36.817 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:08:36 np0005535469 nova_compute[254092]: 2025-11-25 18:08:36.831 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:08:36 np0005535469 nova_compute[254092]: 2025-11-25 18:08:36.831 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:36 np0005535469 nova_compute[254092]: 2025-11-25 18:08:36.831 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:37 np0005535469 nova_compute[254092]: 2025-11-25 18:08:37.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4375: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:40 np0005535469 nova_compute[254092]: 2025-11-25 18:08:40.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:08:40
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:08:40 np0005535469 nova_compute[254092]: 2025-11-25 18:08:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:40 np0005535469 nova_compute[254092]: 2025-11-25 18:08:40.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4376: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:08:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:08:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4377: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:42 np0005535469 nova_compute[254092]: 2025-11-25 18:08:42.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4378: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:45 np0005535469 nova_compute[254092]: 2025-11-25 18:08:45.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4379: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:47 np0005535469 nova_compute[254092]: 2025-11-25 18:08:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:47 np0005535469 nova_compute[254092]: 2025-11-25 18:08:47.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4380: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:50 np0005535469 nova_compute[254092]: 2025-11-25 18:08:50.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4381: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:51 np0005535469 nova_compute[254092]: 2025-11-25 18:08:51.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4382: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:08:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:08:52 np0005535469 nova_compute[254092]: 2025-11-25 18:08:52.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4383: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:55 np0005535469 nova_compute[254092]: 2025-11-25 18:08:55.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:08:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195361864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:08:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:08:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195361864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:08:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:08:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4384: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:57 np0005535469 nova_compute[254092]: 2025-11-25 18:08:57.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:08:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4385: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:08:59 np0005535469 podman[473217]: 2025-11-25 18:08:59.637532804 +0000 UTC m=+0.055958409 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 13:08:59 np0005535469 podman[473218]: 2025-11-25 18:08:59.63776442 +0000 UTC m=+0.052607538 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 13:08:59 np0005535469 podman[473219]: 2025-11-25 18:08:59.664491475 +0000 UTC m=+0.076350872 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 13:09:00 np0005535469 nova_compute[254092]: 2025-11-25 18:09:00.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4386: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:09:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 060efeea-a335-4e3b-bd7c-5961f63336b5 does not exist
Nov 25 13:09:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 406d7ba9-4fa3-48bd-a082-25ea142b1049 does not exist
Nov 25 13:09:01 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 64c32308-eaeb-490f-95d7-10797319a082 does not exist
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:09:01 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.113757545 +0000 UTC m=+0.062059214 container create 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 13:09:02 np0005535469 systemd[1]: Started libpod-conmon-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope.
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.073448611 +0000 UTC m=+0.021750320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:09:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.20940015 +0000 UTC m=+0.157701899 container init 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.216813062 +0000 UTC m=+0.165114761 container start 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.221297073 +0000 UTC m=+0.169598772 container attach 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:09:02 np0005535469 agitated_gates[473573]: 167 167
Nov 25 13:09:02 np0005535469 systemd[1]: libpod-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope: Deactivated successfully.
Nov 25 13:09:02 np0005535469 conmon[473573]: conmon 7a451e678e38a1c11113 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope/container/memory.events
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.22595337 +0000 UTC m=+0.174255029 container died 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:09:02 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0621d5f16e8a9bc26ca411ed175018d606e4a68e6c39b74deef54c94698ff988-merged.mount: Deactivated successfully.
Nov 25 13:09:02 np0005535469 podman[473557]: 2025-11-25 18:09:02.270630832 +0000 UTC m=+0.218932491 container remove 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:09:02 np0005535469 systemd[1]: libpod-conmon-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope: Deactivated successfully.
Nov 25 13:09:02 np0005535469 podman[473597]: 2025-11-25 18:09:02.483454237 +0000 UTC m=+0.099936653 container create a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 13:09:02 np0005535469 podman[473597]: 2025-11-25 18:09:02.406959241 +0000 UTC m=+0.023441667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:09:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4387: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:02 np0005535469 systemd[1]: Started libpod-conmon-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope.
Nov 25 13:09:02 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:09:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:02 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:02 np0005535469 podman[473597]: 2025-11-25 18:09:02.585163916 +0000 UTC m=+0.201646332 container init a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 13:09:02 np0005535469 podman[473597]: 2025-11-25 18:09:02.592797614 +0000 UTC m=+0.209280030 container start a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:09:02 np0005535469 podman[473597]: 2025-11-25 18:09:02.59747611 +0000 UTC m=+0.213958536 container attach a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:09:02 np0005535469 nova_compute[254092]: 2025-11-25 18:09:02.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:03 np0005535469 trusting_nightingale[473613]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:09:03 np0005535469 trusting_nightingale[473613]: --> relative data size: 1.0
Nov 25 13:09:03 np0005535469 trusting_nightingale[473613]: --> All data devices are unavailable
Nov 25 13:09:03 np0005535469 systemd[1]: libpod-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope: Deactivated successfully.
Nov 25 13:09:03 np0005535469 systemd[1]: libpod-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope: Consumed 1.057s CPU time.
Nov 25 13:09:03 np0005535469 podman[473597]: 2025-11-25 18:09:03.712854746 +0000 UTC m=+1.329337192 container died a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:09:03 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf-merged.mount: Deactivated successfully.
Nov 25 13:09:03 np0005535469 podman[473597]: 2025-11-25 18:09:03.786469023 +0000 UTC m=+1.402951429 container remove a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 13:09:03 np0005535469 systemd[1]: libpod-conmon-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope: Deactivated successfully.
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.392430495 +0000 UTC m=+0.040191541 container create 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:09:04 np0005535469 systemd[1]: Started libpod-conmon-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope.
Nov 25 13:09:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.463079412 +0000 UTC m=+0.110840358 container init 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.375019743 +0000 UTC m=+0.022780689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.469183368 +0000 UTC m=+0.116944294 container start 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.472353794 +0000 UTC m=+0.120114750 container attach 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:09:04 np0005535469 admiring_leakey[473813]: 167 167
Nov 25 13:09:04 np0005535469 systemd[1]: libpod-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope: Deactivated successfully.
Nov 25 13:09:04 np0005535469 conmon[473813]: conmon 053f4563973dc3491011 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope/container/memory.events
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.474844732 +0000 UTC m=+0.122605668 container died 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 13:09:04 np0005535469 systemd[1]: var-lib-containers-storage-overlay-231e3300730934385c2a718ebdd11c5d03d96b262d2c63dc4ecdf35193784cad-merged.mount: Deactivated successfully.
Nov 25 13:09:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4388: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:04 np0005535469 podman[473797]: 2025-11-25 18:09:04.510485118 +0000 UTC m=+0.158246044 container remove 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 13:09:04 np0005535469 systemd[1]: libpod-conmon-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope: Deactivated successfully.
Nov 25 13:09:04 np0005535469 podman[473837]: 2025-11-25 18:09:04.733230253 +0000 UTC m=+0.055830696 container create 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:09:04 np0005535469 systemd[1]: Started libpod-conmon-33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188.scope.
Nov 25 13:09:04 np0005535469 podman[473837]: 2025-11-25 18:09:04.708151432 +0000 UTC m=+0.030751935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:09:04 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:09:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:04 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:04 np0005535469 podman[473837]: 2025-11-25 18:09:04.827392407 +0000 UTC m=+0.149992830 container init 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 13:09:04 np0005535469 podman[473837]: 2025-11-25 18:09:04.835921679 +0000 UTC m=+0.158522082 container start 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:09:04 np0005535469 podman[473837]: 2025-11-25 18:09:04.839782884 +0000 UTC m=+0.162383287 container attach 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 13:09:05 np0005535469 nova_compute[254092]: 2025-11-25 18:09:05.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:09:05 np0005535469 jovial_edison[473854]: {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:    "0": [
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:        {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "devices": [
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "/dev/loop3"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            ],
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_name": "ceph_lv0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_size": "21470642176",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "name": "ceph_lv0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "tags": {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cluster_name": "ceph",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.crush_device_class": "",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.encrypted": "0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osd_id": "0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.type": "block",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.vdo": "0"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            },
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "type": "block",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "vg_name": "ceph_vg0"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:        }
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:    ],
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:    "1": [
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:        {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "devices": [
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "/dev/loop4"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            ],
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_name": "ceph_lv1",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_size": "21470642176",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "name": "ceph_lv1",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "tags": {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cluster_name": "ceph",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.crush_device_class": "",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.encrypted": "0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osd_id": "1",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.type": "block",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.vdo": "0"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            },
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "type": "block",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "vg_name": "ceph_vg1"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:        }
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:    ],
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:    "2": [
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:        {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "devices": [
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "/dev/loop5"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            ],
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_name": "ceph_lv2",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_size": "21470642176",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "name": "ceph_lv2",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "tags": {
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.cluster_name": "ceph",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.crush_device_class": "",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.encrypted": "0",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osd_id": "2",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.type": "block",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:                "ceph.vdo": "0"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            },
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "type": "block",
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:            "vg_name": "ceph_vg2"
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:        }
Nov 25 13:09:05 np0005535469 jovial_edison[473854]:    ]
Nov 25 13:09:05 np0005535469 jovial_edison[473854]: }
Nov 25 13:09:05 np0005535469 systemd[1]: libpod-33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188.scope: Deactivated successfully.
Nov 25 13:09:05 np0005535469 podman[473837]: 2025-11-25 18:09:05.658134699 +0000 UTC m=+0.980735172 container died 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:09:05 np0005535469 systemd[1]: var-lib-containers-storage-overlay-6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4-merged.mount: Deactivated successfully.
Nov 25 13:09:05 np0005535469 podman[473837]: 2025-11-25 18:09:05.899525179 +0000 UTC m=+1.222125612 container remove 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 13:09:05 np0005535469 systemd[1]: libpod-conmon-33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188.scope: Deactivated successfully.
Nov 25 13:09:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4389: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.678426804 +0000 UTC m=+0.112374730 container create 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.589014228 +0000 UTC m=+0.022962194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:09:06 np0005535469 systemd[1]: Started libpod-conmon-3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d.scope.
Nov 25 13:09:06 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.811487585 +0000 UTC m=+0.245435541 container init 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.817830877 +0000 UTC m=+0.251778813 container start 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.820975662 +0000 UTC m=+0.254923598 container attach 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 13:09:06 np0005535469 upbeat_raman[474036]: 167 167
Nov 25 13:09:06 np0005535469 systemd[1]: libpod-3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d.scope: Deactivated successfully.
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.829181915 +0000 UTC m=+0.263129851 container died 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 13:09:06 np0005535469 systemd[1]: var-lib-containers-storage-overlay-2f22ffa39ae4eb3fc67177b96e18159f096d03bb94fab0195416c740bb39772c-merged.mount: Deactivated successfully.
Nov 25 13:09:06 np0005535469 podman[474019]: 2025-11-25 18:09:06.878454982 +0000 UTC m=+0.312402918 container remove 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 13:09:06 np0005535469 systemd[1]: libpod-conmon-3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d.scope: Deactivated successfully.
Nov 25 13:09:07 np0005535469 podman[474059]: 2025-11-25 18:09:07.056812511 +0000 UTC m=+0.038801304 container create 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 13:09:07 np0005535469 systemd[1]: Started libpod-conmon-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope.
Nov 25 13:09:07 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:09:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:07 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:09:07 np0005535469 podman[474059]: 2025-11-25 18:09:07.039692607 +0000 UTC m=+0.021681420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:09:07 np0005535469 podman[474059]: 2025-11-25 18:09:07.142769544 +0000 UTC m=+0.124758397 container init 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:09:07 np0005535469 podman[474059]: 2025-11-25 18:09:07.149422974 +0000 UTC m=+0.131411767 container start 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:09:07 np0005535469 podman[474059]: 2025-11-25 18:09:07.15368158 +0000 UTC m=+0.135670403 container attach 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:09:07 np0005535469 nova_compute[254092]: 2025-11-25 18:09:07.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]: {
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "osd_id": 1,
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "type": "bluestore"
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:    },
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "osd_id": 2,
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "type": "bluestore"
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:    },
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "osd_id": 0,
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:        "type": "bluestore"
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]:    }
Nov 25 13:09:08 np0005535469 hardcore_lichterman[474074]: }
Nov 25 13:09:08 np0005535469 systemd[1]: libpod-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope: Deactivated successfully.
Nov 25 13:09:08 np0005535469 podman[474059]: 2025-11-25 18:09:08.166709737 +0000 UTC m=+1.148698540 container died 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:09:08 np0005535469 systemd[1]: libpod-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope: Consumed 1.029s CPU time.
Nov 25 13:09:08 np0005535469 systemd[1]: var-lib-containers-storage-overlay-7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9-merged.mount: Deactivated successfully.
Nov 25 13:09:08 np0005535469 podman[474059]: 2025-11-25 18:09:08.363283342 +0000 UTC m=+1.345272135 container remove 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:09:08 np0005535469 systemd[1]: libpod-conmon-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope: Deactivated successfully.
Nov 25 13:09:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:09:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:09:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:09:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4390: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:09:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2fdf26f8-1fba-475e-b70c-77a856728538 does not exist
Nov 25 13:09:08 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 4323d5b8-ca6e-4767-ae8e-3e4cb47277d7 does not exist
Nov 25 13:09:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:09:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:09:10 np0005535469 nova_compute[254092]: 2025-11-25 18:09:10.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:09:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4391: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4392: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:12 np0005535469 nova_compute[254092]: 2025-11-25 18:09:12.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:09:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:09:13.717 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:09:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:09:13.717 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:09:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4393: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:15 np0005535469 nova_compute[254092]: 2025-11-25 18:09:15.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4394: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:17 np0005535469 nova_compute[254092]: 2025-11-25 18:09:17.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4395: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:20 np0005535469 nova_compute[254092]: 2025-11-25 18:09:20.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4396: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4397: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:22 np0005535469 nova_compute[254092]: 2025-11-25 18:09:22.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4398: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:25 np0005535469 nova_compute[254092]: 2025-11-25 18:09:25.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:25 np0005535469 nova_compute[254092]: 2025-11-25 18:09:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4399: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:27 np0005535469 nova_compute[254092]: 2025-11-25 18:09:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:27 np0005535469 nova_compute[254092]: 2025-11-25 18:09:27.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4400: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:30 np0005535469 nova_compute[254092]: 2025-11-25 18:09:30.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4401: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:30 np0005535469 podman[474173]: 2025-11-25 18:09:30.654126708 +0000 UTC m=+0.066399113 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 13:09:30 np0005535469 podman[474172]: 2025-11-25 18:09:30.683513605 +0000 UTC m=+0.095904043 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 13:09:30 np0005535469 podman[474174]: 2025-11-25 18:09:30.732487855 +0000 UTC m=+0.135627682 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:09:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4402: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:32 np0005535469 nova_compute[254092]: 2025-11-25 18:09:32.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4403: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:09:34 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:09:34 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331853356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:09:34 np0005535469 nova_compute[254092]: 2025-11-25 18:09:34.965 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.106 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.108 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3617MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.108 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.108 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.701 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:09:35 np0005535469 nova_compute[254092]: 2025-11-25 18:09:35.701 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:09:36 np0005535469 nova_compute[254092]: 2025-11-25 18:09:36.112 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:09:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4404: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:09:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642846535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:09:36 np0005535469 nova_compute[254092]: 2025-11-25 18:09:36.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:09:36 np0005535469 nova_compute[254092]: 2025-11-25 18:09:36.554 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:09:36 np0005535469 nova_compute[254092]: 2025-11-25 18:09:36.566 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:09:36 np0005535469 nova_compute[254092]: 2025-11-25 18:09:36.567 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:09:36 np0005535469 nova_compute[254092]: 2025-11-25 18:09:36.568 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 13:09:37 np0005535469 nova_compute[254092]: 2025-11-25 18:09:37.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4405: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:09:40 np0005535469 nova_compute[254092]: 2025-11-25 18:09:40.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:09:40
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4406: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:09:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:09:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:41 np0005535469 nova_compute[254092]: 2025-11-25 18:09:41.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:41 np0005535469 nova_compute[254092]: 2025-11-25 18:09:41.504 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:09:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4407: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:42 np0005535469 nova_compute[254092]: 2025-11-25 18:09:42.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4408: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:45 np0005535469 nova_compute[254092]: 2025-11-25 18:09:45.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4409: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:47 np0005535469 nova_compute[254092]: 2025-11-25 18:09:47.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:48 np0005535469 nova_compute[254092]: 2025-11-25 18:09:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:09:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4410: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:50 np0005535469 nova_compute[254092]: 2025-11-25 18:09:50.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4411: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4412: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:09:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:09:52 np0005535469 nova_compute[254092]: 2025-11-25 18:09:52.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4413: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:55 np0005535469 nova_compute[254092]: 2025-11-25 18:09:55.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:09:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828155944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:09:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:09:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828155944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:09:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:09:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4414: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:09:57 np0005535469 nova_compute[254092]: 2025-11-25 18:09:57.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:09:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4415: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:00 np0005535469 nova_compute[254092]: 2025-11-25 18:10:00.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4416: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:01 np0005535469 podman[474284]: 2025-11-25 18:10:01.625761759 +0000 UTC m=+0.045151887 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 13:10:01 np0005535469 podman[474283]: 2025-11-25 18:10:01.633724494 +0000 UTC m=+0.056642818 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:10:01 np0005535469 podman[474285]: 2025-11-25 18:10:01.668966331 +0000 UTC m=+0.082257983 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:10:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4417: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:02 np0005535469 nova_compute[254092]: 2025-11-25 18:10:02.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4418: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:05 np0005535469 nova_compute[254092]: 2025-11-25 18:10:05.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:06 np0005535469 nova_compute[254092]: 2025-11-25 18:10:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:06 np0005535469 nova_compute[254092]: 2025-11-25 18:10:06.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 13:10:06 np0005535469 nova_compute[254092]: 2025-11-25 18:10:06.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 13:10:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4419: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:07 np0005535469 nova_compute[254092]: 2025-11-25 18:10:07.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4420: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:10:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev f885bc6e-55f0-4884-b7c4-12e2609749ec does not exist
Nov 25 13:10:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev b7d718a6-c3b5-467d-b275-5c426ca39ee1 does not exist
Nov 25 13:10:09 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev fcc65d9b-8af6-4034-a3bd-4445a377cb6f does not exist
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:10:09 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:10:09 np0005535469 podman[474619]: 2025-11-25 18:10:09.97720101 +0000 UTC m=+0.040307355 container create 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 13:10:10 np0005535469 systemd[1]: Started libpod-conmon-830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa.scope.
Nov 25 13:10:10 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:10:10 np0005535469 podman[474619]: 2025-11-25 18:10:09.960790495 +0000 UTC m=+0.023896840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:10:10 np0005535469 podman[474619]: 2025-11-25 18:10:10.061474807 +0000 UTC m=+0.124581222 container init 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 13:10:10 np0005535469 podman[474619]: 2025-11-25 18:10:10.06970417 +0000 UTC m=+0.132810505 container start 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 13:10:10 np0005535469 podman[474619]: 2025-11-25 18:10:10.073078582 +0000 UTC m=+0.136185007 container attach 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:10:10 np0005535469 mystifying_chaplygin[474635]: 167 167
Nov 25 13:10:10 np0005535469 systemd[1]: libpod-830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa.scope: Deactivated successfully.
Nov 25 13:10:10 np0005535469 podman[474619]: 2025-11-25 18:10:10.076747821 +0000 UTC m=+0.139854156 container died 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:10:10 np0005535469 systemd[1]: var-lib-containers-storage-overlay-4daaebc09c98882e3fb66e93c168fb0a8bc6f379d3f4dc7d80297cdd9b5ffab2-merged.mount: Deactivated successfully.
Nov 25 13:10:10 np0005535469 podman[474619]: 2025-11-25 18:10:10.115349029 +0000 UTC m=+0.178455364 container remove 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 13:10:10 np0005535469 systemd[1]: libpod-conmon-830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa.scope: Deactivated successfully.
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:10:10 np0005535469 nova_compute[254092]: 2025-11-25 18:10:10.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:10 np0005535469 podman[474661]: 2025-11-25 18:10:10.309319672 +0000 UTC m=+0.045192037 container create e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:10:10 np0005535469 systemd[1]: Started libpod-conmon-e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f.scope.
Nov 25 13:10:10 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:10:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:10 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:10 np0005535469 podman[474661]: 2025-11-25 18:10:10.292324901 +0000 UTC m=+0.028197296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:10:10 np0005535469 podman[474661]: 2025-11-25 18:10:10.390883535 +0000 UTC m=+0.126755930 container init e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 13:10:10 np0005535469 podman[474661]: 2025-11-25 18:10:10.396718873 +0000 UTC m=+0.132591228 container start e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:10:10 np0005535469 podman[474661]: 2025-11-25 18:10:10.399559161 +0000 UTC m=+0.135431556 container attach e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 13:10:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4421: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:11 np0005535469 youthful_kowalevski[474677]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:10:11 np0005535469 youthful_kowalevski[474677]: --> relative data size: 1.0
Nov 25 13:10:11 np0005535469 youthful_kowalevski[474677]: --> All data devices are unavailable
Nov 25 13:10:11 np0005535469 systemd[1]: libpod-e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f.scope: Deactivated successfully.
Nov 25 13:10:11 np0005535469 podman[474661]: 2025-11-25 18:10:11.406573255 +0000 UTC m=+1.142445610 container died e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 13:10:11 np0005535469 systemd[1]: var-lib-containers-storage-overlay-e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7-merged.mount: Deactivated successfully.
Nov 25 13:10:11 np0005535469 podman[474661]: 2025-11-25 18:10:11.581872352 +0000 UTC m=+1.317744717 container remove e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:10:11 np0005535469 systemd[1]: libpod-conmon-e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f.scope: Deactivated successfully.
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.197465605 +0000 UTC m=+0.037934240 container create c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:10:12 np0005535469 systemd[1]: Started libpod-conmon-c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1.scope.
Nov 25 13:10:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.18139505 +0000 UTC m=+0.021863705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.285667179 +0000 UTC m=+0.126135834 container init c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.293062899 +0000 UTC m=+0.133531534 container start c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.295969618 +0000 UTC m=+0.136438273 container attach c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:10:12 np0005535469 elastic_elion[474876]: 167 167
Nov 25 13:10:12 np0005535469 systemd[1]: libpod-c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1.scope: Deactivated successfully.
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.298592629 +0000 UTC m=+0.139061264 container died c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:10:12 np0005535469 systemd[1]: var-lib-containers-storage-overlay-77725b266f4ed036a4d935f98857517dd572a44abb7863c2f0d706cee2fc77e0-merged.mount: Deactivated successfully.
Nov 25 13:10:12 np0005535469 podman[474860]: 2025-11-25 18:10:12.335756268 +0000 UTC m=+0.176224903 container remove c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 13:10:12 np0005535469 systemd[1]: libpod-conmon-c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1.scope: Deactivated successfully.
Nov 25 13:10:12 np0005535469 podman[474902]: 2025-11-25 18:10:12.506078859 +0000 UTC m=+0.057628564 container create cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 13:10:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4422: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:12 np0005535469 systemd[1]: Started libpod-conmon-cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028.scope.
Nov 25 13:10:12 np0005535469 podman[474902]: 2025-11-25 18:10:12.478264265 +0000 UTC m=+0.029814070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:10:12 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:10:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:12 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:12 np0005535469 podman[474902]: 2025-11-25 18:10:12.654472165 +0000 UTC m=+0.206021900 container init cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 13:10:12 np0005535469 podman[474902]: 2025-11-25 18:10:12.661957519 +0000 UTC m=+0.213507224 container start cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 13:10:12 np0005535469 podman[474902]: 2025-11-25 18:10:12.728029752 +0000 UTC m=+0.279579477 container attach cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:10:12 np0005535469 nova_compute[254092]: 2025-11-25 18:10:12.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:13 np0005535469 pensive_edison[474918]: {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:    "0": [
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:        {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "devices": [
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "/dev/loop3"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            ],
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_name": "ceph_lv0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_size": "21470642176",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "name": "ceph_lv0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "tags": {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cluster_name": "ceph",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.crush_device_class": "",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.encrypted": "0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osd_id": "0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.type": "block",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.vdo": "0"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            },
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "type": "block",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "vg_name": "ceph_vg0"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:        }
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:    ],
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:    "1": [
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:        {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "devices": [
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "/dev/loop4"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            ],
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_name": "ceph_lv1",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_size": "21470642176",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "name": "ceph_lv1",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "tags": {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cluster_name": "ceph",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.crush_device_class": "",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.encrypted": "0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osd_id": "1",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.type": "block",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.vdo": "0"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            },
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "type": "block",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "vg_name": "ceph_vg1"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:        }
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:    ],
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:    "2": [
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:        {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "devices": [
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "/dev/loop5"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            ],
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_name": "ceph_lv2",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_size": "21470642176",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "name": "ceph_lv2",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "tags": {
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.cluster_name": "ceph",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.crush_device_class": "",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.encrypted": "0",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osd_id": "2",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.type": "block",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:                "ceph.vdo": "0"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            },
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "type": "block",
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:            "vg_name": "ceph_vg2"
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:        }
Nov 25 13:10:13 np0005535469 pensive_edison[474918]:    ]
Nov 25 13:10:13 np0005535469 pensive_edison[474918]: }
Nov 25 13:10:13 np0005535469 systemd[1]: libpod-cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028.scope: Deactivated successfully.
Nov 25 13:10:13 np0005535469 podman[474902]: 2025-11-25 18:10:13.434527372 +0000 UTC m=+0.986077087 container died cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:10:13 np0005535469 systemd[1]: var-lib-containers-storage-overlay-b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645-merged.mount: Deactivated successfully.
Nov 25 13:10:13 np0005535469 podman[474902]: 2025-11-25 18:10:13.558457595 +0000 UTC m=+1.110007330 container remove cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:10:13 np0005535469 systemd[1]: libpod-conmon-cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028.scope: Deactivated successfully.
Nov 25 13:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:10:13.718 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:10:13.719 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:10:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:10:13.719 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.370682564 +0000 UTC m=+0.040813529 container create bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:10:14 np0005535469 systemd[1]: Started libpod-conmon-bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3.scope.
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.353025815 +0000 UTC m=+0.023156820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:10:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.481195593 +0000 UTC m=+0.151326598 container init bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.490241148 +0000 UTC m=+0.160372113 container start bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.493087005 +0000 UTC m=+0.163217970 container attach bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:10:14 np0005535469 condescending_hawking[475098]: 167 167
Nov 25 13:10:14 np0005535469 systemd[1]: libpod-bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3.scope: Deactivated successfully.
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.496926139 +0000 UTC m=+0.167057104 container died bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:10:14 np0005535469 nova_compute[254092]: 2025-11-25 18:10:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:14 np0005535469 systemd[1]: var-lib-containers-storage-overlay-75dc6dfa47b30ec8b13779ec3d921d2457ade029da22c662d08116a099077f8b-merged.mount: Deactivated successfully.
Nov 25 13:10:14 np0005535469 podman[475082]: 2025-11-25 18:10:14.528723052 +0000 UTC m=+0.198854017 container remove bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:10:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4423: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:14 np0005535469 systemd[1]: libpod-conmon-bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3.scope: Deactivated successfully.
Nov 25 13:10:14 np0005535469 podman[475120]: 2025-11-25 18:10:14.710236127 +0000 UTC m=+0.064573283 container create 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:10:14 np0005535469 podman[475120]: 2025-11-25 18:10:14.667314973 +0000 UTC m=+0.021652159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:10:14 np0005535469 systemd[1]: Started libpod-conmon-244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6.scope.
Nov 25 13:10:14 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:10:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:14 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:10:14 np0005535469 podman[475120]: 2025-11-25 18:10:14.872630023 +0000 UTC m=+0.226967209 container init 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 13:10:14 np0005535469 podman[475120]: 2025-11-25 18:10:14.881983778 +0000 UTC m=+0.236320934 container start 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 13:10:14 np0005535469 podman[475120]: 2025-11-25 18:10:14.903753248 +0000 UTC m=+0.258090414 container attach 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 13:10:15 np0005535469 nova_compute[254092]: 2025-11-25 18:10:15.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]: {
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "osd_id": 1,
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "type": "bluestore"
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:    },
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "osd_id": 2,
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "type": "bluestore"
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:    },
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "osd_id": 0,
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:        "type": "bluestore"
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]:    }
Nov 25 13:10:15 np0005535469 adoring_mestorf[475136]: }
Nov 25 13:10:15 np0005535469 systemd[1]: libpod-244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6.scope: Deactivated successfully.
Nov 25 13:10:15 np0005535469 podman[475120]: 2025-11-25 18:10:15.847221488 +0000 UTC m=+1.201558634 container died 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 13:10:15 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6-merged.mount: Deactivated successfully.
Nov 25 13:10:16 np0005535469 podman[475120]: 2025-11-25 18:10:16.11040803 +0000 UTC m=+1.464745216 container remove 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 13:10:16 np0005535469 systemd[1]: libpod-conmon-244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6.scope: Deactivated successfully.
Nov 25 13:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:10:16 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:10:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ae17b0c7-7c63-4d49-a6dd-c603eff77393 does not exist
Nov 25 13:10:16 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 1fb7cf95-e341-4bfa-98dd-e7f4fba4d181 does not exist
Nov 25 13:10:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4424: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:10:17 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:10:17 np0005535469 nova_compute[254092]: 2025-11-25 18:10:17.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4425: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:20 np0005535469 nova_compute[254092]: 2025-11-25 18:10:20.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4426: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4427: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:22 np0005535469 nova_compute[254092]: 2025-11-25 18:10:22.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4428: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:25 np0005535469 nova_compute[254092]: 2025-11-25 18:10:25.198 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4429: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:27 np0005535469 nova_compute[254092]: 2025-11-25 18:10:27.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:27 np0005535469 nova_compute[254092]: 2025-11-25 18:10:27.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4430: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:29 np0005535469 nova_compute[254092]: 2025-11-25 18:10:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:30 np0005535469 nova_compute[254092]: 2025-11-25 18:10:30.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4431: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4432: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:32 np0005535469 podman[475235]: 2025-11-25 18:10:32.67254046 +0000 UTC m=+0.078863141 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:10:32 np0005535469 podman[475236]: 2025-11-25 18:10:32.692958445 +0000 UTC m=+0.103076908 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 13:10:32 np0005535469 podman[475237]: 2025-11-25 18:10:32.697524548 +0000 UTC m=+0.103630113 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:10:32 np0005535469 nova_compute[254092]: 2025-11-25 18:10:32.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4433: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:10:35 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:10:35 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092278912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:10:35 np0005535469 nova_compute[254092]: 2025-11-25 18:10:35.951 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.087 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.088 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3599MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.088 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.088 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.164 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.165 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.180 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:10:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4434: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:10:36 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234527518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.608 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.614 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.649 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.651 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:10:36 np0005535469 nova_compute[254092]: 2025-11-25 18:10:36.651 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.646 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.646 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.647 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.647 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.659 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.659 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.659 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:37 np0005535469 nova_compute[254092]: 2025-11-25 18:10:37.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4435: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:10:40 np0005535469 nova_compute[254092]: 2025-11-25 18:10:40.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:10:40
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4436: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:10:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:10:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:10:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:10:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:10:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:41 np0005535469 nova_compute[254092]: 2025-11-25 18:10:41.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:41 np0005535469 nova_compute[254092]: 2025-11-25 18:10:41.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:10:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4437: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:42 np0005535469 nova_compute[254092]: 2025-11-25 18:10:42.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4438: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:45 np0005535469 nova_compute[254092]: 2025-11-25 18:10:45.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4439: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:47 np0005535469 nova_compute[254092]: 2025-11-25 18:10:47.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:48 np0005535469 nova_compute[254092]: 2025-11-25 18:10:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4440: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.763066) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094248763145, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1268, "num_deletes": 251, "total_data_size": 1990005, "memory_usage": 2022336, "flush_reason": "Manual Compaction"}
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094248876825, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 1960571, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89994, "largest_seqno": 91261, "table_properties": {"data_size": 1954473, "index_size": 3428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12656, "raw_average_key_size": 19, "raw_value_size": 1942322, "raw_average_value_size": 3044, "num_data_blocks": 154, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094117, "oldest_key_time": 1764094117, "file_creation_time": 1764094248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 113817 microseconds, and 5854 cpu microseconds.
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.876892) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 1960571 bytes OK
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.876922) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.884969) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.885028) EVENT_LOG_v1 {"time_micros": 1764094248885014, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.885053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1984286, prev total WAL file size 1984286, number of live WAL files 2.
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.886021) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(1914KB)], [215(10MB)]
Nov 25 13:10:48 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094248886100, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 12601090, "oldest_snapshot_seqno": -1}
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 10274 keys, 10869408 bytes, temperature: kUnknown
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094249153259, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 10869408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10807234, "index_size": 35373, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25733, "raw_key_size": 272672, "raw_average_key_size": 26, "raw_value_size": 10629907, "raw_average_value_size": 1034, "num_data_blocks": 1351, "num_entries": 10274, "num_filter_entries": 10274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.153539) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 10869408 bytes
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.171441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.2 rd, 40.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 10.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(12.0) write-amplify(5.5) OK, records in: 10788, records dropped: 514 output_compression: NoCompression
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.171484) EVENT_LOG_v1 {"time_micros": 1764094249171469, "job": 136, "event": "compaction_finished", "compaction_time_micros": 267255, "compaction_time_cpu_micros": 35471, "output_level": 6, "num_output_files": 1, "total_output_size": 10869408, "num_input_records": 10788, "num_output_records": 10274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094249172062, "job": 136, "event": "table_file_deletion", "file_number": 217}
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094249174134, "job": 136, "event": "table_file_deletion", "file_number": 215}
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.885881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:10:49 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:10:50 np0005535469 nova_compute[254092]: 2025-11-25 18:10:50.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4441: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:51 np0005535469 nova_compute[254092]: 2025-11-25 18:10:51.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4442: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:10:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:10:52 np0005535469 nova_compute[254092]: 2025-11-25 18:10:52.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4443: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:55 np0005535469 nova_compute[254092]: 2025-11-25 18:10:55.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:10:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4444: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:10:57 np0005535469 nova_compute[254092]: 2025-11-25 18:10:57.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:10:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4445: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:00 np0005535469 nova_compute[254092]: 2025-11-25 18:11:00.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4446: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4447: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:02 np0005535469 nova_compute[254092]: 2025-11-25 18:11:02.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:03 np0005535469 podman[475346]: 2025-11-25 18:11:03.636264254 +0000 UTC m=+0.056234896 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:11:03 np0005535469 podman[475347]: 2025-11-25 18:11:03.656685549 +0000 UTC m=+0.061777488 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 13:11:03 np0005535469 podman[475348]: 2025-11-25 18:11:03.700360354 +0000 UTC m=+0.103193161 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 13:11:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4448: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:05 np0005535469 nova_compute[254092]: 2025-11-25 18:11:05.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4449: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:07 np0005535469 nova_compute[254092]: 2025-11-25 18:11:07.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4450: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:11:10 np0005535469 nova_compute[254092]: 2025-11-25 18:11:10.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4451: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4452: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:12 np0005535469 nova_compute[254092]: 2025-11-25 18:11:12.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:11:13.720 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:11:13.720 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:11:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:11:13.720 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:11:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4453: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:15 np0005535469 nova_compute[254092]: 2025-11-25 18:11:15.232 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4454: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:17 np0005535469 podman[475581]: 2025-11-25 18:11:17.268322158 +0000 UTC m=+0.239552919 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 13:11:17 np0005535469 podman[475581]: 2025-11-25 18:11:17.363984941 +0000 UTC m=+0.335215672 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:11:18 np0005535469 nova_compute[254092]: 2025-11-25 18:11:18.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4455: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f9b0789-35e5-48e9-be4b-f53909fc8f47 does not exist
Nov 25 13:11:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 2852e686-8346-47be-836a-a5f7d68ab795 does not exist
Nov 25 13:11:18 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c17a1d97-e91b-428d-83e6-a2701e9c9804 does not exist
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:11:18 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:11:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:11:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:19 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.417885686 +0000 UTC m=+0.038308934 container create 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 13:11:19 np0005535469 systemd[1]: Started libpod-conmon-0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f.scope.
Nov 25 13:11:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.401186802 +0000 UTC m=+0.021610070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.504443801 +0000 UTC m=+0.124867079 container init 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.516108379 +0000 UTC m=+0.136531617 container start 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.519083059 +0000 UTC m=+0.139506297 container attach 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:11:19 np0005535469 naughty_cannon[476026]: 167 167
Nov 25 13:11:19 np0005535469 systemd[1]: libpod-0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f.scope: Deactivated successfully.
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.521521896 +0000 UTC m=+0.141945144 container died 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 13:11:19 np0005535469 systemd[1]: var-lib-containers-storage-overlay-3b6a9c6744e5d021d780aca84c06b39515842bde5dee5d7c73e555dd15575160-merged.mount: Deactivated successfully.
Nov 25 13:11:19 np0005535469 podman[476011]: 2025-11-25 18:11:19.558951124 +0000 UTC m=+0.179374362 container remove 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 13:11:19 np0005535469 systemd[1]: libpod-conmon-0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f.scope: Deactivated successfully.
Nov 25 13:11:19 np0005535469 podman[476048]: 2025-11-25 18:11:19.713725215 +0000 UTC m=+0.041509120 container create 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 13:11:19 np0005535469 systemd[1]: Started libpod-conmon-7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695.scope.
Nov 25 13:11:19 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:11:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:19 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:19 np0005535469 podman[476048]: 2025-11-25 18:11:19.783513804 +0000 UTC m=+0.111297699 container init 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:11:19 np0005535469 podman[476048]: 2025-11-25 18:11:19.791728318 +0000 UTC m=+0.119512233 container start 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:11:19 np0005535469 podman[476048]: 2025-11-25 18:11:19.697980127 +0000 UTC m=+0.025764052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:11:19 np0005535469 podman[476048]: 2025-11-25 18:11:19.795422918 +0000 UTC m=+0.123206843 container attach 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 13:11:20 np0005535469 nova_compute[254092]: 2025-11-25 18:11:20.235 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4456: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:20 np0005535469 nostalgic_ishizaka[476065]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:11:20 np0005535469 nostalgic_ishizaka[476065]: --> relative data size: 1.0
Nov 25 13:11:20 np0005535469 nostalgic_ishizaka[476065]: --> All data devices are unavailable
Nov 25 13:11:20 np0005535469 systemd[1]: libpod-7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695.scope: Deactivated successfully.
Nov 25 13:11:20 np0005535469 podman[476048]: 2025-11-25 18:11:20.795624934 +0000 UTC m=+1.123408839 container died 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 13:11:20 np0005535469 systemd[1]: var-lib-containers-storage-overlay-bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9-merged.mount: Deactivated successfully.
Nov 25 13:11:20 np0005535469 podman[476048]: 2025-11-25 18:11:20.847508275 +0000 UTC m=+1.175292180 container remove 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:11:20 np0005535469 systemd[1]: libpod-conmon-7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695.scope: Deactivated successfully.
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.324001) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281324041, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 511, "num_deletes": 251, "total_data_size": 511026, "memory_usage": 520528, "flush_reason": "Manual Compaction"}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281328201, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 507627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91262, "largest_seqno": 91772, "table_properties": {"data_size": 504673, "index_size": 925, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 5886, "raw_average_key_size": 16, "raw_value_size": 498914, "raw_average_value_size": 1389, "num_data_blocks": 41, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094249, "oldest_key_time": 1764094249, "file_creation_time": 1764094281, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 4230 microseconds, and 1794 cpu microseconds.
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.328232) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 507627 bytes OK
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.328247) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329370) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329383) EVENT_LOG_v1 {"time_micros": 1764094281329378, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 508062, prev total WAL file size 508062, number of live WAL files 2.
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(495KB)], [218(10MB)]
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281329776, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 11377035, "oldest_snapshot_seqno": -1}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 10119 keys, 10360719 bytes, temperature: kUnknown
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281396125, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 10360719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10299785, "index_size": 34517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 271142, "raw_average_key_size": 26, "raw_value_size": 10125088, "raw_average_value_size": 1000, "num_data_blocks": 1299, "num_entries": 10119, "num_filter_entries": 10119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094281, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.397134) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 10360719 bytes
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.39785719 +0000 UTC m=+0.048853441 container create a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.398147) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 154.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.8) write-amplify(20.4) OK, records in: 10633, records dropped: 514 output_compression: NoCompression
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.398179) EVENT_LOG_v1 {"time_micros": 1764094281398167, "job": 138, "event": "compaction_finished", "compaction_time_micros": 67146, "compaction_time_cpu_micros": 34503, "output_level": 6, "num_output_files": 1, "total_output_size": 10360719, "num_input_records": 10633, "num_output_records": 10119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281398349, "job": 138, "event": "table_file_deletion", "file_number": 220}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281400124, "job": 138, "event": "table_file_deletion", "file_number": 218}
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:11:21 np0005535469 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 13:11:21 np0005535469 systemd[1]: Started libpod-conmon-a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf.scope.
Nov 25 13:11:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.36812085 +0000 UTC m=+0.019117011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.477275231 +0000 UTC m=+0.128271372 container init a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.484868337 +0000 UTC m=+0.135864468 container start a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.487749296 +0000 UTC m=+0.138745457 container attach a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:11:21 np0005535469 gracious_banzai[476260]: 167 167
Nov 25 13:11:21 np0005535469 systemd[1]: libpod-a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf.scope: Deactivated successfully.
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.490504661 +0000 UTC m=+0.141500792 container died a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:11:21 np0005535469 systemd[1]: var-lib-containers-storage-overlay-d99070f003d0f7a269b1365556165201f6f36c4ab57151d373531c750b22a317-merged.mount: Deactivated successfully.
Nov 25 13:11:21 np0005535469 podman[476244]: 2025-11-25 18:11:21.523941441 +0000 UTC m=+0.174937572 container remove a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:11:21 np0005535469 systemd[1]: libpod-conmon-a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf.scope: Deactivated successfully.
Nov 25 13:11:21 np0005535469 podman[476283]: 2025-11-25 18:11:21.670389925 +0000 UTC m=+0.035983590 container create 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 13:11:21 np0005535469 systemd[1]: Started libpod-conmon-56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209.scope.
Nov 25 13:11:21 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:11:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:21 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:21 np0005535469 podman[476283]: 2025-11-25 18:11:21.731665052 +0000 UTC m=+0.097258717 container init 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 13:11:21 np0005535469 podman[476283]: 2025-11-25 18:11:21.739817764 +0000 UTC m=+0.105411429 container start 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:11:21 np0005535469 podman[476283]: 2025-11-25 18:11:21.742925719 +0000 UTC m=+0.108519384 container attach 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 13:11:21 np0005535469 podman[476283]: 2025-11-25 18:11:21.655032087 +0000 UTC m=+0.020625772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:11:22 np0005535469 eager_curran[476300]: {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:    "0": [
Nov 25 13:11:22 np0005535469 eager_curran[476300]:        {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "devices": [
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "/dev/loop3"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            ],
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_name": "ceph_lv0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_size": "21470642176",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "name": "ceph_lv0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "tags": {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cluster_name": "ceph",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.crush_device_class": "",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.encrypted": "0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osd_id": "0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.type": "block",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.vdo": "0"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            },
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "type": "block",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "vg_name": "ceph_vg0"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:        }
Nov 25 13:11:22 np0005535469 eager_curran[476300]:    ],
Nov 25 13:11:22 np0005535469 eager_curran[476300]:    "1": [
Nov 25 13:11:22 np0005535469 eager_curran[476300]:        {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "devices": [
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "/dev/loop4"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            ],
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_name": "ceph_lv1",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_size": "21470642176",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "name": "ceph_lv1",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "tags": {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cluster_name": "ceph",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.crush_device_class": "",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.encrypted": "0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osd_id": "1",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.type": "block",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.vdo": "0"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            },
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "type": "block",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "vg_name": "ceph_vg1"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:        }
Nov 25 13:11:22 np0005535469 eager_curran[476300]:    ],
Nov 25 13:11:22 np0005535469 eager_curran[476300]:    "2": [
Nov 25 13:11:22 np0005535469 eager_curran[476300]:        {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "devices": [
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "/dev/loop5"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            ],
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_name": "ceph_lv2",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_size": "21470642176",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "name": "ceph_lv2",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "tags": {
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.cluster_name": "ceph",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.crush_device_class": "",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.encrypted": "0",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osd_id": "2",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.type": "block",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:                "ceph.vdo": "0"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            },
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "type": "block",
Nov 25 13:11:22 np0005535469 eager_curran[476300]:            "vg_name": "ceph_vg2"
Nov 25 13:11:22 np0005535469 eager_curran[476300]:        }
Nov 25 13:11:22 np0005535469 eager_curran[476300]:    ]
Nov 25 13:11:22 np0005535469 eager_curran[476300]: }
Nov 25 13:11:22 np0005535469 systemd[1]: libpod-56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209.scope: Deactivated successfully.
Nov 25 13:11:22 np0005535469 podman[476283]: 2025-11-25 18:11:22.481767201 +0000 UTC m=+0.847360866 container died 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 13:11:22 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29-merged.mount: Deactivated successfully.
Nov 25 13:11:22 np0005535469 podman[476283]: 2025-11-25 18:11:22.531071323 +0000 UTC m=+0.896664988 container remove 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 13:11:22 np0005535469 systemd[1]: libpod-conmon-56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209.scope: Deactivated successfully.
Nov 25 13:11:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4457: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:23 np0005535469 nova_compute[254092]: 2025-11-25 18:11:23.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.05096948 +0000 UTC m=+0.035544599 container create 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:11:23 np0005535469 systemd[1]: Started libpod-conmon-9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab.scope.
Nov 25 13:11:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.123258826 +0000 UTC m=+0.107833975 container init 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.12999111 +0000 UTC m=+0.114566239 container start 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.03482983 +0000 UTC m=+0.019404969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.133014382 +0000 UTC m=+0.117589511 container attach 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:11:23 np0005535469 silly_jones[476480]: 167 167
Nov 25 13:11:23 np0005535469 systemd[1]: libpod-9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab.scope: Deactivated successfully.
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.135851279 +0000 UTC m=+0.120426408 container died 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:11:23 np0005535469 systemd[1]: var-lib-containers-storage-overlay-da8e9da3f0a1358d316ae2895f9fcdcdfde6a34bbc77ad74cf58f418d9ed8f97-merged.mount: Deactivated successfully.
Nov 25 13:11:23 np0005535469 podman[476463]: 2025-11-25 18:11:23.170365688 +0000 UTC m=+0.154940817 container remove 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 13:11:23 np0005535469 systemd[1]: libpod-conmon-9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab.scope: Deactivated successfully.
Nov 25 13:11:23 np0005535469 podman[476502]: 2025-11-25 18:11:23.320248057 +0000 UTC m=+0.039084595 container create 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:11:23 np0005535469 systemd[1]: Started libpod-conmon-21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e.scope.
Nov 25 13:11:23 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:11:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:23 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:11:23 np0005535469 podman[476502]: 2025-11-25 18:11:23.396219034 +0000 UTC m=+0.115055602 container init 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 13:11:23 np0005535469 podman[476502]: 2025-11-25 18:11:23.302986117 +0000 UTC m=+0.021822695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:11:23 np0005535469 podman[476502]: 2025-11-25 18:11:23.402664529 +0000 UTC m=+0.121501067 container start 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:11:23 np0005535469 podman[476502]: 2025-11-25 18:11:23.405563918 +0000 UTC m=+0.124400466 container attach 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]: {
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "osd_id": 1,
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "type": "bluestore"
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:    },
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "osd_id": 2,
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "type": "bluestore"
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:    },
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "osd_id": 0,
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:        "type": "bluestore"
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]:    }
Nov 25 13:11:24 np0005535469 vigorous_yalow[476518]: }
Nov 25 13:11:24 np0005535469 systemd[1]: libpod-21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e.scope: Deactivated successfully.
Nov 25 13:11:24 np0005535469 podman[476502]: 2025-11-25 18:11:24.332739715 +0000 UTC m=+1.051576253 container died 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:11:24 np0005535469 systemd[1]: var-lib-containers-storage-overlay-f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742-merged.mount: Deactivated successfully.
Nov 25 13:11:24 np0005535469 podman[476502]: 2025-11-25 18:11:24.380898115 +0000 UTC m=+1.099734663 container remove 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 13:11:24 np0005535469 systemd[1]: libpod-conmon-21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e.scope: Deactivated successfully.
Nov 25 13:11:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:11:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:24 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:11:24 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 87a1cd80-0b93-4f3f-8d52-9b6f465333d0 does not exist
Nov 25 13:11:24 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ba4cbcd4-1145-43c4-b379-91b21880834f does not exist
Nov 25 13:11:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4458: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:25 np0005535469 nova_compute[254092]: 2025-11-25 18:11:25.238 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:11:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4459: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:28 np0005535469 nova_compute[254092]: 2025-11-25 18:11:28.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:28 np0005535469 nova_compute[254092]: 2025-11-25 18:11:28.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4460: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:30 np0005535469 nova_compute[254092]: 2025-11-25 18:11:30.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:30 np0005535469 nova_compute[254092]: 2025-11-25 18:11:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4461: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4462: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:33 np0005535469 nova_compute[254092]: 2025-11-25 18:11:33.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4463: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:34 np0005535469 podman[476615]: 2025-11-25 18:11:34.668471073 +0000 UTC m=+0.071188419 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 13:11:34 np0005535469 podman[476614]: 2025-11-25 18:11:34.669972273 +0000 UTC m=+0.081261892 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 25 13:11:34 np0005535469 podman[476616]: 2025-11-25 18:11:34.769577044 +0000 UTC m=+0.170452799 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 13:11:35 np0005535469 nova_compute[254092]: 2025-11-25 18:11:35.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:36 np0005535469 nova_compute[254092]: 2025-11-25 18:11:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4464: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:11:37 np0005535469 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 19K writes, 91K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.01 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.12 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1296 writes, 6150 keys, 1296 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s#012Interval WAL: 1296 writes, 1296 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.5      2.76              0.40        69    0.040       0      0       0.0       0.0#012  L6      1/0    9.88 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    120.2    102.8      5.86              1.90        68    0.086    526K    36K       0.0       0.0#012 Sum      1/0    9.88 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     81.7     82.8      8.62              2.30       137    0.063    526K    36K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     59.9     59.8      1.19              0.25        12    0.099     64K   3081       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    120.2    102.8      5.86              1.90        68    0.086    526K    36K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.2      2.72              0.40        68    0.040       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 8400.1 total, 600.0 interval#012Flush(GB): cumulative 0.109, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.70 GB write, 0.09 MB/s write, 0.69 GB read, 0.08 MB/s read, 8.6 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 79.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.000458 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(5090,76.03 MB,25.0108%) FilterBlock(138,1.39 MB,0.458742%) IndexBlock(138,2.22 MB,0.730168%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:11:37 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:11:37 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221870468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:11:37 np0005535469 nova_compute[254092]: 2025-11-25 18:11:37.997 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.213 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.214 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3606MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.214 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.214 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.271 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.272 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.285 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 13:11:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4465: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.722 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.723 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.799 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.830 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 13:11:38 np0005535469 nova_compute[254092]: 2025-11-25 18:11:38.844 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:11:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:11:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291437619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:11:39 np0005535469 nova_compute[254092]: 2025-11-25 18:11:39.253 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:11:39 np0005535469 nova_compute[254092]: 2025-11-25 18:11:39.260 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:11:39 np0005535469 nova_compute[254092]: 2025-11-25 18:11:39.273 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:11:39 np0005535469 nova_compute[254092]: 2025-11-25 18:11:39.275 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:11:39 np0005535469 nova_compute[254092]: 2025-11-25 18:11:39.275 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:11:40 np0005535469 nova_compute[254092]: 2025-11-25 18:11:40.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:11:40
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.meta', '.mgr']
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4466: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:11:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:11:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:11:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:11:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:11:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:11:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:11:41 np0005535469 nova_compute[254092]: 2025-11-25 18:11:41.276 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:41 np0005535469 nova_compute[254092]: 2025-11-25 18:11:41.276 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:11:41 np0005535469 nova_compute[254092]: 2025-11-25 18:11:41.276 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:11:41 np0005535469 nova_compute[254092]: 2025-11-25 18:11:41.299 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:11:41 np0005535469 nova_compute[254092]: 2025-11-25 18:11:41.299 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4467: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:43 np0005535469 nova_compute[254092]: 2025-11-25 18:11:43.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:43 np0005535469 nova_compute[254092]: 2025-11-25 18:11:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:43 np0005535469 nova_compute[254092]: 2025-11-25 18:11:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:11:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4468: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:45 np0005535469 nova_compute[254092]: 2025-11-25 18:11:45.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4469: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:48 np0005535469 nova_compute[254092]: 2025-11-25 18:11:48.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4470: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:50 np0005535469 nova_compute[254092]: 2025-11-25 18:11:50.253 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:50 np0005535469 nova_compute[254092]: 2025-11-25 18:11:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:11:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4471: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4472: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:11:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:11:53 np0005535469 nova_compute[254092]: 2025-11-25 18:11:53.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4473: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:55 np0005535469 nova_compute[254092]: 2025-11-25 18:11:55.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:11:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3157365552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:11:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:11:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3157365552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:11:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:11:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4474: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:11:58 np0005535469 nova_compute[254092]: 2025-11-25 18:11:58.099 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:11:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4475: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:00 np0005535469 nova_compute[254092]: 2025-11-25 18:12:00.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4476: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4477: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:03 np0005535469 nova_compute[254092]: 2025-11-25 18:12:03.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4478: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:05 np0005535469 nova_compute[254092]: 2025-11-25 18:12:05.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:05 np0005535469 podman[476723]: 2025-11-25 18:12:05.640697692 +0000 UTC m=+0.052097829 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:12:05 np0005535469 podman[476722]: 2025-11-25 18:12:05.654050084 +0000 UTC m=+0.061590546 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:12:05 np0005535469 podman[476724]: 2025-11-25 18:12:05.701358482 +0000 UTC m=+0.094753789 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 13:12:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4479: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:08 np0005535469 nova_compute[254092]: 2025-11-25 18:12:08.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4480: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:12:10 np0005535469 nova_compute[254092]: 2025-11-25 18:12:10.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4481: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4482: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:13 np0005535469 nova_compute[254092]: 2025-11-25 18:12:13.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:12:13.721 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:12:13.721 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:12:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:12:13.721 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:12:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4483: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:15 np0005535469 nova_compute[254092]: 2025-11-25 18:12:15.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4484: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:18 np0005535469 nova_compute[254092]: 2025-11-25 18:12:18.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4485: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:20 np0005535469 nova_compute[254092]: 2025-11-25 18:12:20.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4486: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4487: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:23 np0005535469 nova_compute[254092]: 2025-11-25 18:12:23.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4488: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:12:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ff0c8eb1-2831-4c4a-b717-3454a8ee23fb does not exist
Nov 25 13:12:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e49c5826-e3a1-4a46-bcb6-e2511b11090f does not exist
Nov 25 13:12:25 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev c0cc8b11-0e75-48c0-a1b3-2d832e456cbf does not exist
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:12:25 np0005535469 nova_compute[254092]: 2025-11-25 18:12:25.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.851993283 +0000 UTC m=+0.036968557 container create 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 13:12:25 np0005535469 systemd[1]: Started libpod-conmon-63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868.scope.
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:12:25 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:12:25 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.835569727 +0000 UTC m=+0.020545111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.942001573 +0000 UTC m=+0.126976867 container init 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.948791047 +0000 UTC m=+0.133766321 container start 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.95182906 +0000 UTC m=+0.136804344 container attach 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 13:12:25 np0005535469 angry_dijkstra[477073]: 167 167
Nov 25 13:12:25 np0005535469 systemd[1]: libpod-63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868.scope: Deactivated successfully.
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.954933524 +0000 UTC m=+0.139908798 container died 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 13:12:25 np0005535469 systemd[1]: var-lib-containers-storage-overlay-1125b769e919610d0ddd2fbc5edf09cd6f68f035850719f0917eec825e691bdf-merged.mount: Deactivated successfully.
Nov 25 13:12:25 np0005535469 podman[477057]: 2025-11-25 18:12:25.991835308 +0000 UTC m=+0.176810582 container remove 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 13:12:25 np0005535469 systemd[1]: libpod-conmon-63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868.scope: Deactivated successfully.
Nov 25 13:12:26 np0005535469 podman[477097]: 2025-11-25 18:12:26.177314735 +0000 UTC m=+0.040096182 container create 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 13:12:26 np0005535469 systemd[1]: Started libpod-conmon-05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae.scope.
Nov 25 13:12:26 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:26 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:26 np0005535469 podman[477097]: 2025-11-25 18:12:26.162022069 +0000 UTC m=+0.024803536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:12:26 np0005535469 podman[477097]: 2025-11-25 18:12:26.259353227 +0000 UTC m=+0.122134684 container init 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 13:12:26 np0005535469 podman[477097]: 2025-11-25 18:12:26.269787892 +0000 UTC m=+0.132569339 container start 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 13:12:26 np0005535469 podman[477097]: 2025-11-25 18:12:26.273301727 +0000 UTC m=+0.136083194 container attach 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 13:12:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4489: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:27 np0005535469 quizzical_cori[477114]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:12:27 np0005535469 quizzical_cori[477114]: --> relative data size: 1.0
Nov 25 13:12:27 np0005535469 quizzical_cori[477114]: --> All data devices are unavailable
Nov 25 13:12:27 np0005535469 systemd[1]: libpod-05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae.scope: Deactivated successfully.
Nov 25 13:12:27 np0005535469 podman[477143]: 2025-11-25 18:12:27.316289685 +0000 UTC m=+0.024401905 container died 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 13:12:27 np0005535469 systemd[1]: var-lib-containers-storage-overlay-5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2-merged.mount: Deactivated successfully.
Nov 25 13:12:27 np0005535469 podman[477143]: 2025-11-25 18:12:27.36387424 +0000 UTC m=+0.071986420 container remove 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:12:27 np0005535469 systemd[1]: libpod-conmon-05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae.scope: Deactivated successfully.
Nov 25 13:12:27 np0005535469 podman[477297]: 2025-11-25 18:12:27.925455251 +0000 UTC m=+0.042156738 container create 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 13:12:27 np0005535469 systemd[1]: Started libpod-conmon-81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805.scope.
Nov 25 13:12:27 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:12:28 np0005535469 podman[477297]: 2025-11-25 18:12:27.904361966 +0000 UTC m=+0.021063483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:12:28 np0005535469 podman[477297]: 2025-11-25 18:12:28.010418552 +0000 UTC m=+0.127120329 container init 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:12:28 np0005535469 podman[477297]: 2025-11-25 18:12:28.016479307 +0000 UTC m=+0.133180794 container start 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:12:28 np0005535469 kind_jemison[477313]: 167 167
Nov 25 13:12:28 np0005535469 systemd[1]: libpod-81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805.scope: Deactivated successfully.
Nov 25 13:12:28 np0005535469 podman[477297]: 2025-11-25 18:12:28.023431667 +0000 UTC m=+0.140133174 container attach 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 13:12:28 np0005535469 podman[477297]: 2025-11-25 18:12:28.02390946 +0000 UTC m=+0.140610967 container died 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:12:28 np0005535469 systemd[1]: var-lib-containers-storage-overlay-c8dae33bbe7c5adfeed93bfcdc01a767524a5f9384bdba3f23bcc2bfa951309a-merged.mount: Deactivated successfully.
Nov 25 13:12:28 np0005535469 podman[477297]: 2025-11-25 18:12:28.081504396 +0000 UTC m=+0.198205883 container remove 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:12:28 np0005535469 systemd[1]: libpod-conmon-81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805.scope: Deactivated successfully.
Nov 25 13:12:28 np0005535469 nova_compute[254092]: 2025-11-25 18:12:28.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:28 np0005535469 podman[477338]: 2025-11-25 18:12:28.260083385 +0000 UTC m=+0.050984658 container create 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:12:28 np0005535469 systemd[1]: Started libpod-conmon-3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba.scope.
Nov 25 13:12:28 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:12:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:28 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:28 np0005535469 podman[477338]: 2025-11-25 18:12:28.231833237 +0000 UTC m=+0.022734520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:12:28 np0005535469 podman[477338]: 2025-11-25 18:12:28.34624567 +0000 UTC m=+0.137147093 container init 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 13:12:28 np0005535469 podman[477338]: 2025-11-25 18:12:28.352734696 +0000 UTC m=+0.143635989 container start 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:12:28 np0005535469 podman[477338]: 2025-11-25 18:12:28.358964996 +0000 UTC m=+0.149866279 container attach 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 13:12:28 np0005535469 nova_compute[254092]: 2025-11-25 18:12:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4490: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:29 np0005535469 eager_hellman[477354]: {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:    "0": [
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:        {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "devices": [
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "/dev/loop3"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            ],
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_name": "ceph_lv0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_size": "21470642176",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "name": "ceph_lv0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "tags": {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cluster_name": "ceph",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.crush_device_class": "",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.encrypted": "0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osd_id": "0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.type": "block",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.vdo": "0"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            },
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "type": "block",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "vg_name": "ceph_vg0"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:        }
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:    ],
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:    "1": [
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:        {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "devices": [
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "/dev/loop4"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            ],
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_name": "ceph_lv1",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_size": "21470642176",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "name": "ceph_lv1",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "tags": {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cluster_name": "ceph",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.crush_device_class": "",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.encrypted": "0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osd_id": "1",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.type": "block",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.vdo": "0"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            },
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "type": "block",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "vg_name": "ceph_vg1"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:        }
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:    ],
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:    "2": [
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:        {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "devices": [
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "/dev/loop5"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            ],
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_name": "ceph_lv2",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_size": "21470642176",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "name": "ceph_lv2",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "tags": {
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.cluster_name": "ceph",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.crush_device_class": "",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.encrypted": "0",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osd_id": "2",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.type": "block",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:                "ceph.vdo": "0"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            },
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "type": "block",
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:            "vg_name": "ceph_vg2"
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:        }
Nov 25 13:12:29 np0005535469 eager_hellman[477354]:    ]
Nov 25 13:12:29 np0005535469 eager_hellman[477354]: }
Nov 25 13:12:29 np0005535469 systemd[1]: libpod-3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba.scope: Deactivated successfully.
Nov 25 13:12:29 np0005535469 podman[477338]: 2025-11-25 18:12:29.198738815 +0000 UTC m=+0.989640088 container died 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 13:12:29 np0005535469 systemd[1]: var-lib-containers-storage-overlay-42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596-merged.mount: Deactivated successfully.
Nov 25 13:12:29 np0005535469 podman[477338]: 2025-11-25 18:12:29.261334558 +0000 UTC m=+1.052235831 container remove 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:12:29 np0005535469 systemd[1]: libpod-conmon-3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba.scope: Deactivated successfully.
Nov 25 13:12:29 np0005535469 podman[477514]: 2025-11-25 18:12:29.935293737 +0000 UTC m=+0.041956253 container create 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:12:29 np0005535469 systemd[1]: Started libpod-conmon-0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf.scope.
Nov 25 13:12:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:12:30 np0005535469 podman[477514]: 2025-11-25 18:12:29.916581107 +0000 UTC m=+0.023243713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:12:30 np0005535469 podman[477514]: 2025-11-25 18:12:30.01923137 +0000 UTC m=+0.125893896 container init 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:12:30 np0005535469 podman[477514]: 2025-11-25 18:12:30.026310084 +0000 UTC m=+0.132972600 container start 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:12:30 np0005535469 cool_austin[477530]: 167 167
Nov 25 13:12:30 np0005535469 systemd[1]: libpod-0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf.scope: Deactivated successfully.
Nov 25 13:12:30 np0005535469 podman[477514]: 2025-11-25 18:12:30.030806365 +0000 UTC m=+0.137468911 container attach 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 13:12:30 np0005535469 podman[477514]: 2025-11-25 18:12:30.031653538 +0000 UTC m=+0.138316064 container died 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:12:30 np0005535469 systemd[1]: var-lib-containers-storage-overlay-0d9b3bde585c38faa1846b412a72ac237b3a3ec01657986504ddf75e0a18ffeb-merged.mount: Deactivated successfully.
Nov 25 13:12:30 np0005535469 podman[477514]: 2025-11-25 18:12:30.066016164 +0000 UTC m=+0.172678680 container remove 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:12:30 np0005535469 systemd[1]: libpod-conmon-0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf.scope: Deactivated successfully.
Nov 25 13:12:30 np0005535469 podman[477554]: 2025-11-25 18:12:30.247656166 +0000 UTC m=+0.041878600 container create e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 13:12:30 np0005535469 systemd[1]: Started libpod-conmon-e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f.scope.
Nov 25 13:12:30 np0005535469 nova_compute[254092]: 2025-11-25 18:12:30.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:30 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:30 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:12:30 np0005535469 podman[477554]: 2025-11-25 18:12:30.228408682 +0000 UTC m=+0.022631136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:12:30 np0005535469 podman[477554]: 2025-11-25 18:12:30.337985674 +0000 UTC m=+0.132208128 container init e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 13:12:30 np0005535469 podman[477554]: 2025-11-25 18:12:30.343528135 +0000 UTC m=+0.137750569 container start e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:12:30 np0005535469 podman[477554]: 2025-11-25 18:12:30.346994819 +0000 UTC m=+0.141217253 container attach e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 13:12:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4491: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:31 np0005535469 magical_neumann[477570]: {
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "osd_id": 1,
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "type": "bluestore"
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:    },
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "osd_id": 2,
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "type": "bluestore"
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:    },
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "osd_id": 0,
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:        "type": "bluestore"
Nov 25 13:12:31 np0005535469 magical_neumann[477570]:    }
Nov 25 13:12:31 np0005535469 magical_neumann[477570]: }
Nov 25 13:12:31 np0005535469 systemd[1]: libpod-e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f.scope: Deactivated successfully.
Nov 25 13:12:31 np0005535469 podman[477554]: 2025-11-25 18:12:31.289036351 +0000 UTC m=+1.083258815 container died e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 13:12:31 np0005535469 systemd[1]: var-lib-containers-storage-overlay-934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8-merged.mount: Deactivated successfully.
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:31 np0005535469 podman[477554]: 2025-11-25 18:12:31.343201145 +0000 UTC m=+1.137423569 container remove e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 13:12:31 np0005535469 systemd[1]: libpod-conmon-e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f.scope: Deactivated successfully.
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:12:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 183bea00-64fb-44eb-837f-599cacecafc8 does not exist
Nov 25 13:12:31 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 17ebbdac-f163-49ba-a541-06e5d601d13b does not exist
Nov 25 13:12:31 np0005535469 nova_compute[254092]: 2025-11-25 18:12:31.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:12:31 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:12:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4492: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:33 np0005535469 nova_compute[254092]: 2025-11-25 18:12:33.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4493: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:35 np0005535469 nova_compute[254092]: 2025-11-25 18:12:35.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:36 np0005535469 nova_compute[254092]: 2025-11-25 18:12:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4494: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:36 np0005535469 podman[477666]: 2025-11-25 18:12:36.64152874 +0000 UTC m=+0.055874301 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 13:12:36 np0005535469 podman[477665]: 2025-11-25 18:12:36.643590286 +0000 UTC m=+0.059504020 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 13:12:36 np0005535469 podman[477667]: 2025-11-25 18:12:36.671477345 +0000 UTC m=+0.084429338 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 13:12:38 np0005535469 nova_compute[254092]: 2025-11-25 18:12:38.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4495: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:12:39 np0005535469 nova_compute[254092]: 2025-11-25 18:12:39.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:12:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:12:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280637257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.017 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.155 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.156 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.157 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.157 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.245 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.272 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:12:40
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4496: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:12:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/805166120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.688 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.694 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.707 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.708 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:12:40 np0005535469 nova_compute[254092]: 2025-11-25 18:12:40.709 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:12:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:12:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:12:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:12:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:12:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:12:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:12:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:41 np0005535469 nova_compute[254092]: 2025-11-25 18:12:41.709 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:41 np0005535469 nova_compute[254092]: 2025-11-25 18:12:41.710 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:12:41 np0005535469 nova_compute[254092]: 2025-11-25 18:12:41.710 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:12:41 np0005535469 nova_compute[254092]: 2025-11-25 18:12:41.731 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:12:41 np0005535469 nova_compute[254092]: 2025-11-25 18:12:41.732 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4497: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:43 np0005535469 nova_compute[254092]: 2025-11-25 18:12:43.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:43 np0005535469 nova_compute[254092]: 2025-11-25 18:12:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:43 np0005535469 nova_compute[254092]: 2025-11-25 18:12:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:12:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4498: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:45 np0005535469 nova_compute[254092]: 2025-11-25 18:12:45.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4499: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:48 np0005535469 nova_compute[254092]: 2025-11-25 18:12:48.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4500: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:50 np0005535469 nova_compute[254092]: 2025-11-25 18:12:50.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4501: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:52 np0005535469 nova_compute[254092]: 2025-11-25 18:12:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4502: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:12:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:12:53 np0005535469 nova_compute[254092]: 2025-11-25 18:12:53.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:53 np0005535469 nova_compute[254092]: 2025-11-25 18:12:53.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:12:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4503: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:55 np0005535469 nova_compute[254092]: 2025-11-25 18:12:55.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:12:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133286245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:12:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:12:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133286245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:12:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:12:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4504: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:12:58 np0005535469 nova_compute[254092]: 2025-11-25 18:12:58.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:12:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4505: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:00 np0005535469 nova_compute[254092]: 2025-11-25 18:13:00.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4506: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4507: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:03 np0005535469 nova_compute[254092]: 2025-11-25 18:13:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4508: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:05 np0005535469 nova_compute[254092]: 2025-11-25 18:13:05.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4509: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:07 np0005535469 podman[477776]: 2025-11-25 18:13:07.658182034 +0000 UTC m=+0.060151948 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 13:13:07 np0005535469 podman[477775]: 2025-11-25 18:13:07.670955321 +0000 UTC m=+0.074862037 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 13:13:07 np0005535469 podman[477777]: 2025-11-25 18:13:07.714614939 +0000 UTC m=+0.110235270 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 13:13:08 np0005535469 nova_compute[254092]: 2025-11-25 18:13:08.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4510: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:13:10 np0005535469 nova_compute[254092]: 2025-11-25 18:13:10.416 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4511: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:11 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:12 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4512: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:13 np0005535469 nova_compute[254092]: 2025-11-25 18:13:13.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:13:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:13:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:13:13 np0005535469 ovn_metadata_agent[163333]: 2025-11-25 18:13:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:13:14 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4513: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:15 np0005535469 nova_compute[254092]: 2025-11-25 18:13:15.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:16 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:16 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4514: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:18 np0005535469 nova_compute[254092]: 2025-11-25 18:13:18.176 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:18 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4515: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:20 np0005535469 nova_compute[254092]: 2025-11-25 18:13:20.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:20 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4516: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:21 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:22 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4517: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:23 np0005535469 nova_compute[254092]: 2025-11-25 18:13:23.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:24 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4518: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:25 np0005535469 nova_compute[254092]: 2025-11-25 18:13:25.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:26 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:26 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4519: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:28 np0005535469 nova_compute[254092]: 2025-11-25 18:13:28.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:28 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4520: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:30 np0005535469 nova_compute[254092]: 2025-11-25 18:13:30.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:30 np0005535469 nova_compute[254092]: 2025-11-25 18:13:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:30 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4521: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:31 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:13:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 506dfbc4-b02d-40e4-9f7b-12cae849f91e does not exist
Nov 25 13:13:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev e9c30f05-2e2c-437c-b716-fcf14667ac53 does not exist
Nov 25 13:13:32 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev ebd1655f-681e-4a6c-b367-c77b17a271ed does not exist
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:13:32 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:13:32 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4522: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.797657828 +0000 UTC m=+0.056340654 container create ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:13:32 np0005535469 systemd[1]: Started libpod-conmon-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope.
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.766626863 +0000 UTC m=+0.025309799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:13:32 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.89256987 +0000 UTC m=+0.151252776 container init ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.901261257 +0000 UTC m=+0.159944083 container start ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.904556426 +0000 UTC m=+0.163239362 container attach ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:13:32 np0005535469 elated_turing[478126]: 167 167
Nov 25 13:13:32 np0005535469 systemd[1]: libpod-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope: Deactivated successfully.
Nov 25 13:13:32 np0005535469 conmon[478126]: conmon ce3270290fc65962c131 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope/container/memory.events
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.90985409 +0000 UTC m=+0.168536916 container died ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 13:13:32 np0005535469 systemd[1]: var-lib-containers-storage-overlay-dc69fb1fcff0c54633af0ccd9b9451cca57663cac7a7d0f05c1b8298f5b4ab59-merged.mount: Deactivated successfully.
Nov 25 13:13:32 np0005535469 podman[478110]: 2025-11-25 18:13:32.954653619 +0000 UTC m=+0.213336445 container remove ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 13:13:32 np0005535469 systemd[1]: libpod-conmon-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope: Deactivated successfully.
Nov 25 13:13:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 13:13:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:13:33 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 13:13:33 np0005535469 podman[478150]: 2025-11-25 18:13:33.144706581 +0000 UTC m=+0.056459157 container create 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:13:33 np0005535469 systemd[1]: Started libpod-conmon-7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708.scope.
Nov 25 13:13:33 np0005535469 nova_compute[254092]: 2025-11-25 18:13:33.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:33 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:13:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:33 np0005535469 podman[478150]: 2025-11-25 18:13:33.126966417 +0000 UTC m=+0.038719043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:13:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:33 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:33 np0005535469 podman[478150]: 2025-11-25 18:13:33.232694554 +0000 UTC m=+0.144447210 container init 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 13:13:33 np0005535469 podman[478150]: 2025-11-25 18:13:33.242881802 +0000 UTC m=+0.154634378 container start 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 25 13:13:33 np0005535469 podman[478150]: 2025-11-25 18:13:33.245986976 +0000 UTC m=+0.157739592 container attach 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 13:13:33 np0005535469 nova_compute[254092]: 2025-11-25 18:13:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 13:13:34 np0005535469 flamboyant_brahmagupta[478167]: --> passed data devices: 0 physical, 3 LVM
Nov 25 13:13:34 np0005535469 flamboyant_brahmagupta[478167]: --> relative data size: 1.0
Nov 25 13:13:34 np0005535469 flamboyant_brahmagupta[478167]: --> All data devices are unavailable
Nov 25 13:13:34 np0005535469 podman[478150]: 2025-11-25 18:13:34.26305087 +0000 UTC m=+1.174803456 container died 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 13:13:34 np0005535469 systemd[1]: libpod-7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708.scope: Deactivated successfully.
Nov 25 13:13:34 np0005535469 systemd[1]: var-lib-containers-storage-overlay-267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6-merged.mount: Deactivated successfully.
Nov 25 13:13:34 np0005535469 podman[478150]: 2025-11-25 18:13:34.34607624 +0000 UTC m=+1.257828856 container remove 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:13:34 np0005535469 systemd[1]: libpod-conmon-7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708.scope: Deactivated successfully.
Nov 25 13:13:34 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4523: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:34 np0005535469 podman[478350]: 2025-11-25 18:13:34.995443078 +0000 UTC m=+0.038559490 container create f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:13:35 np0005535469 systemd[1]: Started libpod-conmon-f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded.scope.
Nov 25 13:13:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:13:35 np0005535469 podman[478350]: 2025-11-25 18:13:35.071481636 +0000 UTC m=+0.114598098 container init f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 13:13:35 np0005535469 podman[478350]: 2025-11-25 18:13:34.980331397 +0000 UTC m=+0.023447829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:13:35 np0005535469 podman[478350]: 2025-11-25 18:13:35.077178402 +0000 UTC m=+0.120294814 container start f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 13:13:35 np0005535469 podman[478350]: 2025-11-25 18:13:35.079806893 +0000 UTC m=+0.122923355 container attach f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 13:13:35 np0005535469 agitated_allen[478367]: 167 167
Nov 25 13:13:35 np0005535469 systemd[1]: libpod-f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded.scope: Deactivated successfully.
Nov 25 13:13:35 np0005535469 podman[478350]: 2025-11-25 18:13:35.082601609 +0000 UTC m=+0.125718021 container died f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 13:13:35 np0005535469 systemd[1]: var-lib-containers-storage-overlay-99e181816fd1e7c9d8522431c82cd7b57d0fb776ad50201c44eab9394e3c33da-merged.mount: Deactivated successfully.
Nov 25 13:13:35 np0005535469 podman[478350]: 2025-11-25 18:13:35.119323419 +0000 UTC m=+0.162439831 container remove f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 13:13:35 np0005535469 systemd[1]: libpod-conmon-f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded.scope: Deactivated successfully.
Nov 25 13:13:35 np0005535469 podman[478391]: 2025-11-25 18:13:35.298217586 +0000 UTC m=+0.049303892 container create 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 13:13:35 np0005535469 systemd[1]: Started libpod-conmon-07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1.scope.
Nov 25 13:13:35 np0005535469 podman[478391]: 2025-11-25 18:13:35.272316361 +0000 UTC m=+0.023402677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:13:35 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:13:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:35 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:35 np0005535469 podman[478391]: 2025-11-25 18:13:35.410523202 +0000 UTC m=+0.161609488 container init 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:13:35 np0005535469 podman[478391]: 2025-11-25 18:13:35.419070215 +0000 UTC m=+0.170156501 container start 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 13:13:35 np0005535469 podman[478391]: 2025-11-25 18:13:35.422708664 +0000 UTC m=+0.173794990 container attach 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 13:13:35 np0005535469 nova_compute[254092]: 2025-11-25 18:13:35.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]: {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:    "0": [
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:        {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "devices": [
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "/dev/loop3"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            ],
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_name": "ceph_lv0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_size": "21470642176",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "name": "ceph_lv0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "tags": {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cluster_name": "ceph",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.crush_device_class": "",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.encrypted": "0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osd_id": "0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.type": "block",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.vdo": "0"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            },
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "type": "block",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "vg_name": "ceph_vg0"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:        }
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:    ],
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:    "1": [
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:        {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "devices": [
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "/dev/loop4"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            ],
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_name": "ceph_lv1",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_size": "21470642176",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "name": "ceph_lv1",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "tags": {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cluster_name": "ceph",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.crush_device_class": "",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.encrypted": "0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osd_id": "1",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.type": "block",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.vdo": "0"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            },
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "type": "block",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "vg_name": "ceph_vg1"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:        }
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:    ],
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:    "2": [
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:        {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "devices": [
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "/dev/loop5"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            ],
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_name": "ceph_lv2",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_size": "21470642176",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "name": "ceph_lv2",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "tags": {
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cephx_lockbox_secret": "",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.cluster_name": "ceph",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.crush_device_class": "",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.encrypted": "0",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osd_id": "2",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.type": "block",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:                "ceph.vdo": "0"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            },
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "type": "block",
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:            "vg_name": "ceph_vg2"
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:        }
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]:    ]
Nov 25 13:13:36 np0005535469 frosty_hugle[478407]: }
Nov 25 13:13:36 np0005535469 systemd[1]: libpod-07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1.scope: Deactivated successfully.
Nov 25 13:13:36 np0005535469 podman[478391]: 2025-11-25 18:13:36.170421698 +0000 UTC m=+0.921507964 container died 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 13:13:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe-merged.mount: Deactivated successfully.
Nov 25 13:13:36 np0005535469 podman[478391]: 2025-11-25 18:13:36.221810276 +0000 UTC m=+0.972896542 container remove 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:13:36 np0005535469 systemd[1]: libpod-conmon-07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1.scope: Deactivated successfully.
Nov 25 13:13:36 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:36 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4524: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.830527839 +0000 UTC m=+0.042450406 container create 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 13:13:36 np0005535469 systemd[1]: Started libpod-conmon-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope.
Nov 25 13:13:36 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.810902485 +0000 UTC m=+0.022825042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.919053308 +0000 UTC m=+0.130975845 container init 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.932922066 +0000 UTC m=+0.144844633 container start 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.93677035 +0000 UTC m=+0.148692907 container attach 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:13:36 np0005535469 great_moore[478584]: 167 167
Nov 25 13:13:36 np0005535469 systemd[1]: libpod-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope: Deactivated successfully.
Nov 25 13:13:36 np0005535469 conmon[478584]: conmon 77bb1c7a261d38ac48b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope/container/memory.events
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.940045689 +0000 UTC m=+0.151968266 container died 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 13:13:36 np0005535469 systemd[1]: var-lib-containers-storage-overlay-942ef13ac88d04b5005221c2c538b4afb85d4a35a07f7993d4c83c6451c58d10-merged.mount: Deactivated successfully.
Nov 25 13:13:36 np0005535469 podman[478567]: 2025-11-25 18:13:36.977513829 +0000 UTC m=+0.189436366 container remove 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 13:13:37 np0005535469 systemd[1]: libpod-conmon-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope: Deactivated successfully.
Nov 25 13:13:37 np0005535469 podman[478608]: 2025-11-25 18:13:37.185331554 +0000 UTC m=+0.056170370 container create 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 13:13:37 np0005535469 systemd[1]: Started libpod-conmon-8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f.scope.
Nov 25 13:13:37 np0005535469 podman[478608]: 2025-11-25 18:13:37.159198973 +0000 UTC m=+0.030037859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 13:13:37 np0005535469 systemd[1]: Started libcrun container.
Nov 25 13:13:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:37 np0005535469 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 13:13:37 np0005535469 podman[478608]: 2025-11-25 18:13:37.284552173 +0000 UTC m=+0.155391059 container init 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 13:13:37 np0005535469 podman[478608]: 2025-11-25 18:13:37.298176184 +0000 UTC m=+0.169015010 container start 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 13:13:37 np0005535469 podman[478608]: 2025-11-25 18:13:37.302159663 +0000 UTC m=+0.172998579 container attach 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 13:13:38 np0005535469 nova_compute[254092]: 2025-11-25 18:13:38.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]: {
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:    "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "osd_id": 1,
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "type": "bluestore"
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:    },
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:    "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "osd_id": 2,
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "type": "bluestore"
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:    },
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:    "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "osd_id": 0,
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:        "type": "bluestore"
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]:    }
Nov 25 13:13:38 np0005535469 confident_antonelli[478624]: }
Nov 25 13:13:38 np0005535469 systemd[1]: libpod-8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f.scope: Deactivated successfully.
Nov 25 13:13:38 np0005535469 podman[478608]: 2025-11-25 18:13:38.281959273 +0000 UTC m=+1.152798079 container died 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 13:13:38 np0005535469 systemd[1]: var-lib-containers-storage-overlay-97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947-merged.mount: Deactivated successfully.
Nov 25 13:13:38 np0005535469 podman[478608]: 2025-11-25 18:13:38.354223709 +0000 UTC m=+1.225062505 container remove 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 13:13:38 np0005535469 systemd[1]: libpod-conmon-8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f.scope: Deactivated successfully.
Nov 25 13:13:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 13:13:38 np0005535469 podman[478670]: 2025-11-25 18:13:38.393488247 +0000 UTC m=+0.063987732 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 13:13:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:13:38 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 13:13:38 np0005535469 podman[478658]: 2025-11-25 18:13:38.404223039 +0000 UTC m=+0.084485129 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 13:13:38 np0005535469 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:13:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 47e8b7c1-35fb-4c4c-97a4-fa00e1a6d2f1 does not exist
Nov 25 13:13:38 np0005535469 ceph-mgr[75280]: [progress WARNING root] complete: ev 85154e3c-0446-4bf2-83f8-4cdab9d54b5d does not exist
Nov 25 13:13:38 np0005535469 podman[478676]: 2025-11-25 18:13:38.443870598 +0000 UTC m=+0.109933132 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 13:13:38 np0005535469 nova_compute[254092]: 2025-11-25 18:13:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:38 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4525: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:13:39 np0005535469 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:13:39 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:13:39 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614377039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:13:39 np0005535469 nova_compute[254092]: 2025-11-25 18:13:39.978 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.178 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.179 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3534MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.179 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.179 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.317 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.349 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:13:40
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'backups']
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4526: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:13:40 np0005535469 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.2 total, 600.0 interval#012Cumulative writes: 45K writes, 183K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.77 writes per sync, written: 0.17 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:13:40 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 13:13:40 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254113131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.799 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.805 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.843 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.845 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 13:13:40 np0005535469 nova_compute[254092]: 2025-11-25 18:13:40.845 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:13:40 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:13:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 13:13:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 13:13:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 13:13:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 13:13:41 np0005535469 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 13:13:41 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:41 np0005535469 nova_compute[254092]: 2025-11-25 18:13:41.846 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:41 np0005535469 nova_compute[254092]: 2025-11-25 18:13:41.846 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:41 np0005535469 nova_compute[254092]: 2025-11-25 18:13:41.846 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 13:13:41 np0005535469 nova_compute[254092]: 2025-11-25 18:13:41.847 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 13:13:41 np0005535469 nova_compute[254092]: 2025-11-25 18:13:41.871 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 13:13:41 np0005535469 nova_compute[254092]: 2025-11-25 18:13:41.871 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:42 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4527: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:43 np0005535469 nova_compute[254092]: 2025-11-25 18:13:43.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:43 np0005535469 nova_compute[254092]: 2025-11-25 18:13:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:43 np0005535469 nova_compute[254092]: 2025-11-25 18:13:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 13:13:44 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4528: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:45 np0005535469 nova_compute[254092]: 2025-11-25 18:13:45.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:46 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:46 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4529: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:48 np0005535469 nova_compute[254092]: 2025-11-25 18:13:48.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:48 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4530: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:13:50 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8401.3 total, 600.0 interval#012Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.78 writes per sync, written: 0.18 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:13:50 np0005535469 nova_compute[254092]: 2025-11-25 18:13:50.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:50 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4531: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:50 np0005535469 systemd-logind[791]: New session 55 of user zuul.
Nov 25 13:13:50 np0005535469 systemd[1]: Started Session 55 of User zuul.
Nov 25 13:13:51 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 13:13:52 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4532: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:53 np0005535469 nova_compute[254092]: 2025-11-25 18:13:53.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:54 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23309 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:13:54 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23311 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:13:54 np0005535469 nova_compute[254092]: 2025-11-25 18:13:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:13:54 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4533: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:54 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 13:13:54 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601491089' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 13:13:55 np0005535469 nova_compute[254092]: 2025-11-25 18:13:55.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 13:13:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958131460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 13:13:55 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 13:13:55 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958131460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 13:13:56 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:13:56 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4534: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:57 np0005535469 ovs-vsctl[479109]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 13:13:58 np0005535469 nova_compute[254092]: 2025-11-25 18:13:58.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:13:58 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4535: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:13:58 np0005535469 virtqemud[253880]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 13:13:59 np0005535469 virtqemud[253880]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 13:13:59 np0005535469 virtqemud[253880]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 13:13:59 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: cache status {prefix=cache status} (starting...)
Nov 25 13:13:59 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: client ls {prefix=client ls} (starting...)
Nov 25 13:13:59 np0005535469 lvm[479444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 13:13:59 np0005535469 lvm[479444]: VG ceph_vg0 finished
Nov 25 13:13:59 np0005535469 lvm[479448]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 13:13:59 np0005535469 lvm[479448]: VG ceph_vg2 finished
Nov 25 13:14:00 np0005535469 lvm[479482]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 13:14:00 np0005535469 lvm[479482]: VG ceph_vg1 finished
Nov 25 13:14:00 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23319 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:00 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 13:14:00 np0005535469 nova_compute[254092]: 2025-11-25 18:14:00.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:14:00 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 13:14:00 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23321 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:00 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4536: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:14:00 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 13:14:00 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 13:14:00 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 13:14:01 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237477704' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 13:14:01 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 13:14:01 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23327 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:01 np0005535469 ceph-mgr[75280]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 13:14:01 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T18:14:01.323+0000 7f2d477a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:14:01 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175176537' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 13:14:01 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: ops {prefix=ops} (starting...)
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576232190' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 13:14:01 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295279610' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302530583' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755463129' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 13:14:02 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: session ls {prefix=session ls} (starting...)
Nov 25 13:14:02 np0005535469 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: status {prefix=status} (starting...)
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318995126' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 13:14:02 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4537: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:14:02 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23341 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 13:14:02 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407612323' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 13:14:03 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23345 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:03 np0005535469 nova_compute[254092]: 2025-11-25 18:14:03.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252542466' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911525299' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137598177' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 13:14:03 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332770137' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 13:14:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 13:14:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835828571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 13:14:04 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23357 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:04 np0005535469 ceph-mgr[75280]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 13:14:04 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T18:14:04.387+0000 7f2d477a6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 13:14:04 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23359 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:04 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4538: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:14:04 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 13:14:04 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089263276' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 13:14:04 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23363 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 13:14:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2307039187' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 13:14:05 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23367 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:05 np0005535469 nova_compute[254092]: 2025-11-25 18:14:05.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:14:05 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:05 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 13:14:05 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1013756043' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 13:14:06 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23373 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x55750c646400 session 0x55750d8dbc20
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 57892864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300589056 unmapped: 57827328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300597248 unmapped: 57819136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 57745408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 57655296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 57565184 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 57565184 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300875776 unmapped: 57540608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 57532416 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 57532416 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6601.5 total, 600.0 interval#012Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 230 writes, 420 keys, 230 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s#012Interval WAL: 230 writes, 108 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877039370' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 57376768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 57376768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 57253888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 57253888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.576416016s of 600.132080078s, submitted: 90
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 58507264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 58482688 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 58482688 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 58482688 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 58417152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 58392576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 58392576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 58384384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 58384384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 58384384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 58376192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 58376192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 58376192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 58359808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 58335232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 58335232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 58327040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 58294272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 58294272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 58286080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 58286080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 58253312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 58253312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 58253312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 58236928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 58236928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 58212352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 58204160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 58187776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 58187776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 58179584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 58179584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300261376 unmapped: 58155008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 58138624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 58056704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 58056704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 58048512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 57999360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 57999360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 57999360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 57950208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300597248 unmapped: 57819136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300597248 unmapped: 57819136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 57655296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 57655296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 57614336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 57581568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 57581568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 57581568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 57565184 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7201.5 total, 600.0 interval#012Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 57499648 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x55750e881000 session 0x55750cdc83c0
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc ms_handle_reset ms_handle_reset con 0x55750ddfd000
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc handle_mgr_configure stats_period=5
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 57393152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 57393152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 57393152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 57376768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x557511c03c00 session 0x55750bb781e0
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 596.978881836s of 597.216186523s, submitted: 90
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302284800 unmapped: 56131584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302333952 unmapped: 56082432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302350336 unmapped: 56066048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302350336 unmapped: 56066048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302407680 unmapped: 56008704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302407680 unmapped: 56008704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302465024 unmapped: 55951360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302489600 unmapped: 55926784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302530560 unmapped: 55885824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302530560 unmapped: 55885824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302530560 unmapped: 55885824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 55877632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 55877632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 55877632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302563328 unmapped: 55853056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302563328 unmapped: 55853056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302579712 unmapped: 55836672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 55754752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 55754752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 55713792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7801.5 total, 600.0 interval
Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 39K writes, 14K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 216 writes, 324 keys, 216 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
Interval WAL: 216 writes, 108 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 55484416 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 602.693115234s of 603.017700195s, submitted: 108
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 55451648 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 55451648 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 55443456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [0,1])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 55328768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 55312384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 55255040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 55255040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x557512508c00 session 0x55750bb783c0
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc ms_handle_reset ms_handle_reset con 0x55750e883400
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: mgrc handle_mgr_configure stats_period=5
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 55205888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 55205888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 55205888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 55181312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x55750de3dc00 session 0x55750e844b40
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 55091200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 55091200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 55083008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 55083008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 55083008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 55058432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 55058432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 55058432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 55050240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 55050240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 55050240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 55009280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 55009280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'config diff' '{prefix=config diff}'
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'config show' '{prefix=config show}'
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:06 np0005535469 ceph-osd[90994]: do_command 'log dump' '{prefix=log dump}'
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 13:14:06 np0005535469 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 13:14:06 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23377 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432690294' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 13:14:06 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4539: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:14:06 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23381 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 13:14:06 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933937080' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 13:14:07 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 13:14:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 13:14:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237594970' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 13:14:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:14:07 np0005535469 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8401.5 total, 600.0 interval#012Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:14:07 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23389 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 13:14:07 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 13:14:07 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264492742' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 13:14:08 np0005535469 nova_compute[254092]: 2025-11-25 18:14:08.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:14:08 np0005535469 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23397 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 13:14:08 np0005535469 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T18:14:08.430+0000 7f2d477a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 13:14:08 np0005535469 ceph-mgr[75280]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 13:14:08 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4540: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:14:08 np0005535469 podman[480860]: 2025-11-25 18:14:08.672470755 +0000 UTC m=+0.089225029 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:14:08 np0005535469 podman[480850]: 2025-11-25 18:14:08.672941317 +0000 UTC m=+0.089874436 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 13:14:08 np0005535469 podman[480861]: 2025-11-25 18:14:08.697044284 +0000 UTC m=+0.112192274 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 13:14:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 13:14:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859644948' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 13:14:08 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 13:14:08 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320324554' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 13:14:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 13:14:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814498474' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 13:14:09 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 13:14:09 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099905580' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677391739' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459415464' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200805878' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.496 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.497 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.516 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.524 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.524 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.524 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.526 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.526 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 25 13:14:10 np0005535469 nova_compute[254092]: 2025-11-25 18:14:10.526 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 13:14:10 np0005535469 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825372936' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 13:14:10 np0005535469 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4541: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 90734592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 90734592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 90734592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 90726400 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 90726400 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ddfa4c00 session 0x5618decc0780
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 90701824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 90693632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 90660864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 90660864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 90652672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 90652672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 90611712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 90595328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 90595328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 90587136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 90587136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 90570752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 90570752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 90562560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 90562560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 90562560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 90554368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 90554368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 90554368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 90546176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 90546176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 90505216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 90505216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 90497024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 90497024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 90497024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 90480640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 90480640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 90480640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 90439680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 90439680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 90439680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 90431488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 90431488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 90431488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 90382336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 90382336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 90365952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 90365952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 90357760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353214464 unmapped: 90341376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6601.2 total, 600.0 interval
Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 47K writes, 16K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 300 writes, 695 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
Interval WAL: 300 writes, 143 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 90316800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353263616 unmapped: 90292224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 90284032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 90284032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 90275840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 90243072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 90243072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 90226688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 90226688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 90226688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 90193920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 90193920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 90193920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 90177536 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 90177536 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 90177536 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 90169344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 90169344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 90169344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 90136576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 90136576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 90136576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 90120192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 90120192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 90087424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 90071040 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 90071040 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 90071040 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 90062848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 90062848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 90038272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 90038272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 90038272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353525760 unmapped: 90030080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353525760 unmapped: 90030080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 90021888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 90021888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.755249023s of 600.130493164s, submitted: 90
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 90013696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 90013696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 89931776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 89931776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 89931776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 89899008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 89899008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 89899008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353673216 unmapped: 89882624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353673216 unmapped: 89882624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353730560 unmapped: 89825280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353763328 unmapped: 89792512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 89784320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 89784320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 89784320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 89751552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 89751552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 89751552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353812480 unmapped: 89743360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 89718784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 89718784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 89718784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 89694208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353886208 unmapped: 89669632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 89661440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 89661440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 89661440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 89645056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 89636864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 89636864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 89636864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 89628672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 89628672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 89628672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 89604096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 89604096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 89604096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 89595904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 89595904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 89595904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 89587712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 89587712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 89587712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 89563136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 89563136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 89554944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 89538560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354025472 unmapped: 89530368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354025472 unmapped: 89530368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 89513984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 89513984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 89473024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 89473024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 89448448 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 89440256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 89440256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 89440256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 89423872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 89423872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 89407488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 89407488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 89382912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 89382912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 89382912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 89366528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 89350144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 89350144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 89350144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 89341952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 89341952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 89341952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 89333760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 89333760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 89333760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354254848 unmapped: 89300992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354254848 unmapped: 89300992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354254848 unmapped: 89300992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 89292800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 89292800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 89276416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 89260032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 89260032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 89260032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 89243648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 89243648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 89178112 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 89169920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 89169920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 89169920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 89137152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 89137152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 89137152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 89120768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 89120768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 89104384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 89088000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 89088000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 89088000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 89079808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 89063424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 89063424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 89038848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 89038848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 89022464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 89022464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 89006080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 89006080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7201.2 total, 600.0 interval#012Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 88973312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 88973312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 88948736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 88948736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618decf7800 session 0x5618ddf62960
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 88899584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: mgrc ms_handle_reset ms_handle_reset con 0x5618ded40c00
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: mgrc handle_mgr_configure stats_period=5
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ded13800 session 0x5618de7d85a0
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618e59c9c00 session 0x5618de7d81e0
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 88825856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 88825856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 88825856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 88809472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 88809472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 88793088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 88793088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354779136 unmapped: 88776704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354779136 unmapped: 88776704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 88743936 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 88711168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 88711168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 88694784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 88670208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 88670208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 88670208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ded41c00 session 0x5618dcf27a40
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 88653824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 88653824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 88637440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 596.947326660s of 597.180541992s, submitted: 90
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 88629248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 91742208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,1])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 91676672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 91676672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 13:14:10 np0005535469 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
